
HALCON/C
Reference Manual

HALCON Version 8.0.2

MVTec Software GmbH
This manual describes the operators of HALCON, version 8.0.2, in C syntax. It was generated on May 13, 2008.
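
As an illustration of this C syntax, a minimal sketch follows; it is not taken from the manual itself, it assumes the simple-mode signatures of the operators read_image and write_image (see chapter 2.1) and an example file name "monkey", and the exact parameter lists should be checked against the corresponding reference pages.

    /* Minimal sketch of the HALCON/C call style (assumed simple-mode
       signatures; see the reference pages of read_image and write_image). */
    #include "HalconC.h"

    int main (void)
    {
      Hobject image;                          /* iconic image object        */

      read_image (&image, "monkey");          /* read an image file;
                                                 "monkey" is an assumed
                                                 example file name          */
      write_image (image, "tiff", 0, "out");  /* write it to "out" as TIFF  */

      return 0;
    }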

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written
permission of the publisher.

Copyright © 1997-2008 by MVTec Software GmbH, München, Germany

More information about HALCON can be found at: http://www.mvtec.com


Contents

1 Classification 1
1.1 Gaussian-Mixture-Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
add_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
classify_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
clear_all_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
create_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
evaluate_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
get_params_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
get_prep_info_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
get_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
get_sample_num_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
train_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
write_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
write_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
clear_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
close_all_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
close_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
create_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
descript_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
enquire_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
enquire_reject_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
get_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
learn_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
learn_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
read_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
read_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
test_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
write_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
add_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
classify_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
clear_all_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
clear_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
clear_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
create_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
evaluate_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
get_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
get_prep_info_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
get_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
get_sample_num_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
read_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
read_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
train_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
write_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
write_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
add_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
classify_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
clear_all_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
clear_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
clear_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
create_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
get_params_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
get_prep_info_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
get_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
get_sample_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_support_vector_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
get_support_vector_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
read_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
read_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
reduce_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
train_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
write_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
write_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

2 File 61
2.1 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
read_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
read_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
write_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
delete_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
file_exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
list_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
read_world_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.3 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
read_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
write_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.4 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
close_all_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
close_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
fnew_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
fread_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
fread_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
fread_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
fwrite_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.5 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
read_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
write_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.6 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
read_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
read_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
read_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
read_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
write_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
write_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
write_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
write_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

3 Filter 87
3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
abs_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
add_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
div_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
invert_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
max_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
min_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
mult_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
scale_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
sqrt_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
sub_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_lshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
bit_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
bit_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
bit_rshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
bit_slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
bit_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
cfa_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
gen_principal_comp_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
linear_trans_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
principal_comp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
rgb1_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
rgb3_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
trans_from_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
trans_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
close_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
close_edges_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
derivate_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
diff_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
edges_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
edges_color_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
edges_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
edges_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
frei_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
frei_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
highpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
info_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
kirsch_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
kirsch_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
laplace_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
prewitt_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
prewitt_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
robinson_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
robinson_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
sobel_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
sobel_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
adjust_mosaic_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
coherence_enhancing_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
equ_histo_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
mean_curvature_flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
scale_image_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
shock_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
3.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
convol_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
convol_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
energy_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
fft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
fft_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
fft_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
gen_bandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
gen_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
gen_derivative_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
gen_filter_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
gen_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
gen_gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
gen_highpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
gen_lowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
gen_sin_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
gen_std_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
optimize_fft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
optimize_rft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
phase_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
phase_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
power_byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
power_ln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
power_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
read_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
rft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
write_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
3.7 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
affine_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
affine_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
gen_bundle_adjusted_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
gen_cube_map_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
gen_projective_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
gen_spherical_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
map_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
mirror_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
polar_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
polar_trans_image_ext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
polar_trans_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
projective_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
projective_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
rotate_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
zoom_image_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
zoom_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
harmonic_interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
inpainting_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
inpainting_ced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
inpainting_ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
inpainting_mcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
inpainting_texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
3.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
bandpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
lines_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
lines_facet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
lines_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
exhaustive_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
exhaustive_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
gen_gauss_pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
convol_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
expand_domain_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
gray_inside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
gray_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
lut_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
topographic_sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
3.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
add_noise_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
add_noise_white . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
gauss_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
noise_distribution_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
sp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
3.13 Optical-Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
optical_flow_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
unwarp_image_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
vector_field_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
3.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
corner_response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
dots_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
points_foerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
points_harris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
points_sojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
3.15 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
anisotrope_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
anisotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
binomial_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
eliminate_min_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
eliminate_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
fill_interlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
gauss_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
info_smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
isotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
mean_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
mean_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
mean_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
median_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
median_separate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
median_weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
midrange_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
rank_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
sigma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
smooth_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
trimmed_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
3.16 Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
deviation_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
entropy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
texture_laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
3.17 Wiener-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
gen_psf_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
gen_psf_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
simulate_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
simulate_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
wiener_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
wiener_filter_ni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

4 Graphics 301
4.1 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
drag_region1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
drag_region2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
drag_region3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
draw_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
draw_circle_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
draw_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
draw_ellipse_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
draw_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
draw_line_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
draw_nurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
draw_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
draw_nurbs_interp_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
draw_nurbs_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
draw_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
draw_point_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
draw_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
draw_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
draw_rectangle1_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
draw_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
draw_rectangle2_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
draw_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
draw_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
draw_xld_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
4.2 Gnuplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_open_pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
gnuplot_plot_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
gnuplot_plot_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
gnuplot_plot_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
4.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
disp_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
draw_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
get_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
get_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
get_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
query_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
set_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
set_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
write_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
4.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
get_mbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
get_mposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
get_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
query_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
set_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.5 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
disp_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
disp_arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
disp_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
disp_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
disp_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
disp_cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
disp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
disp_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
disp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
disp_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
disp_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
disp_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
disp_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
disp_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
disp_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
disp_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
4.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
get_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
get_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
get_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
get_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
get_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
get_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
get_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
get_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
get_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
get_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
get_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
get_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
get_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
get_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
get_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
query_all_colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
query_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
query_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
query_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
query_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
query_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
query_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
query_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
set_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
set_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
set_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
set_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
set_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
set_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
set_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
set_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
set_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
set_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
set_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
4.7 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_string_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
get_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
query_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
query_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
read_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
read_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
set_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
set_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
set_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
write_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
4.8 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
clear_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
copy_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
dump_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
dump_window_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
get_os_window_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
get_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
get_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
get_window_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
get_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
move_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
new_extern_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
open_textwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
query_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
set_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
set_window_dc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
set_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
slide_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431

5 Image 433
5.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
get_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
get_image_pointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
get_image_pointer1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
get_image_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
get_image_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
5.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
close_all_framegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
close_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
get_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
get_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
grab_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
grab_data_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
grab_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
grab_image_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
grab_image_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
info_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
open_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
set_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
set_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
5.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
access_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
append_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
channels_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
count_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
image_to_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
5.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
copy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
gen_image1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
gen_image1_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
gen_image1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
gen_image3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
gen_image_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
gen_image_gray_ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
gen_image_interleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
gen_image_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
gen_image_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
gen_image_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
region_to_bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
region_to_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
region_to_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
5.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
add_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
change_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
full_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
get_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
rectangle1_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
reduce_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
5.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
area_center_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
cooc_feature_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
cooc_feature_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
elliptic_axis_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
entropy_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
estimate_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
fit_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
fit_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
fuzzy_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
fuzzy_perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
gen_cooc_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
gray_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
gray_histo_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
gray_projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
histo_2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
min_max_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
moments_gray_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
plane_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
select_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
shape_histo_all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
shape_histo_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
5.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
change_format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
crop_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
crop_domain_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
crop_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
crop_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
tile_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
tile_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
tile_images_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
5.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
overpaint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
overpaint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
paint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
paint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
paint_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
set_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
5.9 Type-Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
complex_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
convert_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
real_to_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
real_to_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
vector_field_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527

6 Lines 529
6.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
approx_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
approx_chain_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
6.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
line_position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
partition_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
select_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
select_lines_longest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538

7 Matching 541
7.1 Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_all_component_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_all_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
clear_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
cluster_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
create_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
create_trained_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
find_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
gen_initial_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
get_component_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_component_model_tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
get_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
get_found_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
get_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
inspect_clustered_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
modify_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
read_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
read_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
train_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
write_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
write_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
7.2 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
clear_all_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
clear_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
create_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
find_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
get_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
get_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
read_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
set_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
write_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
7.3 Gray-Value-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
adapt_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
best_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
best_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
best_match_pre_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
best_match_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
best_match_rot_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
clear_all_templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
clear_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
create_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
create_template_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
fast_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
fast_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
read_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
set_offset_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
set_reference_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
write_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
7.4 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
clear_all_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
clear_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
create_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
create_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
create_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
determine_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
find_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
find_aniso_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
find_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
find_scaled_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
find_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
find_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
get_shape_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
get_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
get_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
inspect_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
read_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
set_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
write_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645

8 Matching-3D 647
affine_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
clear_all_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_all_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
convert_point_3d_cart_to_spher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
convert_point_3d_spher_to_cart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
create_cam_pose_look_at_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
create_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
find_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
get_object_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
get_shape_model_3d_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
get_shape_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
project_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
project_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
read_object_model_3d_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
read_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
trans_pose_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
write_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671

9 Morphology 673
9.1 Gray-Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
dual_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
gen_disc_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
gray_bothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
gray_closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
gray_closing_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
gray_closing_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
gray_dilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
gray_dilation_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
gray_dilation_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
gray_erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
gray_erosion_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
gray_erosion_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
gray_opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
gray_opening_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
gray_opening_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
gray_range_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
gray_tophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
read_gray_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
9.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
bottom_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
closing_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
closing_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
closing_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
dilation_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
dilation_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
dilation_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
dilation_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
erosion_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
erosion_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
erosion_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
erosion_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
gen_struct_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
golay_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
hit_or_miss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
hit_or_miss_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
hit_or_miss_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
minkowski_add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
minkowski_add2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
minkowski_sub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
minkowski_sub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
morph_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
morph_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
morph_skiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
opening_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
opening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
opening_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
opening_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
thickening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
thickening_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
thinning_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
thinning_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
top_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742

10 OCR 743
10.1 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_all_ocrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
create_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
do_ocr_multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
do_ocr_single . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
info_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
ocr_change_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
ocr_get_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
read_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
testd_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
traind_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
trainf_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
write_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
10.2 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_all_lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
create_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
import_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
inspect_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
lookup_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
suggest_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
10.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
clear_all_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
clear_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
create_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
do_ocr_multi_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
do_ocr_single_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
do_ocr_word_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
get_features_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
get_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
get_prep_info_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
read_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
trainf_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
write_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
10.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
clear_all_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
clear_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
create_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
do_ocr_multi_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
do_ocr_single_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
do_ocr_word_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
get_features_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
get_params_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
get_prep_info_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
get_support_vector_num_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
get_support_vector_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
read_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
reduce_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
trainf_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
write_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.5 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
segment_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
select_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
text_line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
text_line_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
10.6 Training-Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
append_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
concat_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
read_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
read_ocr_trainf_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
read_ocr_trainf_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
write_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
write_ocr_trainf_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799

11 Object 801
11.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
count_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
get_channel_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
get_obj_class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
test_equal_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
test_obj_def . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
11.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
concat_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
copy_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
gen_empty_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
integer_to_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
obj_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
select_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809

12 Regions 811
12.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
get_region_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
get_region_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
get_region_convex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
get_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
get_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
get_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
12.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
gen_checker_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
gen_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
gen_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
gen_empty_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
gen_grid_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
gen_random_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
gen_random_regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
gen_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
gen_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
gen_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
gen_region_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
gen_region_hline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
gen_region_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
gen_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
gen_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
gen_region_polygon_filled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
gen_region_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
gen_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
label_to_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
12.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
area_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
connect_and_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
diameter_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
elliptic_axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
euler_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
find_neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
get_region_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
get_region_thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
hamming_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
hamming_distance_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
inner_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
inner_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
moments_region_2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
moments_region_2nd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
moments_region_2nd_rel_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
moments_region_3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
moments_region_3rd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
moments_region_central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
moments_region_central_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
orientation_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
runlength_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
runlength_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
select_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
select_region_spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
select_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
select_shape_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
select_shape_std . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
smallest_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
smallest_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
smallest_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
spatial_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
12.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
affine_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
mirror_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
move_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
polar_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
polar_trans_region_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
projective_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
transpose_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
zoom_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
12.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
symm_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
12.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
test_equal_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
test_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
test_subset_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
12.7 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
background_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
clip_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
clip_region_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
distance_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
eliminate_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
expand_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
fill_up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
fill_up_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
hamming_change_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
junctions_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
merge_regions_line_scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
partition_dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
partition_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
rank_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
remove_noise_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
shape_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
sort_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
split_skeleton_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
split_skeleton_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914

13 Segmentation 917
13.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
add_samples_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
add_samples_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
add_samples_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
class_2dim_sup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
class_2dim_unsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
class_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
class_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
classify_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
classify_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
classify_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
learn_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
learn_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
13.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
detect_edge_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
hysteresis_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
nonmax_suppression_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 934
nonmax_suppression_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
13.3 Regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
expand_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
expand_gray_ref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
expand_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
regiongrowing_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
regiongrowing_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
13.4 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
auto_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
bin_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
char_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
check_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
dual_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
dyn_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
fast_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
histo_to_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
threshold_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
var_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
zero_crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
zero_crossing_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
13.5 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
critical_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
local_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
local_max_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
local_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
local_min_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
lowlands_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
plateaus_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
saddle_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
watersheds_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973

14 System 975
14.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
count_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
get_modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
reset_obj_db . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
14.2 Error-Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_error_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
query_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
set_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
14.3 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_chapter_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
get_operator_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
get_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
get_param_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
get_param_num . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
get_param_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
query_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
query_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
search_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
14.4 Operating-System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
count_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
system_call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
wait_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
14.5 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
check_par_hw_potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
load_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
store_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
14.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
14.7 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
clear_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
close_all_serials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
close_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
get_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
open_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
read_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
set_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
write_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
14.8 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
close_socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
get_next_socket_data_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
get_socket_descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
get_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
open_socket_accept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
open_socket_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
receive_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
receive_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
receive_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
receive_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
send_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
send_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
send_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
send_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
set_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
socket_accept_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020

15 Tools 1023
15.1 2D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
affine_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
affine_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1024
bundle_adjust_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
hom_mat2d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
hom_mat2d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
hom_mat2d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
hom_mat2d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
hom_mat2d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
hom_mat2d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031
hom_mat2d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
hom_mat2d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033
hom_mat2d_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
hom_mat2d_slant_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036
hom_mat2d_to_affine_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
hom_mat2d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
hom_mat2d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
hom_mat2d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
hom_mat3d_project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
hom_vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
proj_match_points_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
projective_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
projective_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
vector_angle_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
vector_field_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
vector_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1051
vector_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
vector_to_similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
15.2 3D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
affine_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
convert_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
create_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
get_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
hom_mat3d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
hom_mat3d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
hom_mat3d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
hom_mat3d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
hom_mat3d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
hom_mat3d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
hom_mat3d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1068
hom_mat3d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
hom_mat3d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
hom_mat3d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072
read_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
set_origin_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
write_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
15.3 Background-Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
close_all_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
close_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
create_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
get_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
give_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
run_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
set_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
update_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
15.4 Barcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
clear_all_bar_code_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
clear_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
create_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
find_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
get_bar_code_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090
get_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
get_bar_code_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
set_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
15.5 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
caltab_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
cam_mat_to_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
cam_par_to_cam_mat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
camera_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099
change_radial_distortion_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
change_radial_distortion_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
change_radial_distortion_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108
contour_to_world_plane_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
create_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1110
disp_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
find_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
find_marks_and_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
gen_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117
gen_image_to_world_plane_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
gen_radial_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
get_circle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124
get_line_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
get_rectangle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
hand_eye_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
image_points_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
image_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138
project_3d_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1140
radiometric_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
read_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
sim_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
stationary_camera_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
write_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
15.6 Datacode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
clear_all_data_code_2d_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
clear_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
create_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
find_data_code_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
get_data_code_2d_objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164
get_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
get_data_code_2d_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
query_data_code_2d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
read_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
set_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
write_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
15.7 Fourier-Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
abs_invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
fourier_1dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
fourier_1dim_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
match_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
move_contour_orig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
prep_contour_fourier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1188
15.8 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
abs_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
compose_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
create_funct_1d_array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
create_funct_1d_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
derivate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
distance_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
funct_1d_to_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
get_pair_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
get_y_value_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
integrate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
invert_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
local_min_max_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
match_funct_1d_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
negate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
num_points_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
read_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
sample_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
scale_y_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
smooth_funct_1d_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
smooth_funct_1d_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
transform_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
write_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
x_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
y_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
zero_crossings_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
15.9 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
angle_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
angle_lx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
distance_cc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
distance_cc_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
distance_lc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
distance_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
distance_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
distance_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
distance_pp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
distance_pr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
distance_ps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
distance_rr_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
distance_rr_min_dil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
distance_sc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
distance_sl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
distance_sr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
distance_ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
get_points_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
intersection_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
projection_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
15.10 Grid-Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
connect_grid_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
create_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
find_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
gen_arbitrary_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
gen_grid_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
15.11 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_circle_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_line_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
hough_line_trans_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
hough_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
hough_lines_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
select_matching_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
15.12 Image-Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
clear_all_variation_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
clear_train_data_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
clear_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
compare_ext_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
compare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
create_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
get_thresh_images_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
get_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
prepare_direct_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
prepare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
read_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
train_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
write_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
15.13 Kalman-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
filter_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
read_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
sensor_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
update_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
15.14 Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
close_all_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
close_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
fuzzy_measure_pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
fuzzy_measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
fuzzy_measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1260
gen_measure_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
gen_measure_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268
measure_projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1269
measure_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1270
reset_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
set_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
set_fuzzy_measure_norm_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
translate_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
15.15 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
close_all_ocvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
close_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
create_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
do_ocv_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
read_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
traind_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
write_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
15.16 Shape-from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
depth_from_focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
estimate_al_am . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
estimate_sl_al_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
estimate_sl_al_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
estimate_tilt_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
estimate_tilt_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
phot_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
select_grayvalues_from_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
sfs_mod_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
sfs_orig_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
sfs_pentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
shade_height_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
15.17 Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
binocular_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
binocular_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
binocular_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
disparity_to_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
disparity_to_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
distance_to_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
essential_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
gen_binocular_proj_rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
gen_binocular_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
intersect_lines_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1310
match_essential_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1311
match_fundamental_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
match_rel_pose_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
reconst3d_from_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
rel_pose_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
vector_to_essential_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1323
vector_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1325
vector_to_rel_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1328
15.18 Tools-Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
decode_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
decode_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1331
discrete_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
find_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
find_1d_bar_code_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
find_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
find_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
gen_1d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
gen_1d_bar_code_descr_gen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1346
gen_2d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
get_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
get_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
get_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
get_2d_bar_code_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357

16 Tuple 1359
16.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_acos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_asin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_atan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_div . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
tuple_exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
tuple_fabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_fmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_ldexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_log10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_max2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_min2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
tuple_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
tuple_mult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_neg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_pow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
tuple_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
tuple_sgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_sub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
tuple_tanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
16.2 Bit-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_bnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_bor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_bxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_lsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376
tuple_rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376
16.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
tuple_not_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
16.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_chr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_chrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_ord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_ords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
tuple_round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
tuple_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
16.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
tuple_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
tuple_gen_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
tuple_rand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
16.6 Element-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_sort_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
16.7 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
tuple_sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
16.8 Logical-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
tuple_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
16.9 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
tuple_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
tuple_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
tuple_select_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
tuple_select_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
tuple_str_bit_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
tuple_uniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
16.10 String-Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_regexp_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_regexp_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1400
tuple_regexp_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
tuple_regexp_test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
tuple_split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
tuple_str_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
tuple_str_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
tuple_strchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
tuple_strlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
tuple_strrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
tuple_strrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
tuple_strstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406

17 XLD 1409
17.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_lines_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
get_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
17.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
gen_contour_nurbs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
gen_contour_polygon_rounded_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1413
gen_contour_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1414
gen_contour_region_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
gen_contours_skeleton_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416
gen_cross_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
gen_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
gen_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
gen_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
gen_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
mod_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
17.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
area_center_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
area_center_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
circularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
compactness_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
contour_point_num_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
convexity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
diameter_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
dist_ellipse_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
dist_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
dist_rectangle2_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
eccentricity_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
eccentricity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1433
elliptic_axis_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1434
elliptic_axis_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
fit_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
fit_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
fit_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
fit_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
get_contour_angle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
get_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_contour_global_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_regress_params_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
info_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
length_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
local_max_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
max_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
moments_any_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
moments_any_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
moments_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
moments_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
orientation_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
orientation_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
query_contour_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
query_contour_global_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
select_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
select_shape_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1460
select_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1462
smallest_circle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
smallest_rectangle1_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1464
smallest_rectangle2_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1465
test_self_intersection_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1466
test_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
17.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
affine_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
affine_trans_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
gen_parallel_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
polar_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
polar_trans_contour_xld_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
projective_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
17.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
intersection_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
intersection_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478
symm_difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
symm_difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1480
union2_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1481
union2_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
17.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
add_noise_white_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
clip_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
close_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
combine_roads_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
crop_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
merge_cont_line_scan_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1487
regress_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
segment_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1489
shape_trans_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
smooth_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
sort_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493
split_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
union_adjacent_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
union_cocircular_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1496
union_collinear_contours_ext_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
union_collinear_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499
union_straight_contours_histo_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
union_straight_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503

Index 1505
Chapter 1

Classification

1.1 Gaussian-Mixture-Models
T_add_sample_class_gmm ( const Htuple GMMHandle,
const Htuple Features, const Htuple ClassID, const Htuple Randomize )

Add a training sample to the training data of a Gaussian Mixture Model.


add_sample_class_gmm adds a training sample to the Gaussian Mixture Model (GMM) given by
GMMHandle. The training sample is given by Features and ClassID. Features is the feature vector
of the sample, and consequently must be a real vector of length NumDim, as specified in create_class_gmm.
ClassID is the class of the sample, an integer between 0 and NumClasses-1 (set in create_class_gmm).
In the special case where the feature vectors are of integer type, they lie in the feature space on a grid with
step width 1.0. For example, the RGB feature vectors typically used for color classification are triples having
integer values between 0 and 255 for each of their components. In fact, several feature vectors may even represent
the same point. When training a GMM with such data, the training algorithm may tend to align the
modelled Gaussians along linearly dependent lines or planes of data that are parallel to the grid dimensions. If
the number of Centers returned by train_class_gmm is unusually high, this indicates such behavior of
the algorithm. The parameter Randomize can be used to handle such undesired effects. If Randomize > 0.0,
random Gaussian noise with mean 0 and standard deviation Randomize is added to each component of the
training data vectors, and the transformed training data is stored in the GMM. For values of Randomize ≤ 1.0,
the randomized data will look like small clouds around the grid points, which does not improve the properties of
the data cloud. For values of Randomize > 2.0, the randomization might have too strong an influence on the
resulting GMM. For integer feature vectors, a value of Randomize between 1.5 and 2.0 is recommended, which
transforms the integer data into homogeneous clouds without modifying its general form in the feature space. If
the data has been created from integer data by scaling, the same problem may occur. Here, Randomize must be
scaled with the same scale factor that was used to scale the original data.
Before the GMM can be trained with train_class_gmm, all training samples must be added to the GMM with
add_sample_class_gmm.
The number of currently stored training samples can be queried with get_sample_num_class_gmm. Stored
training samples can be read out again with get_sample_class_gmm.
Normally, it is useful to save the training samples in a file with write_samples_class_gmm. This facilitates
reusing the samples, adding new training samples to the data set if necessary, and hence training a newly created
GMM anew with the extended data set.
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector of the training sample to be stored.
. ClassID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Class of the training sample to be stored.


. Randomize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Standard deviation of the Gaussian noise added to the training data.
Default Value : 0.0
Suggested values : Randomize ∈ {0.0, 1.5, 2.0}
Restriction : Randomize ≥ 0.0
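As an illustration of the discussion above, the following fragment is a minimal, hypothetical sketch (not part of the
original manual text) that adds one RGB training sample with Randomize = 2.0 through the tuple interface. It assumes
that "HalconC.h" is included, that GMMHandle is a handle tuple previously returned by T_create_class_gmm, and that the
generic Htuple helper routines (create_tuple, set_d, set_i, destroy_tuple) of the HALCON/C interface are available.

/* Hypothetical sketch: add one RGB sample of class 0 with Randomize = 2.0. */
Htuple Features, ClassID, Randomize;
create_tuple (&Features, 3);            /* NumDim = 3 feature components     */
set_d (Features, 64.0, 0);              /* R value (integer-valued data)     */
set_d (Features, 128.0, 1);             /* G value                           */
set_d (Features, 200.0, 2);             /* B value                           */
create_tuple (&ClassID, 1);
set_i (ClassID, 0, 0);                  /* class label 0                     */
create_tuple (&Randomize, 1);
set_d (Randomize, 2.0, 0);              /* recommended for integer feature grids */
T_add_sample_class_gmm (GMMHandle, Features, ClassID, Randomize);
destroy_tuple (Features);
destroy_tuple (ClassID);
destroy_tuple (Randomize);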
Result
If the parameters are valid, the operator add_sample_class_gmm returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
add_sample_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm, add_samples_image_class_gmm
See also
clear_samples_class_gmm, get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation

T_classify_class_gmm ( const Htuple GMMHandle, const Htuple Features,
    const Htuple Num, Htuple *ClassID, Htuple *ClassProb, Htuple *Density,
    Htuple *KSigmaProb )

Calculate the class of a feature vector by a Gaussian Mixture Model.


classify_class_gmm computes the best Num classes of the feature vector Features with the Gaussian
Mixture Model (GMM) GMMHandle and returns the classes in ClassID and the corresponding probabilities
of the classes in ClassProb. Before calling classify_class_gmm, the GMM must be trained with
train_class_gmm.
classify_class_gmm corresponds to a call to evaluate_class_gmm and an additional step that extracts
the best Num classes. As described with evaluate_class_gmm, the output values of the GMM can
be interpreted as probabilities of the occurrence of the respective classes. However, here the posterior probability
ClassProb is further normalized as ClassProb = p(i|x)/p(x), where p(i|x) and p(x) are specified with
evaluate_class_gmm. In most cases it should be sufficient to use Num = 1 in order to decide whether the
probability of the best class is high enough. In some applications it may be interesting to also take the second best
class into account (Num = 2), particularly if it can be expected that the classes show a significant degree of overlap.
Density and KSigmaProb are explained with evaluate_class_gmm.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong *
Result of classifying the feature vector with the GMM.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Probability density of the feature vector.


. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Normalized k-sigma-probability for the feature vector.
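The following is a minimal, hypothetical C sketch (not part of the original text) of a classification call with Num = 1.
It assumes a trained GMM handle tuple and the generic Htuple helper routines (create_tuple, set_d, set_i, get_i,
destroy_tuple) of the HALCON/C interface, and simply returns the most probable class.

#include "HalconC.h"

/* Hypothetical helper: classify one 3-dimensional feature vector and
 * return the most probable class (GMMHandle must already be trained).  */
static Hlong classify_best_class (Htuple GMMHandle, double f0, double f1, double f2)
{
  Htuple Features, Num, ClassID, ClassProb, Density, KSigmaProb;
  Hlong  best;
  create_tuple (&Features, 3);
  set_d (Features, f0, 0);
  set_d (Features, f1, 1);
  set_d (Features, f2, 2);
  create_tuple (&Num, 1);
  set_i (Num, 1, 0);                     /* only the best class is requested   */
  T_classify_class_gmm (GMMHandle, Features, Num, &ClassID, &ClassProb,
                        &Density, &KSigmaProb);
  best = get_i (ClassID, 0);             /* class with the highest probability */
  destroy_tuple (Features);
  destroy_tuple (Num);
  destroy_tuple (ClassID);
  destroy_tuple (ClassProb);
  destroy_tuple (Density);
  destroy_tuple (KSigmaProb);
  return best;
}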
Result
If the parameters are valid, the operator classify_class_gmm returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
classify_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
evaluate_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

clear_all_class_gmm ( )
T_clear_all_class_gmm ( )

Clear all Gaussian Mixture Models.


clear_all_class_gmm clears all Gaussian Mixture Models (GMM) and frees all memory required for the
GMMs. After calling clear_all_class_gmm, no GMM can be used any longer.
Attention
clear_all_class_gmm exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. clear_all_class_gmm must not be used in any application.
Result
clear_all_class_gmm always returns H_MSG_TRUE.
Parallelization Information
clear_all_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_gmm, evaluate_class_gmm
Alternatives
clear_class_gmm
See also
create_class_gmm, read_class_gmm, write_class_gmm, train_class_gmm
Module
Foundation

clear_class_gmm ( Hlong GMMHandle )


T_clear_class_gmm ( const Htuple GMMHandle )

Clear a Gaussian Mixture Model.


clear_class_gmm clears the Gaussian Mixture Model (GMM) given by GMMHandle and frees all memory
required for the GMM. After calling clear_class_gmm, the GMM can no longer be used. The handle
GMMHandle becomes invalid.


Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
Result
If the parameters are valid, the operator clear_class_gmm returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
clear_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm, read_class_gmm, write_class_gmm, train_class_gmm
Module
Foundation

clear_samples_class_gmm ( Hlong GMMHandle )


T_clear_samples_class_gmm ( const Htuple GMMHandle )

Clear the training data of a Gaussian Mixture Model.


clear_samples_class_gmm clears all training samples that have been stored in the Gaussian Mixture
Model (GMM) GMMHandle. clear_samples_class_gmm should only be used if the GMM is trained
in the same process that uses the GMM for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm. In this case, the memory required for the training samples can be freed
with clear_samples_class_gmm, and hence memory can be saved. In the normal usage, in which the
GMM is trained offline and written to a file with write_class_gmm, it is typically unnecessary to call
clear_samples_class_gmm because write_class_gmm does not save the training samples, and hence
the online process, which reads the GMM with read_class_gmm, requires no memory for the training samples.
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
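A brief, hypothetical sketch (not from the original manual) of the in-process use case described above, using the plain
C bindings documented in this chapter; the parameter values are arbitrary, and the training step itself is only indicated
by a comment (see train_class_gmm for its parameters).

/* Hypothetical in-process workflow: train and classify in the same process. */
Hlong GMMHandle;
create_class_gmm (3, 2, 5, "spherical", "normalization", 3, 42, &GMMHandle);
/* ... add training samples with add_sample_class_gmm and train the GMM
 *     with train_class_gmm here ...                                         */
clear_samples_class_gmm (GMMHandle);  /* free the memory of the samples      */
/* ... evaluate_class_gmm / classify_class_gmm with the trained GMM ...      */
clear_class_gmm (GMMHandle);          /* finally release the GMM itself      */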
Result
If the parameters are valid, the operator clear_samples_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
clear_samples_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
train_class_gmm, write_samples_class_gmm
See also
create_class_gmm, clear_class_gmm, add_sample_class_gmm,
read_samples_class_gmm
Module
Foundation


create_class_gmm ( Hlong NumDim, Hlong NumClasses, Hlong NumCenters,
    const char *CovarType, const char *Preprocessing, Hlong NumComponents,
    Hlong RandSeed, Hlong *GMMHandle )

T_create_class_gmm ( const Htuple NumDim, const Htuple NumClasses,
    const Htuple NumCenters, const Htuple CovarType,
    const Htuple Preprocessing, const Htuple NumComponents,
    const Htuple RandSeed, Htuple *GMMHandle )

Create a Gaussian Mixture Model for classification.


create_class_gmm creates a Gaussian Mixture Model (GMM) for classification. NumDim specifies the number
of dimensions of the feature space, NumClasses specifies the number of classes. A GMM consists of
NumCenters Gaussian centers per class. NumCenters can either be the exact number of centers to be used
or, depending on the number of parameters passed, can specify upper and lower bounds for the number of centers:

exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second determines
the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: In alternation, every first parameter determines the minimum number of
centers per class and every second parameter determines the maximum number of centers per class.

When upper and lower bounds are specified, the optimum number of centers is determined with the help of
the Minimum Message Length (MML) criterion. In general, we recommend starting the training with (too) many
centers as the maximum and the expected number of centers as the minimum.
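Because the simple C binding shown above passes NumCenters as a single Hlong, lower and upper bounds have to be
passed through the tuple version T_create_class_gmm. The following hypothetical sketch (not part of the original text)
requests between 1 and 5 centers per class; the concrete parameter values as well as the Htuple helper routines
(create_tuple, set_i, set_s, destroy_tuple) of the HALCON/C interface are assumptions.

/* Hypothetical sketch: create a GMM with 1 to 5 centers per class (MML).   */
Htuple NumDim, NumClasses, NumCenters, CovarType, Preprocessing;
Htuple NumComponents, RandSeed, GMMHandle;
create_tuple (&NumDim, 1);        set_i (NumDim, 3, 0);
create_tuple (&NumClasses, 1);    set_i (NumClasses, 2, 0);
create_tuple (&NumCenters, 2);    /* [minimum, maximum] for all classes     */
set_i (NumCenters, 1, 0);
set_i (NumCenters, 5, 1);
create_tuple (&CovarType, 1);     set_s (CovarType, "full", 0);
create_tuple (&Preprocessing, 1); set_s (Preprocessing, "normalization", 0);
create_tuple (&NumComponents, 1); set_i (NumComponents, 3, 0);
create_tuple (&RandSeed, 1);      set_i (RandSeed, 42, 0);
T_create_class_gmm (NumDim, NumClasses, NumCenters, CovarType,
                    Preprocessing, NumComponents, RandSeed, &GMMHandle);
/* The input tuples can be released with destroy_tuple afterwards.          */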
Each center is described by the parameters center m_j, covariance matrix C_j, and mixing coefficient P_j. These
parameters are calculated from the training data by means of the Expectation Maximization (EM) algorithm. A GMM
can approximate an arbitrary probability density, provided that enough centers are being used. The covariance
matrices C_j have the dimensions NumDim × NumDim (NumComponents × NumComponents if preprocessing is
used) and are symmetric. Further constraints can be given by CovarType:
For CovarType = ’spherical’, C_j is a scalar multiple of the identity matrix, C_j = s_j^2 I. The center density
function p(x|j) is

    p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left( -\frac{\| x - m_j \|^2}{2 s_j^2} \right)

For CovarType = ’diag’, C_j is a diagonal matrix, C_j = diag(s_{j,1}^2, ..., s_{j,d}^2). The center density
function p(x|j) is

    p(x|j) = \frac{1}{(2\pi)^{d/2} \left( \prod_{i=1}^{d} s_{j,i}^2 \right)^{1/2}}
             \exp\left( -\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2} \right)

For CovarType = ’full’, C_j is a positive definite matrix. The center density function p(x|j) is

    p(x|j) = \frac{1}{(2\pi)^{d/2} |C_j|^{1/2}} \exp\left( -\frac{1}{2} (x - m_j)^T C_j^{-1} (x - m_j) \right)

The complexity of the calculations increases from CovarType = ’spherical’ through CovarType = ’diag’ to
CovarType = ’full’. At the same time, the flexibility of the centers increases. In general, ’spherical’ therefore
needs higher values for NumCenters than ’full’.
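To make this concrete, the number of free parameters of a single covariance matrix in d dimensions (d = NumDim, or
NumComponents if preprocessing is used) can be counted directly from the definitions above; this count is an added
remark and not part of the original text:

    N_{\text{spherical}} = 1 \;(s_j^2), \qquad
    N_{\text{diag}} = d \;(s_{j,1}^2, \ldots, s_{j,d}^2), \qquad
    N_{\text{full}} = \frac{d(d+1)}{2} \;(\text{symmetric } C_j)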
The procedure for using a GMM is as follows: First, a GMM is created with create_class_gmm. Then,
training vectors are added with add_sample_class_gmm; afterwards, they can be written to disk with
write_samples_class_gmm. With train_class_gmm, the classifier center parameters (defined above)
are determined. Furthermore, they can be saved with write_class_gmm for later classifications.
From the mixing probabilities P_j and the center density function p(x|j), the probability density function p(x) can
be calculated by:


    p(x) = \sum_{j=1}^{n_{comp}} P(j) \, p(x|j)

The probability density function p(x) can be evaluated with evaluate_class_gmm for a feature vector x.
classify_class_gmm sorts the p(x) and therefore determines the most probable class of the feature vector.
The parameters Preprocessing and NumComponents can be used to preprocess the training data and reduce
its dimensions. These parameters are explained in the description of the operator create_class_mlp.
create_class_gmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with train_class_gmm are reproducible, the seed value of the random number generator
is passed in RandSeed.
Parameter

. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of dimensions of the feature space.
Default Value : 3
Suggested values : NumDim ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumDim ≥ 1
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of classes of the GMM.
Default Value : 5
Suggested values : NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. NumCenters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of centers per class.
Default Value : 1
Suggested values : NumCenters ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30}
Restriction : NumCenters ≥ 1
. CovarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of the covariance matrices.
Default Value : "spherical"
List of values : CovarType ∈ {"spherical", "diag", "full"}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Seed value of the random number generator that is used to initialize the GMM with random values.
Default Value : 42
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; (Htuple .) Hlong *
GMM handle.
Example (Syntax: HDevelop)

* Classification with Gaussian Mixture Models


create_class_gmm (NumDim, NumClasses, [1,5], ’full’, ’none’, 0, 42, GMMHandle)
* Add the training data
for J := 0 to NData-1 by 1
Features := [...]
Class := [...]
add_sample_class_gmm (GMMHandle, Features, Class)
endfor
* Train the GMM
train_class_gmm (GMMHandle, 100, 0.001, ’training’, 0, Centers, Iter)
* Classify unknown data in ’Features’
classify_class_gmm (GMMHandle, Features, 1, ClassProb, Density, KSigmaProb)
clear_class_gmm (GMMHandle)

Result
If the parameters are valid, the operator create_class_gmm returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
create_class_gmm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_gmm, add_samples_image_class_gmm
Alternatives
create_class_mlp, create_class_svm, create_class_box
See also
clear_class_gmm, train_class_gmm, classify_class_gmm, evaluate_class_gmm,
classify_image_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

T_evaluate_class_gmm ( const Htuple GMMHandle, const Htuple Features,


Htuple *ClassProb, Htuple *Density, Htuple *KSigmaProb )

Evaluate a feature vector by a Gaussian Mixture Model.


evaluate_class_gmm computes three different probability values for a feature vector Features with the
Gaussian Mixture Model (GMM) GMMHandle.
The a-posteriori probability of class i for the sample Features is computed as

p(i|x) = \sum_{j=1}^{ncomp} P(j)\, p(x|j)

and returned for each class in ClassProb. The formulas for the calculation of the center density function p(x|j)
are described with create_class_gmm.
The probability density of the feature vector is computed as a sum of the posterior class probabilities weighted by the class priors

p(x) = \sum_{i=1}^{nclasses} Pr(i)\, p(i|x)

and is returned in Density. Here, Pr(i) are the prior probabilities of the classes as computed by
train_class_gmm. Density can be used for novelty detection, i.e., to reject feature vectors that do not
belong to any of the trained classes. However, since Density depends on the scaling of the feature vectors
and since Density is a probability density, and consequently does not need to lie between 0 and 1, the novelty
detection can typically be performed more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as a locus of points for which


(x - \mu)^T C^{-1} (x - \mu) = k^2

In the one-dimensional case this is the interval [µ − kσ, µ + kσ]. For any 1D Gaussian distribution, approximately
68% of the occurrences of the random variable lie within this range for k = 1, approximately 95% for k = 2, and
approximately 99% for k = 3. Hence, the probability that a Gaussian distribution generates a random variable
outside this range is approximately 32%, 5%, and 1%, respectively. This probability is called the k-sigma
probability and is denoted by P[k]. P[k] can be computed numerically for univariate as well as for
multivariate Gaussian distributions, where it should be noted that for the same values of k, P^(N)[k] > P^(N+1)[k]
(here N and (N+1) denote dimensions). For Gaussian mixture models the k-sigma probability is computed as:
P_{GMM}[x] = \sum_{j=1}^{ncomp} P(j)\, P_j[k_j], \qquad k_j^2 = (x - \mu_j)^T C_j^{-1} (x - \mu_j)

These probabilities are then weighted with the class priors, normalized, and returned for each class in KSigmaProb, such that

KSigmaProb[i] = \frac{Pr(i)}{Pr_{max}}\, P_{GMM}[x]

KSigmaProb can be used for novelty detection. Typically, feature vectors having values below 0.0001 should
be rejected. The parameter RejectionThreshold in classify_image_class_gmm is based on the
KSigmaProb values of the features.
Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.
The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the
corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead
of evaluate_class_gmm, because classify_class_gmm directly returns the class and corresponding
probability.
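The following C sketch shows one possible way to call T_evaluate_class_gmm and to use KSigmaProb for a simple novelty check. It is an illustration only and assumes the HALCON/C tuple helpers (create_tuple, set_d, get_d, length_tuple, destroy_tuple); the rejection threshold 0.0001 is the typical value mentioned above, and the feature values are placeholders.

#include "HalconC.h"

/* Sketch: evaluate one feature vector; return the most probable class,
 * or -1 if the k-sigma probability suggests the vector is "novel". */
static Hlong evaluate_feature(Htuple gmm_handle, const double *feat, Hlong n)
{
  Htuple features, class_prob, density, ksigma_prob;
  Hlong  i, num, best = 0;
  double prob, best_prob, ksigma;

  create_tuple(&features, n);
  for (i = 0; i < n; i++)
    set_d(features, feat[i], i);

  T_evaluate_class_gmm(gmm_handle, features, &class_prob, &density,
                       &ksigma_prob);

  /* the most probable class is the position of the maximum of ClassProb */
  best_prob = get_d(class_prob, 0);
  num = length_tuple(class_prob);
  for (i = 1; i < num; i++)
  {
    prob = get_d(class_prob, i);
    if (prob > best_prob) { best_prob = prob; best = i; }
  }

  /* novelty check: reject if no class reaches the typical threshold 0.0001 */
  ksigma = get_d(ksigma_prob, 0);
  num = length_tuple(ksigma_prob);
  for (i = 1; i < num; i++)
    if (get_d(ksigma_prob, i) > ksigma) ksigma = get_d(ksigma_prob, i);
  if (ksigma < 0.0001)
    best = -1;                     /* does not belong to any trained class */

  destroy_tuple(features);   destroy_tuple(class_prob);
  destroy_tuple(density);    destroy_tuple(ksigma_prob);
  return best;
}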
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Probability density of the feature vector.
. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator evaluate_class_gmm returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
evaluate_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
classify_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation


T_get_params_class_gmm ( const Htuple GMMHandle, Htuple *NumDim,


Htuple *NumClasses, Htuple *MinCenters, Htuple *MaxCenters,
Htuple *CovarType )

Return the parameters of a Gaussian Mixture Model.


get_params_class_gmm returns the parameters of a Gaussian Mixture Model (GMM) that were specified
when the GMM was created with create_class_gmm. This is particularly useful if the GMM was read with
read_class_gmm. The output of get_params_class_gmm can, for example, be used to check whether
the feature vectors and/or the target data to be used have appropriate dimensions to be used with GMM. For a
description of the parameters, see create_class_gmm.
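A typical use from C is to check, e.g. after read_class_gmm, that the dimensionality of the feature vectors to be classified matches the classifier. The following is a sketch only; it assumes the HALCON/C tuple helpers get_i and destroy_tuple.

#include "HalconC.h"

/* Sketch: verify that feature vectors of length 'expected_dim' fit the GMM. */
static int gmm_accepts_dimension(Htuple gmm_handle, Hlong expected_dim)
{
  Htuple num_dim, num_classes, min_centers, max_centers, covar_type;
  int    ok;

  T_get_params_class_gmm(gmm_handle, &num_dim, &num_classes,
                         &min_centers, &max_centers, &covar_type);
  ok = (get_i(num_dim, 0) == expected_dim);

  destroy_tuple(num_dim);     destroy_tuple(num_classes);
  destroy_tuple(min_centers); destroy_tuple(max_centers);
  destroy_tuple(covar_type);
  return ok;
}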
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. NumDim (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of dimensions of the feature space.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of classes of the GMM.
. MinCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Minimum number of centers per GMM class.
. MaxCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Maximum number of centers per GMM class.
. CovarType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Type of the covariance matrices.
Result
If the parameters are valid, the operator get_params_class_gmm returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
get_params_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
create_class_gmm, read_class_gmm
Possible Successors
add_sample_class_gmm, train_class_gmm
See also
evaluate_class_gmm, classify_class_gmm
Module
Foundation

T_get_prep_info_class_gmm ( const Htuple GMMHandle,


const Htuple Preprocessing, Htuple *InformationCont,
Htuple *CumInformationCont )

Compute the information content of the preprocessed feature vectors of a GMM.


get_prep_info_class_gmm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e., it
is computed solely based on the training data, independent of any error rate on the training data. The information
content is computed for all relevant components of the transformed feature vectors (NumComponents for ’princi-
pal_components’ and ’canonical_variates’, see create_class_gmm), and is returned in InformationCont
as a number between 0 and 1. To convert the information content into a percentage, it simply needs to be mul-
tiplied by 100. The cumulative information content of the first n components is returned in the n-th compo-
nent of CumInformationCont, i.e., CumInformationCont contains the sums of the first n elements of
InformationCont. To use get_prep_info_class_gmm, a sufficient number of samples must be added
to the GMM given by GMMHandle by using add_sample_class_gmm or read_samples_class_gmm.


InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_gmm. The call to get_prep_info_class_gmm al-
ready requires the creation of a GMM, and hence the setting of NumComponents in create_class_gmm
to an initial value. However, if get_prep_info_class_gmm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, a GMM with the maximum num-
ber for NumComponents is created (NumComponents for ’principal_components’ and ’canonical_variates’).
Then, the training samples are added to the GMM and are saved in a file using write_samples_class_gmm.
Subsequently, get_prep_info_class_gmm is used to determine the information content of the compo-
nents, and with this NumComponents. After this, a new GMM with the desired number of components is
created, and the training samples are read with read_samples_class_gmm. Finally, the GMM is trained
with train_class_gmm.
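In C, the selection of NumComponents from the cumulative information content reduces to finding the first index whose value reaches the desired fraction. The following is a sketch only, assuming the tuple helpers get_d and length_tuple; the 90% threshold is merely an example value.

#include "HalconC.h"

/* Sketch: choose NumComponents as the smallest n for which the first n
 * transformed components represent at least 'fraction' (e.g. 0.9) of the data.
 * 'cum_information_cont' is the CumInformationCont tuple returned by
 * T_get_prep_info_class_gmm. */
static Hlong select_num_components(Htuple cum_information_cont, double fraction)
{
  Hlong n, len = length_tuple(cum_information_cont);
  for (n = 0; n < len; n++)
    if (get_d(cum_information_cont, n) >= fraction)
      return n + 1;                /* components are counted from 1 */
  return len;                      /* fall back to all components   */
}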
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)

* Create the initial GMM


create_class_gmm (NDim, NClasses, 1, ’full’, ’principal_components’,
NDim, 42, GMMHandle)
* Generate and add the training data
for J := 0 to NData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_gmm (GMMHandle, Data, Class)
endfor
write_samples_class_gmm (GMMHandle, ’samples.gtf’)
* Compute the information content of the transformed features
get_prep_info_class_gmm (GMMHandle, ’principal_components’,
InformationCont, CumInformationCont)
* Determine NComp by inspecting InformationCont and CumInformationCont
* NComp = [...]
clear_class_gmm (GMMHandle)
* Create the actual GMM
create_class_gmm (NDim, NClasses, 1, ’full’, ’principal_components’,
NComp, 42, GMMHandle)
* Train the GMM
read_samples_class_gmm (GMMHandle, ’samples.gtf’)
train_class_gmm (GMMHandle, 200, 0.0001, ’training’, 0.0001, Centers, Iter)
write_class_gmm (GMMHandle, ’classifier.gmm’)
clear_class_gmm (GMMHandle)

Result
If the parameters are valid, the operator get_prep_info_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.


get_prep_info_class_gmm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
clear_class_gmm, create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

T_get_sample_class_gmm ( const Htuple GMMHandle,


const Htuple NumSample, Htuple *Features, Htuple *ClassID )

Return a training sample from the training data of a Gaussian Mixture Models (GMM).
get_sample_class_gmm reads out a training sample from the Gaussian Mixture Model (GMM) given by
GMMHandle that was stored with add_sample_class_gmm or add_samples_image_class_gmm.
The index of the sample is specified with NumSample. The index is counted from 0, i.e., NumSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_gmm. The training sample is returned in Features and ClassID. Features
is a feature vector of length NumDim, while ClassID is its class (see add_sample_class_gmm and
create_class_gmm).
get_sample_class_gmm can, for example, be used to reclassify the training data with
classify_class_gmm in order to determine which training samples, if any, are classified incorrectly.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. NumSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Index of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong *
Class of the training sample.
Example (Syntax: HDevelop)

create_class_gmm (2, 2, [1,10], ’spherical’, ’none’, 2, 42, GMMHandle)


read_samples_class_gmm (GMMHandle, ’samples.gsf’)
train_class_gmm (GMMHandle, 100, 1e-4, ’training’, 1e-4, Centers, Iter)
* Reclassify the training samples
get_sample_num_class_gmm (GMMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_gmm (GMMHandle, I, Features, Class)
classify_class_gmm (GMMHandle, Features, 2, ClassProb, Density,
KSigmaProb)
if (not (Class=ClassProb[0]))
* classified incorrectly
endif
endfor
clear_class_gmm (GMMHandle)


Result
If the parameters are valid, the operator get_sample_class_gmm returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
get_sample_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm,
get_sample_num_class_gmm
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm
Module
Foundation

get_sample_num_class_gmm ( Hlong GMMHandle, Hlong *NumSamples )


T_get_sample_num_class_gmm ( const Htuple GMMHandle,
Htuple *NumSamples )

Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
get_sample_num_class_gmm returns in NumSamples the number of training samples that are stored in the
Gaussian Mixture Model (GMM) given by GMMHandle. get_sample_num_class_gmm should be called
before the individual training samples are read out with get_sample_class_gmm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_gmm).
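The plain C signature shown above can be combined with T_get_sample_class_gmm in a loop over all stored samples. A minimal sketch; the reclassification step is only indicated by a comment, the tuple helpers are assumed as before, and it is assumed that the Hlong handle may be passed to the tuple version inside a one-element integer tuple.

#include "HalconC.h"

/* Sketch: iterate over all training samples stored in the GMM. */
static void visit_all_samples(Hlong gmm_handle)
{
  Hlong  num_samples, i;
  Htuple handle_t, index_t, features, class_id;

  get_sample_num_class_gmm(gmm_handle, &num_samples);

  create_tuple(&handle_t, 1); set_i(handle_t, gmm_handle, 0);
  for (i = 0; i < num_samples; i++)
  {
    create_tuple(&index_t, 1); set_i(index_t, i, 0);
    T_get_sample_class_gmm(handle_t, index_t, &features, &class_id);
    /* ... e.g. reclassify 'features' with classify_class_gmm here ... */
    destroy_tuple(index_t); destroy_tuple(features); destroy_tuple(class_id);
  }
  destroy_tuple(handle_t);
}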
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stored training samples.
Result
If the parameters are valid, the operator get_sample_num_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
get_sample_num_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm
Possible Successors
get_sample_class_gmm
See also
create_class_gmm
Module
Foundation

read_class_gmm ( const char *FileName, Hlong *GMMHandle )


T_read_class_gmm ( const Htuple FileName, Htuple *GMMHandle )

Read a Gaussian Mixture Model from a file.


read_class_gmm reads a Gaussian Mixture Model (GMM) that has been stored with write_class_gmm.
Since the training of a GMM can consume a relatively long time, the GMM is typically trained in an offline
process and written to a file with write_class_gmm. In the online process the GMM is read with


read_class_gmm and subsequently used for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm.
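In C, the online part of this workflow typically looks as follows. This is a sketch only; it assumes the HALCON/C header "HalconC.h", that the operator returns an Herror code that can be compared with H_MSG_TRUE, and the file name is a placeholder.

#include "HalconC.h"

/* Sketch: online part - load a previously trained GMM from disk.
 * The offline part would have used train_class_gmm and write_class_gmm. */
static Hlong load_gmm_or_abort(const char *file_name)
{
  Hlong  gmm_handle;
  Herror err;

  err = read_class_gmm(file_name, &gmm_handle);
  if (err != H_MSG_TRUE)
  {
    /* file missing or not a GMM classifier: handle the error here */
    return 0;   /* placeholder "invalid handle" value for this sketch */
  }
  /* gmm_handle can now be used with evaluate_class_gmm / classify_class_gmm */
  return gmm_handle;
}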
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *


File name.
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong *
GMM handle.
Result
If the parameters are valid, the operator read_class_gmm returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
read_class_gmm is processed completely exclusively without parallelization.
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm, write_class_gmm
Module
Foundation

read_samples_class_gmm ( Hlong GMMHandle, const char *FileName )


T_read_samples_class_gmm ( const Htuple GMMHandle,
const Htuple FileName )

Read the training data of a Gaussian Mixture Model from a file.


read_samples_class_gmm reads training samples from the file given by FileName and adds them to the
training samples that have already been stored in the Gaussian Mixture Model (GMM) given by GMMHandle.
The GMM must be created with create_class_gmm before calling read_samples_class_gmm. As
described with train_class_gmm and write_samples_class_gmm, read_samples_class_gmm,
add_sample_class_gmm, and write_samples_class_gmm can be used to build up a database of train-
ing samples, and hence to improve the performance of the GMM by retraining the GMM with extended data
sets.
It should be noted that the training samples must have the correct dimensionality. The feature vectors stored in
FileName must have the length NumDim that was specified with create_class_gmm, and enough classes
must have been created in create_class_gmm. If this is not the case, an error message is returned.
It is possible to read files of samples that were written with write_samples_class_svm or
write_samples_class_mlp.
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong


GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
Result
If the parameters are valid, the operator read_samples_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
read_samples_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm


Alternatives
add_sample_class_gmm
See also
write_samples_class_gmm, write_samples_class_mlp, clear_samples_class_gmm
Module
Foundation

T_train_class_gmm ( const Htuple GMMHandle, const Htuple MaxIter,


const Htuple Threshold, const Htuple ClassPriors,
const Htuple Regularize, Htuple *Centers, Htuple *Iter )

Train a Gaussian Mixture Model.


train_class_gmm trains the Gaussian Mixture Model (GMM) referenced by GMMHandle. Before the
GMM can be trained, all training samples to be used for the training must be stored in the GMM using
add_sample_class_gmm, add_samples_image_class_gmm, or read_samples_class_gmm.
After the training, new training samples can be added to the GMM and the GMM can be trained again. Only
the classes with newly added training vectors will be calculated again.
During the training, the error that results from the GMM applied to the training vectors will be minimized with the
expectation maximization (EM) algorithm.
MaxIter specifies the maximum number of iterations per class for the EM algorithm. In practice, values between
20 and 200 should be sufficient for most problems. Threshold specifies a threshold for the relative changes
of the error. If the relative change in error exceeds the threshold after MaxIter iterations, the algorithm will be
canceled for this class. Because the algorithm starts with the maximum specified number of centers (parameter
NumCenters in create_class_gmm), in case of a premature termination the number of centers and the error
for this class will not be optimal. In this case, a new training with different parameters (e.g. another value for
RandSeed in create_class_gmm) can be tried.
ClassPriors specifies the method of calculation of the class priors in GMM. If ’training’ is specified, the
priors of the classes are taken from the proportion of the corresponding sample data during training. If ’uniform’
is specified, the priors are set equal to 1/NumClasses for all classes.
Regularize is used to regularize (nearly) singular covariance matrices during the training. A covariance matrix
might collapse to singularity if it is trained with linearly dependent data. To avoid this, a small value specified by
Regularize is added to each main diagonal element of the covariance matrix, which prevents this element from
becoming smaller than Regularize. A recommended value for Regularize is 0.0001. If Regularize is
set to 0.0, no regularization is performed.
The centers are initially randomly distributed. In individual cases, relatively high errors will result from the al-
gorithm because the initial random values determined by RandSeed in create_class_gmm lead to local
minima. In this case, a new GMM with a different value for RandSeed should be generated to test whether a
significantly smaller error can be obtained.
It should be noted that, depending on the number of centers, the type of covariance matrix, and the number of
training samples, the training can take from a few seconds to several hours.
On output, train_class_gmm returns in Centers the number of centers per class that have been
found to be optimal by the EM algorithm. These values can be used as a reference in NumCenters (in
create_class_gmm) for future GMMs. If the number of centers found by training a new GMM on integer
training data is unexpectedly high, this can be corrected by adding noise to the training data (see the Randomize
parameter of add_sample_class_gmm). Iter contains the number of iterations performed per class. If a value in Iter
equals MaxIter, the training algorithm has been terminated prematurely (see above).
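For reference, the tuple-based C call corresponding to the HDevelop example further below might look like the following sketch. It makes the same assumptions about the HALCON/C tuple helpers as the earlier sketches, and it expects that the training samples have already been added to the GMM.

#include "HalconC.h"

/* Sketch: train a GMM whose training samples have already been added or read. */
static void train_gmm(Htuple gmm_handle)
{
  Htuple max_iter, threshold, class_priors, regularize, centers, iter;

  create_tuple(&max_iter, 1);     set_i(max_iter, 100, 0);
  create_tuple(&threshold, 1);    set_d(threshold, 0.001, 0);
  create_tuple(&class_priors, 1); set_s(class_priors, "training", 0);
  create_tuple(&regularize, 1);   set_d(regularize, 0.0001, 0);

  T_train_class_gmm(gmm_handle, max_iter, threshold, class_priors,
                    regularize, &centers, &iter);
  /* 'centers' now holds the number of centers found per class,
   * 'iter' the number of EM iterations performed per class. */

  destroy_tuple(max_iter);     destroy_tuple(threshold);
  destroy_tuple(class_priors); destroy_tuple(regularize);
  destroy_tuple(centers);      destroy_tuple(iter);
}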
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong


GMM handle.
. MaxIter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum number of iterations of the expectation maximization algorithm
Default Value : 100
Suggested values : MaxIter ∈ {10, 20, 30, 50, 100, 200}


. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double


Threshold for relative change of the error for the expectation maximization algorithm to terminate.
Default Value : 0.001
Suggested values : Threshold ∈ {0.001, 0.0001}
Restriction : (Threshold ≥ 0.0) ∧ (Threshold ≤ 1.0)
. ClassPriors (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Mode to determine the a-priori probabilities of the classes
Default Value : "training"
List of values : ClassPriors ∈ {"training", "uniform"}
. Regularize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Regularization value for preventing covariance matrix singularity.
Default Value : 0.0001
Restriction : (Regularize ≥ 0.0) ∧ (Regularize < 1.0)
. Centers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Number of centers found per class.
. Iter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Number of executed iterations per class
Example (Syntax: HDevelop)

create_class_gmm (NumDim, NumClasses, [1,5], ’full’, ’none’, 0, 42,


GMMHandle)
* Add the training data
read_samples_class_gmm (GMMHandle, ’samples.gsf’)
* Train the GMM
train_class_gmm (GMMHandle, 100, 1e-4, ’training’, 1e-4, Centers, Iter)
* Write the Gaussian Mixture Model to file
write_class_gmm (GMMHandle, ’gmmclassifier.gmm’)
clear_class_gmm (GMMHandle)

Result
If the parameters are valid, the operator train_class_gmm returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
train_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
evaluate_class_gmm, classify_class_gmm, write_class_gmm
Alternatives
read_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

write_class_gmm ( Hlong GMMHandle, const char *FileName )


T_write_class_gmm ( const Htuple GMMHandle, const Htuple FileName )

Write a Gaussian Mixture Model to a file.


write_class_gmm writes the Gaussian Mixture Model (GMM) GMMHandle to the file given by FileName.
write_class_gmm is typically called after the GMM has been trained with train_class_gmm. The GMM
can be read with read_class_gmm. write_class_gmm does not write any training samples that possibly
have been stored in the GMM. For this purpose, write_samples_class_gmm should be used.
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong


GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid, the operator write_class_gmm returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
write_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm
Possible Successors
clear_class_gmm
See also
create_class_gmm, read_class_gmm, write_samples_class_gmm
Module
Foundation

write_samples_class_gmm ( Hlong GMMHandle, const char *FileName )


T_write_samples_class_gmm ( const Htuple GMMHandle,
const Htuple FileName )

Write the training data of a Gaussian Mixture Model to a file.


write_samples_class_gmm writes the training samples stored in the Gaussian Mixture Model (GMM)
GMMHandle to the file given by FileName. write_samples_class_gmm can be used to build up a
database of training samples, and hence to improve the performance of the GMM by training it with an extended
data set (see train_class_gmm).
The file FileName is overwritten by write_samples_class_gmm. Nevertheless, extending the database
of training samples is easy because read_samples_class_gmm and add_sample_class_gmm add the
training samples to the training samples that are already stored in memory with the GMM.
The created file can be read with read_samples_class_mlp if a multilayer perceptron (MLP) is to be used
as the classifier instead. The class of a training sample in the GMM then corresponds to the component of the
MLP target vector that is set to 1.0.
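A common pattern is to keep the sample database in a file and extend it over time. The following sketch uses only the plain C signatures listed in this section; the file name is a placeholder.

#include "HalconC.h"

/* Sketch: extend an existing sample database on disk.
 * read_samples_class_gmm merges the stored samples into the GMM, new samples
 * are then added (e.g. with add_sample_class_gmm), and the combined set is
 * written back before retraining. */
static void extend_sample_database(Hlong gmm_handle, const char *sample_file)
{
  read_samples_class_gmm(gmm_handle, sample_file);
  /* ... add new training samples with add_sample_class_gmm here ... */
  write_samples_class_gmm(gmm_handle, sample_file);  /* overwrites the file */
}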
Parameter

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong


GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid, the operator write_samples_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
write_samples_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm


Possible Successors
clear_samples_class_gmm
See also
create_class_gmm, read_samples_class_gmm, read_samples_class_mlp,
write_samples_class_mlp
Module
Foundation

1.2 Hyperboxes

clear_sampset ( Hlong SampKey )


T_clear_sampset ( const Htuple SampKey )

Free memory of a data set.


clear_sampset frees the memory that was used for a training data set read by read_sampset. This
memory can only be reused in combination with read_sampset.
Parameter

. SampKey (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; Hlong


Number of the data set.
Result
clear_sampset returns H_MSG_TRUE. An exception handling is raised if the key SampKey does not exist.
Parallelization Information
clear_sampset is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, write_class_box
See also
test_sampset_box, learn_sampset_box, read_sampset
Module
Foundation

close_all_class_box ( )
T_close_all_class_box ( )

Destroy all classificators.


close_all_class_box deletes all classificators and frees the used memory space. All the trained data will be
lost.
Attention
close_all_class_box exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. close_all_class_box must not be used in any application.
Result
If it is possible to close the classificators the operator close_all_class_box returns the value
H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
close_all_class_box is local and processed completely exclusively without parallelization.
Alternatives
close_class_box
Module
Foundation


close_class_box ( Hlong ClassifHandle )


T_close_class_box ( const Htuple ClassifHandle )

Destroy the classificator.


close_class_box deletes the classificator and frees the memory it used. All trained data will be lost.
To save the trained data, call write_class_box beforehand.
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classificator’s handle number.
Result
close_class_box returns H_MSG_TRUE.
Parallelization Information
close_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, write_class_box
See also
create_class_box, enquire_class_box, learn_class_box
Module
Foundation

create_class_box ( Hlong *ClassifHandle )


T_create_class_box ( Htuple *ClassifHandle )

Create a new classificator.


create_class_box creates a new adaptive classificator. All procedures explained in this chapter on classification
refer to such an initialized classificator (of type 2). See learn_class_box for more details about the
functionality of the classificator.
Parameter
. ClassifHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong *
Classificator’s handle number.
Result
create_class_box returns H_MSG_TRUE if the parameter is correct. An exception handling is raised if a
classificator with this name already exists or there is not enough memory.
Parallelization Information
create_class_box is local and processed completely exclusively without parallelization.
Possible Successors
learn_class_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
learn_class_box, enquire_class_box, close_class_box
Module
Foundation

descript_class_box ( Hlong ClassifHandle, Hlong Dimensions )


T_descript_class_box ( const Htuple ClassifHandle,
const Htuple Dimensions )

Description of the classificator.


A classificator uses a set of hypercuboids for every class. These hypercuboids attempt to enclose the attribute
arrays (feature vectors) of the class. descript_class_box returns, for every class, the extent of each
corresponding cuboid in dimensions 1 up to Dimensions (to ’standard_output’).
Parameter

. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong


Classificator’s handle number.
. Dimensions (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Highest dimension for output.
Default Value : 3
Result
descript_class_box returns H_MSG_TRUE.
Parallelization Information
descript_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, set_class_box_param
Possible Successors
enquire_class_box, learn_class_box, write_class_box, close_class_box
See also
create_class_box, enquire_class_box, learn_class_box, read_class_box,
write_class_box
Module
Foundation

T_enquire_class_box ( const Htuple ClassifHandle,


const Htuple FeatureList, Htuple *Class )

Classify a tuple of attributes.


FeatureList is a tuple of floating point or integer numbers (attributes) that is to be assigned to a class
with the help of a previously trained (learn_class_box) classificator. Attributes can be marked as unknown
by passing the symbol ’*’ instead of a number. If you specify n values, all following values, i.e., the attributes
n+1 up to the maximum, are automatically assumed to be undefined.
See learn_class_box for more details about the functionality of the classificator.
You may call the procedures learn_class_box and enquire_class_box alternately, so that it is possible
to classify already during the learning phase. This way you can see when a satisfactory behavior has been reached.
Parameter

. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Htuple . Hlong


Classificator’s handle number.
. FeatureList (input_control) . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong / const char *
Array of attributes which has to be classified.
Default Value : 1.0
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of the class to which the array of attributes had been assigned.
Result
enquire_class_box returns H_MSG_TRUE.
Parallelization Information
enquire_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, set_class_box_param
Possible Successors
learn_class_box, write_class_box, close_class_box


Alternatives
enquire_reject_class_box
See also
test_sampset_box, learn_class_box, learn_sampset_box
Module
Foundation

T_enquire_reject_class_box ( const Htuple ClassifHandle,


const Htuple FeatureList, Htuple *Class )

Classify a tuple of attributes with rejection class.


FeatureList is a tuple of floating point or integer numbers (attributes) that is to be assigned to a class
with the help of a previously trained (learn_class_box) classificator. Attributes can be marked as unknown
by passing the symbol ’*’ instead of a number. If you specify n values, all following values, i.e., the attributes
n+1 up to the maximum, are automatically assumed to be undefined.
If the array of attributes cannot be assigned to a class, i.e., the array does not lie inside any of the hyperboxes,
then, in contrast to enquire_class_box, the nearest class is not returned; instead, the rejection class -1 is
returned.
See learn_class_box for more details about the functionality of the classificator.
You may call the procedures learn_class_box and enquire_class_box alternately, so that it is possible
to classify already during the learning phase. This way you can see when a satisfactory behavior has been reached.
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Htuple . Hlong
Classificator’s handle number.
. FeatureList (input_control) . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong / const char *
Array of attributes which has to be classified.
Default Value : 1.0
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of the class, to which the array of attributes had been assigned or -1 for the rejection class.
Result
enquire_reject_class_box returns H_MSG_TRUE.
Parallelization Information
enquire_reject_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, set_class_box_param
Possible Successors
learn_class_box, write_class_box, close_class_box
Alternatives
enquire_class_box
See also
test_sampset_box, learn_class_box, learn_sampset_box
Module
Foundation

get_class_box_param ( Hlong ClassifHandle, const char *Flag,


double *Value )

T_get_class_box_param ( const Htuple ClassifHandle, const Htuple Flag,


Htuple *Value )

Get information about the current parameter.


get_class_box_param returns the value of a parameter of the classificator. The meaning of the parameters is
explained with set_class_box_param.
Default values:
’min_samples_for_split’ = 80,
’split_error’ = 0.1,
’prop_constant’ = 0.25
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classificator’s handle number.
. Flag (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the system parameter.
Default Value : "split_error"
List of values : Flag ∈ {"split_error", "prop_constant", "used_memory", "min_samples_for_split"}
. Value (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double * / Hlong *
Value of the system parameter.
Result
get_class_box_param returns H_MSG_TRUE. An exception handling is raised if Flag has been set with
wrong values.
Parallelization Information
get_class_box_param is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, write_class_box
Possible Successors
set_class_box_param, learn_class_box, enquire_class_box, write_class_box,
close_class_box, clear_sampset
See also
create_class_box, set_class_box_param
Module
Foundation

T_learn_class_box ( const Htuple ClassifHandle, const Htuple Features,


const Htuple Class )

Train the classificator.


Features is a tuple of floating point or integer numbers (attributes) that is to be assigned to the class
Class. This class is specified by an integer. You may later use the procedure enquire_class_box to find the
most probable class for any array (= tuple) of attributes. The algorithm tries to describe the set of arrays of one
class by hypercuboids in the feature space. If necessary, several cuboids are created per class. Hence it is also
possible to learn disjunct concepts, i.e., concepts that split into several clusters of points in the feature space. The
data structure is hidden from the user and is only accessible through the procedures described in this chapter.
Attributes can be marked as unknown by passing the symbol ’*’ instead of a number. If you specify n values,
all following values, i.e., the attributes n+1 up to the maximum, are automatically assumed to be undefined.
You may call the procedures learn_class_box and enquire_class_box alternately, so that it is possible
to classify already during the learning phase. This way you can see when a satisfactory behavior has been reached.
The classificator grows with further training. Therefore, it is not advisable to continue training
after reaching a satisfactory behavior.
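From C, the tuple version T_learn_class_box can be used to pass a mixed attribute tuple that also contains ’*’ for unknown attributes. The following is a sketch under the usual assumptions about the HALCON/C tuple helpers (create_tuple, set_d, set_s, set_i, destroy_tuple); the attribute values are placeholders.

#include "HalconC.h"

/* Sketch: train one attribute array (third attribute unknown) as an
 * example of class 3. */
static void learn_one_sample(Htuple classif_handle)
{
  Htuple features, class_id;

  create_tuple(&features, 4);
  set_d(features, 1.0, 0);
  set_d(features, 25.3, 1);
  set_s(features, "*", 2);        /* unknown attribute */
  set_d(features, 17.0, 3);

  create_tuple(&class_id, 1);
  set_i(class_id, 3, 0);

  T_learn_class_box(classif_handle, features, class_id);

  destroy_tuple(features);
  destroy_tuple(class_id);
}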
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Htuple . Hlong
Classificator’s handle number.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong / const char *
Array of attributes to learn.
Default Value : [1.0,1.5,2.0]


. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Class to which the array has to be assigned.
Default Value : 1
Result
learn_class_box returns H_MSG_TRUE in the normal case. An exception handling is raised if there are
memory allocation problems. The number of classes is limited; if this limit is exceeded, an exception handling
is raised, too.
Parallelization Information
learn_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box
Possible Successors
test_sampset_box, learn_class_box, enquire_class_box, write_class_box,
close_class_box, clear_sampset
See also
test_sampset_box, close_class_box, create_class_box, enquire_class_box,
learn_sampset_box
Module
Foundation

learn_sampset_box ( Hlong ClassifHandle, Hlong SampKey,


const char *Outfile, Hlong NSamples, double StopError, Hlong ErrorN )

T_learn_sampset_box ( const Htuple ClassifHandle,


const Htuple SampKey, const Htuple Outfile, const Htuple NSamples,
const Htuple StopError, const Htuple ErrorN )

Train the classificator with one data set.


learn_sampset_box trains the classificator with the data set given by the key SampKey (see read_sampset). The
training sequence terminates after at most NSamples examples. If NSamples is larger than the number of
examples in SampKey, the sequence cyclically restarts at the beginning. If the error falls below the value StopError,
the training sequence is terminated early. The error is computed as N / ErrorN, where N is the number of
examples that were classified incorrectly during the last ErrorN training examples.
Typically, ErrorN is the number of examples in SampKey and NSamples is a multiple of it. If you want a data
set with 100 examples to run at most 5 times and training to terminate once the error is lower than 5%, the
corresponding values are NSamples = 500, ErrorN = 100, and StopError = 0.05. A protocol of the training
activity is written to the file Outfile.
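In C, the numbers from this example (100 samples, at most 5 passes, stop below 5% error) translate directly to the plain signatures given in this section. A sketch; the file names are placeholders.

#include "HalconC.h"

/* Sketch: train a class box from a sample file, at most 5 passes over
 * 100 examples, stopping early when the error drops below 5%. */
static void train_from_sampset(void)
{
  Hlong  classif_handle, samp_key;
  double error;

  create_class_box(&classif_handle);
  read_sampset("sampset1", &samp_key);

  /* NSamples = 500, StopError = 0.05, ErrorN = 100 (see text above) */
  learn_sampset_box(classif_handle, samp_key, "training_prot",
                    500, 0.05, 100);

  /* optional: check the error on the same (or an independent) sample set */
  test_sampset_box(classif_handle, samp_key, &error);

  clear_sampset(samp_key);
  close_class_box(classif_handle);
}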
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classificator’s handle number.
. SampKey (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; Hlong
Number of the data set to train.
. Outfile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the protocol file.
Default Value : "training_prot"
. NSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of arrays of attributes to learn.
Default Value : 500
. StopError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Classification error for termination.
Default Value : 0.05
. ErrorN (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of preceding training examples over which the classification error is computed.
Default Value : 100


Result
learn_sampset_box returns H_MSG_TRUE. An exception handling is raised if key SampKey does not exist
or there are problems while opening the file.
Parallelization Information
learn_sampset_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box
Possible Successors
test_sampset_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
test_sampset_box, enquire_class_box, learn_class_box, read_sampset
Module
Foundation

read_class_box ( Hlong ClassifHandle, const char *FileName )


T_read_class_box ( const Htuple ClassifHandle, const Htuple FileName )

Read the classificator from a file.


read_class_box reads the saved classificator from the file FileName (see write_class_box). The
values of the current classificator are overwritten.
Attention
All values of the classificator are going to be overwritten.
Parameter

. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong


Classificator’s handle number.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Filename of the classificators.
Default Value : "klassifikator1"
Result
read_class_box returns H_MSG_TRUE. An exception handling is raised if it was not possible to open file
FileName or the file has the wrong format.
Parallelization Information
read_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box
Possible Successors
test_sampset_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
create_class_box, write_class_box
Module
Foundation

read_sampset ( const char *FileName, Hlong *SampKey )


T_read_sampset ( const Htuple FileName, Htuple *SampKey )

Read a training data set from a file.


The training examples are accessible via the key SampKey by calling the procedures clear_sampset and
learn_sampset_box. You may edit the file with a text editor. Every row contains an array of attributes with
its corresponding class, for example:
(1.0, 25.3, *, 17 | 3)
This row specifies an array of attributes that belongs to class 3. In this array the third attribute is unknown.
Attributes from the fifth onwards are assumed to be unknown as well. Comments of the form /* .. */ may be
inserted anywhere in the file.
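For illustration, a small sample file and the corresponding C call might look as follows. This is a sketch; the file name and the attribute values are made up, and the C signature is the plain one given above.

#include "HalconC.h"

/* Example contents of "sampset1" -- one attribute array with its class per
 * row; '*' marks an unknown attribute:
 *   (1.0, 25.3, *, 17 | 3)
 *   (0.7, 30.1, 2.5, 12 | 1)
 */
static void load_sampset(Hlong *samp_key)
{
  read_sampset("sampset1", samp_key);
  /* samp_key can now be passed to learn_sampset_box or test_sampset_box */
}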
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Filename of the data set to train.
Default Value : "sampset1"
. SampKey (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; Hlong *
Identification of the data set to train.
Result
read_sampset returns H_MSG_TRUE. An exception handling is raised if it is not possible to open the file or
it contains syntax errors or there is not enough memory.
Parallelization Information
read_sampset is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box
Possible Successors
test_sampset_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
test_sampset_box, clear_sampset, learn_sampset_box
Module
Foundation

set_class_box_param ( Hlong ClassifHandle, const char *Flag,


double Value )

T_set_class_box_param ( const Htuple ClassifHandle, const Htuple Flag,


const Htuple Value )

Set system parameters for classification.


set_class_box_param modifies parameters that control the training sequence during calls to
learn_class_box. Only the parameters of the given classificator are modified; all other classificators remain
unchanged. ’min_samples_for_split’ is the minimum number of examples that must have been trained in one
cuboid of this classificator before the cuboid is allowed to split. ’split_error’ specifies the critical error: if it is
exceeded and more than ’min_samples_for_split’ examples have been trained, the cuboid splits. ’prop_constant’
controls the extent of the cuboids, which is proportional to the average distance of the training examples in the
cuboid from its center. More precisely:
extent × prop_constant = average distance from the expectation value.
This relation holds in every dimension. Hence, inside a cuboid the dimensions of the feature space are assumed
to be independent.
The parameters are set to problem-independent default values, which should not be modified without reason.
The parameters are only relevant during a learning sequence; they do not influence the behavior of
enquire_class_box.
Default setting:
’min_samples_for_split’ = 80,
’split_error’ = 0.1,
’prop_constant’ = 0.25
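The parameters can be changed and inspected with the plain C calls documented here; the values in the following sketch are illustrative only.

#include "HalconC.h"

/* Sketch: lower the split error before training and read it back. */
static void tune_class_box(Hlong classif_handle)
{
  double split_error;

  set_class_box_param(classif_handle, "split_error", 0.05);
  get_class_box_param(classif_handle, "split_error", &split_error);
  /* split_error now holds 0.05; subsequent learn_class_box calls use it */
}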


Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classificator’s handle number.
. Flag (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the wanted parameter.
Default Value : "split_error"
Suggested values : Flag ∈ {"min_samples_for_split", "split_error", "prop_constant"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value of the parameter.
Default Value : 0.1
Result
set_class_box_param returns H_MSG_TRUE.
Parallelization Information
set_class_box_param is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box
Possible Successors
learn_class_box, test_sampset_box, write_class_box, close_class_box,
clear_sampset
See also
enquire_class_box, get_class_box_param, learn_class_box
Module
Foundation

test_sampset_box ( Hlong ClassifHandle, Hlong SampKey, double *Error )


T_test_sampset_box ( const Htuple ClassifHandle, const Htuple SampKey,
Htuple *Error )

Classify a set of arrays.


In contrast to learn_sampset_box, no learning takes place here. Typically, test_sampset_box is used to
classify independent test data. Error indicates how well the trained classificator generalizes to new examples.
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classificator’s handle number.
. SampKey (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; Hlong
Key of the test data.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Error during the assignment.
Result
test_sampset_box returns H_MSG_TRUE. An exception handling is raised if the key SampKey does not
exist or problems occur while opening the file.
Parallelization Information
test_sampset_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, set_class_box_param
Possible Successors
enquire_class_box, learn_class_box, write_class_box, close_class_box,
clear_sampset
See also
enquire_class_box, learn_class_box, learn_sampset_box, read_sampset


Module
Foundation

write_class_box ( Hlong ClassifHandle, const char *FileName )


T_write_class_box ( const Htuple ClassifHandle, const Htuple FileName )

Save the classifier in a file.


write_class_box saves the classifier in a file. The data can be read again by calling read_class_box.
Attention
If a file with this name already exists, it is overwritten without warning. The file cannot be edited.
Parameter

. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong


Classifier handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the file which contains the written data.
Default Value : "klassifikator1"
Result
write_class_box returns H_MSG_TRUE. An exception handling is raised if it was not possible to open file
FileName.
Parallelization Information
write_class_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, test_sampset_box,
write_class_box
Possible Successors
close_class_box, clear_sampset
See also
create_class_box, read_class_box
Module
Foundation

1.3 Neural-Nets
T_add_sample_class_mlp ( const Htuple MLPHandle,
const Htuple Features, const Htuple Target )

Add a training sample to the training data of a multilayer perceptron.


add_sample_class_mlp adds a training sample to the multilayer perceptron (MLP) given by MLPHandle.
The training sample is given by Features and Target. Features is the feature vector of the sample, and
consequently must be a real vector of length NumInput, as specified in create_class_mlp. Target is
the target vector of the sample, which must have the length NumOutput (see create_class_mlp) for all
three types of activation functions of the MLP (exception: see below). If the MLP is used for regression (function
approximation), i.e., if OutputFunction = ’linear’, Target is the value of the function at the coordinate
Features. In this case, Target can contain arbitrary real numbers. For OutputFunction = ’logistic’,
Target can only contain the values 0.0 and 1.0. A value of 1.0 specifies that the attribute in question is present,
while a value of 0.0 specifies that the attribute is absent. Because in this case the attributes are independent,
arbitrary combinations of 0.0 and 1.0 can be passed. For OutputFunction = ’softmax’, Target also can only
contain the values 0.0 and 1.0. In contrast to OutputFunction = ’logistic’, the value 1.0 must be present for
exactly one element of the tuple Target. The location in the tuple designates the class of the sample. For ease of
use, a single integer value may be passed if OutputFunction = ’softmax’. This value directly designates the
class of the sample, which is counted from 0, i.e., the class must be an integer between 0 and NumOutput − 1.
The class is converted to a target vector of length NumOutput internally.
Before the MLP can be trained with train_class_mlp, all training samples must be added to the MLP with
add_sample_class_mlp.
The number of currently stored training samples can be queried with get_sample_num_class_mlp. Stored
training samples can be read out again with get_sample_class_mlp.
Normally, it is useful to save the training samples in a file with write_samples_class_mlp so that the
samples can be reused, new training samples can be added to the data set if necessary, and a newly created MLP
can then be trained with the extended data set.
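A minimal HDevelop sketch for OutputFunction = ’softmax’; the network dimensions and feature values are
illustrative assumptions only:
* MLP with 3 input features and 2 mutually exclusive classes
create_class_mlp (3, 5, 2, 'softmax', 'normalization', 3, 42, MLPHandle)
* The class can be passed as a single integer (here: class 1) ...
add_sample_class_mlp (MLPHandle, [0.2,1.7,4.3], 1)
* ... or, equivalently, as a target vector of length NumOutput
add_sample_class_mlp (MLPHandle, [0.5,1.9,3.8], [0.0,1.0])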
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong


MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector of the training sample to be stored.
. Target (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . Hlong / double
Class or target vector of the training sample to be stored.
Result
If the parameters are valid, the operator add_sample_class_mlp returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
add_sample_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
clear_samples_class_mlp, get_sample_num_class_mlp, get_sample_class_mlp
Module
Foundation

T_classify_class_mlp ( const Htuple MLPHandle, const Htuple Features,


const Htuple Num, Htuple *Class, Htuple *Confidence )

Calculate the class of a feature vector by a multilayer perceptron.


classify_class_mlp computes the best Num classes of the feature vector Features with the multilayer
perceptron (MLP) MLPHandle and returns the classes in Class and the corresponding confidences (probabili-
ties) of the classes in Confidence. Before calling classify_class_mlp, the MLP must be trained with
train_class_mlp.
classify_class_mlp can only be called if the MLP is used as a classifier with OutputFunction = ’soft-
max’ (see create_class_mlp). Otherwise, an error message is returned. classify_class_mlp cor-
responds to a call to evaluate_class_mlp and an additional step that extracts the best Num classes. As
described with evaluate_class_mlp, the output values of the MLP can be interpreted as probabilities of the
occurrence of the respective classes. However, here the posterior probability ClassProb is further normalized as
ClassProb = p(i|x)/p(x), where p(i|x) and p(x) are defined as in evaluate_class_gmm. In most cases
it should be sufficient to use Num = 1 in order to decide whether the probability of the best class is high enough.
In some applications it may be interesting to also take the second best class into account (Num = 2), particularly if
it can be expected that the classes show a significant degree of overlap.
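A minimal HDevelop sketch that uses Num = 2 to check whether the decision is ambiguous; the threshold of 0.2
is an illustrative choice, not prescribed by the operator:
classify_class_mlp (MLPHandle, Features, 2, Class, Confidence)
if (Confidence[0] - Confidence[1] < 0.2)
    * the two best classes have similar confidence; treat the result as uncertain
endif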
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong *
Result of classifying the feature vector with the MLP.
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Confidence(s) of the class(es) of the feature vector.
Result
If the parameters are valid, the operator classify_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
classify_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
evaluate_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

clear_all_class_mlp ( )
T_clear_all_class_mlp ( )

Clear all multilayer perceptrons.


clear_all_class_mlp clears all multilayer perceptrons (MLP) and frees all memory required for the MLPs.
After calling clear_all_class_mlp, no MLP can be used any longer.
Attention
clear_all_class_mlp exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. clear_all_class_mlp must not be used in any application.
Result
clear_all_class_mlp always returns H_MSG_TRUE.
Parallelization Information
clear_all_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_mlp, evaluate_class_mlp
Alternatives
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_class_mlp, train_class_mlp
Module
Foundation

clear_class_mlp ( Hlong MLPHandle )


T_clear_class_mlp ( const Htuple MLPHandle )

Clear a multilayer perceptron.


clear_class_mlp clears the multilayer perceptron (MLP) given by MLPHandle and frees all memory re-
quired for the MLP. After calling clear_class_mlp, the MLP can no longer be used. The handle MLPHandle
becomes invalid.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
Result
If MLPHandle is valid, the operator clear_class_mlp returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
clear_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp, read_class_mlp, write_class_mlp, train_class_mlp
Module
Foundation

clear_samples_class_mlp ( Hlong MLPHandle )


T_clear_samples_class_mlp ( const Htuple MLPHandle )

Clear the training data of a multilayer perceptron.


clear_samples_class_mlp clears all training samples that have been added to the multilayer
perceptron (MLP) MLPHandle with add_sample_class_mlp or read_samples_class_mlp.
clear_samples_class_mlp should only be used if the MLP is trained in the same process that uses the
MLP for evaluation with evaluate_class_mlp or for classification with classify_class_mlp. In
this case, the memory required for the training samples can be freed with clear_samples_class_mlp,
and hence memory can be saved. In the normal usage, in which the MLP is trained offline and written to a
file with write_class_mlp, it is typically unnecessary to call clear_samples_class_mlp because
write_class_mlp does not save the training samples, and hence the online process, which reads the MLP
with read_class_mlp, requires no memory for the training samples.
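A minimal HDevelop sketch of the first scenario, i.e., training and classification in the same process; the training
parameters are illustrative assumptions only:
* Train the MLP, then free the memory occupied by the training samples
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
clear_samples_class_mlp (MLPHandle)
* The trained MLP remains usable for classification
classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)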
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
Result
If the parameters are valid, the operator clear_samples_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
clear_samples_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
train_class_mlp, write_samples_class_mlp
See also
create_class_mlp, clear_class_mlp, add_sample_class_mlp,
read_samples_class_mlp
Module
Foundation

create_class_mlp ( Hlong NumInput, Hlong NumHidden, Hlong NumOutput,


const char *OutputFunction, const char *Preprocessing,
Hlong NumComponents, Hlong RandSeed, Hlong *MLPHandle )

T_create_class_mlp ( const Htuple NumInput, const Htuple NumHidden,


const Htuple NumOutput, const Htuple OutputFunction,
const Htuple Preprocessing, const Htuple NumComponents,
const Htuple RandSeed, Htuple *MLPHandle )

Create a multilayer perceptron for classification or regression.


create_class_mlp creates a neural net in the form of a multilayer perceptron (MLP), which can be used for
classification or regression (function approximation), depending on how OutputFunction is set. The MLP
consists of three layers: an input layer with NumInput input variables (units, neurons), a hidden layer with
NumHidden units, and an output layer with NumOutput output variables. The MLP performs the following
steps to calculate the activations zj of the hidden units from the input data xi (the so-called feature vector):

a_j^{(1)} = \sum_{i=1}^{n_i} w_{ji}^{(1)} x_i + b_j^{(1)} ,   j = 1, \ldots, n_h

z_j = \tanh( a_j^{(1)} ) ,   j = 1, \ldots, n_h

Here, the matrix w_{ji}^{(1)} and the vector b_j^{(1)} are the weights of the input layer (first layer) of the MLP.
In the hidden layer (second layer), the activations z_j are transformed in a first step by using linear combinations
of the variables in an analogous manner as above:

a_k^{(2)} = \sum_{j=1}^{n_h} w_{kj}^{(2)} z_j + b_k^{(2)} ,   k = 1, \ldots, n_o

Here, the matrix w_{kj}^{(2)} and the vector b_k^{(2)} are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting OutputFunction. For
OutputFunction = ’linear’, the data are simply copied:

y_k = a_k^{(2)} ,   k = 1, \ldots, n_o

This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For OutputFunction = ’logistic’, the activations are computed as follows:

y_k = \frac{1}{1 + \exp( -a_k^{(2)} )} ,   k = 1, \ldots, n_o

This type of activation function should be used for classification problems with multiple (NumOutput) indepen-
dent logical attributes as output. This kind of classification problem is relatively rare in practice.
For OutputFunction = ’softmax’, the activations are computed as follows:

y_k = \frac{\exp( a_k^{(2)} )}{\sum_{l=1}^{n_o} \exp( a_l^{(2)} )} ,   k = 1, \ldots, n_o

This type of activation function should be used for common classification problems with multiple (NumOutput)
mutually exclusive classes as output. In particular, OutputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with classify_image_class_mlp.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For Preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. NumComponents is ignored in this case. This transformation
can be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data
are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’ (unit: scalar)
and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than
without normalization.
For Preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that
decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and
the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the
transformed features containing the most variation are contained in the first components of the transformed feature
vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated.
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by Preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier with
OutputFunction = ’softmax’. The computation of the canonical variates is also called linear discriminant
analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the
training vectors on average over all classes is computed. At the same time, the transformation maximally sepa-
rates the mean values of the individual classes. As for Preprocessing = ’principal_components’, the trans-
formed components are sorted by information content, and hence transformed components with little informa-
tion content can be omitted. For canonical variates, up to min(NumOutput − 1, NumInput) components can
be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many
cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
create_class_mlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with train_class_mlp are reproducible, the seed value of the random number generator
is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve
a smaller error by selecting a different value for RandSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained us-
ing train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the
MLP can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is
used as a classifier (i.e., for OutputFunction = ’softmax’), to classify data using classify_class_mlp.
A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter

. NumInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Number of input variables (features) of the MLP.
Default Value : 20
Suggested values : NumInput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumInput ≥ 1
. NumHidden (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of hidden units of the MLP.
Default Value : 10
Suggested values : NumHidden ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction : NumHidden ≥ 1
. NumOutput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of output variables (classes) of the MLP.
Default Value : 5
Suggested values : NumOutput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction : NumOutput ≥ 1
. OutputFunction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of the activation function in the output layer of the MLP.
Default Value : "softmax"
List of values : OutputFunction ∈ {"linear", "logistic", "softmax"}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Seed value of the random number generator that is used to initialize the MLP with random values.
Default Value : 42
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong *
MLP handle.
Example (Syntax: HDevelop)

* Use the MLP for regression (function approximation)


create_class_mlp (1, NHidden, 1, ’linear’, ’none’, 1, 42, MLPHandle)
* Generate the training data
* D = [...]
* T = [...]
* Add the training data
for J := 0 to NData-1 by 1
add_sample_class_mlp (MLPHandle, D[J], T[J])
endfor
* Train the MLP
train_class_mlp (MLPHandle, 200, 0.001, 0.001, Error, ErrorLog)
* Generate test data
* X = [...]
* Compute the output of the MLP on the test data
for J := 0 to N-1 by 1
evaluate_class_mlp (MLPHandle, X[J], Y)
endfor
clear_class_mlp (MLPHandle)

* Use the MLP for classification


create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’normalization’, NIn,
42, MLPHandle)
* Generate and add the training data
for J := 0 to NData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Train the MLP
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Use the MLP to classify unknown data
for J := 0 to N-1 by 1
* Extract features
* Features = [...]
classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)
endfor
clear_class_mlp (MLPHandle)

Result
If the parameters are valid, the operator create_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
create_class_mlp is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_mlp
Alternatives
create_class_svm, create_class_gmm, create_class_box
See also
clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

T_evaluate_class_mlp ( const Htuple MLPHandle, const Htuple Features,


Htuple *Result )

Calculate the evaluation of a feature vector by a multilayer perceptron.


evaluate_class_mlp computes the result Result of evaluating the feature vector Features with
the multilayer perceptron (MLP) MLPHandle. The formulas used for the evaluation are described
with create_class_mlp. Before calling evaluate_class_mlp, the MLP must be trained with
train_class_mlp.
If the MLP is used for regression (function approximation), i.e., if (OutputFunction = ’linear’), Result
is the value of the function at the coordinate Features. For OutputFunction = ’logistic’ and ’softmax’,
the values in Result can be interpreted as probabilities. Hence, for OutputFunction = ’logistic’ the ele-
ments of Result represent the probabilities of the presence of the respective independent attributes. Typically,
a threshold of 0.5 is used to decide whether the attribute is present or not. Depending on the application, other
thresholds may be used as well. For OutputFunction = ’softmax’ usually the position of the maximum value
of Result is interpreted as the class of the feature vector, and the corresponding value as the probability of the
class. In this case, classify_class_mlp should be used instead of evaluate_class_mlp because
classify_class_mlp directly returns the class and corresponding probability.
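A minimal HDevelop sketch for OutputFunction = ’logistic’, using the commonly used threshold of 0.5; the
handle and the feature vector are assumed to exist:
evaluate_class_mlp (MLPHandle, Features, Result)
for K := 0 to |Result|-1 by 1
    if (Result[K] >= 0.5)
        * attribute K is considered present
    endif
endfor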
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong


MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Result of evaluating the feature vector with the MLP.
Result
If the parameters are valid, the operator evaluate_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
evaluate_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

get_params_class_mlp ( Hlong MLPHandle, Hlong *NumInput,


Hlong *NumHidden, Hlong *NumOutput, char *OutputFunction,
char *Preprocessing, Hlong *NumComponents )

T_get_params_class_mlp ( const Htuple MLPHandle, Htuple *NumInput,


Htuple *NumHidden, Htuple *NumOutput, Htuple *OutputFunction,
Htuple *Preprocessing, Htuple *NumComponents )

Return the parameters of a multilayer perceptron.


get_params_class_mlp returns the parameters of a multilayer perceptron (MLP) that were specified when
the MLP was created with create_class_mlp. This is particularly useful if the MLP was read from a file with
read_class_mlp. The output of get_params_class_mlp can, for example, be used to check whether the
feature vectors and, if necessary, the target data to be used with the MLP have the correct lengths. For a description
of the parameters, see create_class_mlp.
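A minimal HDevelop sketch of such a consistency check; the file name and the feature vector are illustrative
assumptions only:
read_class_mlp ('classifier.mlp', MLPHandle)
get_params_class_mlp (MLPHandle, NumInput, NumHidden, NumOutput,
                      OutputFunction, Preprocessing, NumComponents)
* Features must be a real vector of length NumInput
if (|Features| # NumInput)
    * the feature vector does not match the MLP
endif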
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong


MLP handle.
. NumInput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of input variables (features) of the MLP.
. NumHidden (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of hidden units of the MLP.
. NumOutput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *


Number of output variables (classes) of the MLP.
. OutputFunction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of the activation function in the output layer of the MLP.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Preprocessing parameter: Number of transformed features.
Result
If the parameters are valid, the operator get_params_class_mlp returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
get_params_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_class_mlp, read_class_mlp
Possible Successors
add_sample_class_mlp, train_class_mlp
See also
evaluate_class_mlp, classify_class_mlp
Module
Foundation

T_get_prep_info_class_mlp ( const Htuple MLPHandle,


const Htuple Preprocessing, Htuple *InformationCont,
Htuple *CumInformationCont )

Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
get_prep_info_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e.,
it is computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput−1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains
the sums of the first n elements of InformationCont. To use get_prep_info_class_mlp, a suffi-
cient number of samples must be added to the multilayer perceptron (MLP) given by MLPHandle by using
add_sample_class_mlp or read_samples_class_mlp.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_mlp. The call to get_prep_info_class_mlp al-
ready requires the creation of an MLP, and hence the setting of NumComponents in create_class_mlp to
an initial value. However, if get_prep_info_class_mlp is called it is typically not known how many com-
ponents are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step ap-
proach should typically be used to select NumComponents: In a first step, an MLP with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are added to the MLP and are saved in a file using
write_samples_class_mlp. Subsequently, get_prep_info_class_mlp is used to determine the
information content of the components, and with this NumComponents. After this, a new MLP with the de-
sired number of components is created, and the training samples are read with read_samples_class_mlp.
Finally, the MLP is trained with train_class_mlp.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)

* Create the initial MLP


create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’principal_components’,
NIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
write_samples_class_mlp (MLPHandle, ’samples.mtf’)
* Compute the information content of the transformed features
get_prep_info_class_mlp (MLPHandle, ’principal_components’,
InformationCont, CumInformationCont)
* Determine NComp by inspecting InformationCont and CumInformationCont
* NComp = [...]
clear_class_mlp (MLPHandle)
* Create the actual MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’principal_components’,
NComp, 42, MLPHandle)
* Train the MLP
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, ’classifier.mlp’)
clear_class_mlp (MLPHandle)

Result
If the parameters are valid, the operator get_prep_info_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
clear_class_mlp, create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

T_get_sample_class_mlp ( const Htuple MLPHandle,


const Htuple IndexSample, Htuple *Features, Htuple *Target )

Return a training sample from the training data of a multilayer perceptron.


get_sample_class_mlp reads out a training sample from the multilayer perceptron (MLP) given by
MLPHandle that was added with add_sample_class_mlp or read_samples_class_mlp. The
index of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and IndexSamples − 1, where IndexSamples can be determined with
get_sample_num_class_mlp. The training sample is returned in Features and Target. Features
is a feature vector of length NumInput, while Target is a target vector of length NumOutput (see
add_sample_class_mlp and create_class_mlp).
get_sample_class_mlp can, for example, be used to reclassify the training data with
classify_class_mlp in order to determine which training samples, if any, are classified incorrectly.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Target vector of the training sample.
Example (Syntax: HDevelop)

* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’canonical_variates’,
NComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Reclassify the training samples
get_sample_num_class_mlp (MLPHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_mlp (MLPHandle, I, Data, Target)
classify_class_mlp (MLPHandle, Data, 1, Class, Confidence)
Result := gen_tuple_const(NOut,0)
Result[Class] := 1
Diffs := Target-Result
if (sum(fabs(Diffs)) > 0)
* Sample has been classified incorrectly
endif
endfor
clear_class_mlp (MLPHandle)

Result
If the parameters are valid, the operator get_sample_class_mlp returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
get_sample_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp, get_sample_num_class_mlp
Possible Successors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp
Module
Foundation

get_sample_num_class_mlp ( Hlong MLPHandle, Hlong *NumSamples )


T_get_sample_num_class_mlp ( const Htuple MLPHandle,
Htuple *NumSamples )

Return the number of training samples stored in the training data of a multilayer perceptron.
get_sample_num_class_mlp returns in NumSamples the number of training samples that are stored in
the multilayer perceptron (MLP) given by MLPHandle. get_sample_num_class_mlp should be called
before the individual training samples are accessed with get_sample_class_mlp, e.g., for the purpose of
reclassifying the training data (see get_sample_class_mlp).
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong


MLP handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stored training samples.
Result
If MLPHandle is valid, the operator get_sample_num_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
get_sample_num_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
get_sample_class_mlp
See also
create_class_mlp
Module
Foundation

read_class_mlp ( const char *FileName, Hlong *MLPHandle )


T_read_class_mlp ( const Htuple FileName, Htuple *MLPHandle )

Read a multilayer perceptron from a file.


read_class_mlp reads a multilayer perceptron (MLP) that has been stored with write_class_mlp.
Since the training of an MLP can consume a relatively long time, the MLP is typically trained in an of-
fline process and written to a file with write_class_mlp. In the online process the MLP is read with
read_class_mlp and subsequently used for evaluation with evaluate_class_mlp or for classification
with classify_class_mlp.
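A minimal HDevelop sketch of the online part of this workflow; the file name is an illustrative assumption only:
read_class_mlp ('classifier.mlp', MLPHandle)
classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)
clear_class_mlp (MLPHandle)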
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *


File name.
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong *
MLP handle.
Result
If the parameters are valid, the operator read_class_mlp returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
read_class_mlp is processed completely exclusively without parallelization.
Possible Successors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp, write_class_mlp
Module
Foundation

read_samples_class_mlp ( Hlong MLPHandle, const char *FileName )


T_read_samples_class_mlp ( const Htuple MLPHandle,
const Htuple FileName )

Read the training data of a multilayer perceptron from a file.


read_samples_class_mlp reads training samples from the file given by FileName and adds them to
the training samples that have already been added to the multilayer perceptron (MLP) given by MLPHandle.
The MLP must be created with create_class_mlp before calling read_samples_class_mlp.
As described with train_class_mlp and write_samples_class_mlp, the operators
read_samples_class_mlp, add_sample_class_mlp, and write_samples_class_mlp
can be used to build up an extensive set of training samples, and hence to improve the performance of the MLP by
retraining the MLP with extended data sets.
It should be noted that the training samples must have the correct dimensionality. The feature vectors and tar-
get vectors stored in FileName must have the lengths NumInput and NumOutput that were specified with
create_class_mlp. If this is not the case, an error message is returned.
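A minimal HDevelop sketch of extending a stored sample set and training a newly created MLP; the file names,
the new sample, and the parameter values are illustrative assumptions only:
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'normalization', NIn,
                  42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
* add further samples, e.g., from newly acquired data
add_sample_class_mlp (MLPHandle, NewData, NewClass)
write_samples_class_mlp (MLPHandle, 'samples_extended.mtf')
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)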
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
Result
If the parameters are valid, the operator read_samples_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
read_samples_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp
Alternatives
add_sample_class_mlp
See also
write_samples_class_mlp, clear_samples_class_mlp
Module
Foundation

T_train_class_mlp ( const Htuple MLPHandle, const Htuple MaxIterations,


const Htuple WeightTolerance, const Htuple ErrorTolerance,
Htuple *Error, Htuple *ErrorLog )

Train a multilayer perceptron.


train_class_mlp trains the multilayer perceptron (MLP) given in MLPHandle. Before the MLP
can be trained, all training samples to be used for the training must be stored in the MLP using
add_sample_class_mlp or read_samples_class_mlp. If after the training new additional training
samples should be used a new MLP must be created with create_class_mlp, in which again all training sam-
ples to be used (i.e., the original ones and the additional ones) must be stored. In these cases, it is useful to save and
read the training data with write_samples_class_mlp and read_samples_class_mlp, respectively.


A second training with additional training samples is not explicitly forbidden by train_class_mlp. However,
this typically does not lead to good results because the training of an MLP is a complex nonlinear optimization
problem, and consequently the second training with new data will very likely cause the optimization to get stuck
in a local minimum.
During the training, the error the MLP achieves on the stored training samples is minimized by using a nonlin-
ear optimization algorithm. With this, the MLP weights described in create_class_mlp are determined.
create_class_mlp initializes the weights with random values to make it very likely that the optimization
converges to the global minimum of the error function. Nevertheless, in rare cases it may happen that the random
values determined with RandSeed in create_class_mlp result in a relatively large optimum error, i.e., that
the optimization gets stuck in a local minimum. If it can be conjectured that this has happened the MLP should be
created anew with a different value for RandSeed in order to check whether a significantly smaller error can be
achieved.
The parameters MaxIterations, WeightTolerance, and ErrorTolerance control the nonlinear opti-
mization algorithm. MaxIterations specifies the maximum number of iterations of the optimization algorithm.
In practice, values between 100 and 200 should be sufficient for most problems. WeightTolerance specifies
a threshold for the change of the weights per iteration. Here, the absolute value of the change of the weights
between two iterations is summed. Hence, this value depends on the number of weights as well as the size of
the weights, which in turn depend on the scaling of the training data. Typically, values between 0.00001 and 1
should be used. ErrorTolerance specifies a threshold for the change of the error value per iteration. This
value depends on the number of training samples as well as the number of output variables of the MLP. Also here,
values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is
smaller than WeightTolerance and the change of the error value is smaller than ErrorTolerance. In any
case, the optimization is terminated after at most MaxIterations iterations. It should be noted that, depending
on the size of the MLP and the number of training samples, the training can take from a few seconds to several
hours.
On output, train_class_mlp returns the error of the MLP with the optimal weights on the training samples
in Error. Furthermore, ErrorLog contains the error value as a function of the number of iterations. With
this, it is possible to decide whether a second training of the MLP with the same training data without creating
the MLP anew makes sense. If ErrorLog is regarded as a function, it should drop off steeply initially, while
leveling out very flatly at the end. If ErrorLog is still relatively steep at the end, it usually makes sense to call
train_class_mlp again. It should be noted, however, that this mechanism should not be used to train the
MLP successively with MaxIterations = 1 (or other small values for MaxIterations) because this will
substantially increase the number of iterations required to train the MLP.
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. MaxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum number of iterations of the optimization algorithm.
Default Value : 200
Suggested values : MaxIterations ∈ {20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280,
300}
. WeightTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default Value : 1.0
Suggested values : WeightTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : WeightTolerance ≥ 1.0e-8
. ErrorTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the
optimization algorithm.
Default Value : 0.01
Suggested values : ErrorTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : ErrorTolerance ≥ 1.0e-8
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Mean error of the MLP on the training data.
. ErrorLog (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Mean error of the MLP on the training data as a function of the number of iterations of the optimization
algorithm.
Example (Syntax: HDevelop)

* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’normalization’, 1,
42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, ’classifier.mlp’)
clear_class_mlp (MLPHandle)

Result
If the parameters are valid, the operator train_class_mlp returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = ’canon-
ical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.
Parallelization Information
train_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
evaluate_class_mlp, classify_class_mlp, write_class_mlp
Alternatives
read_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

write_class_mlp ( Hlong MLPHandle, const char *FileName )


T_write_class_mlp ( const Htuple MLPHandle, const Htuple FileName )

Write a multilayer perceptron to a file.


write_class_mlp writes the multilayer perceptron (MLP) MLPHandle to the file given by FileName.
write_class_mlp is typically called after the MLP has been trained with train_class_mlp. The MLP
can be read with read_class_mlp. write_class_mlp does not write any training samples that possibly
have been stored in the MLP. For this purpose, write_samples_class_mlp should be used.
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong


MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid, the operator write_class_mlp returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
write_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp
Possible Successors
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_samples_class_mlp
Module
Foundation

write_samples_class_mlp ( Hlong MLPHandle, const char *FileName )


T_write_samples_class_mlp ( const Htuple MLPHandle,
const Htuple FileName )

Write the training data of a multilayer perceptron to a file.


write_samples_class_mlp writes the training samples stored in the multilayer perceptron (MLP)
MLPHandle to the file given by FileName. write_samples_class_mlp can be used to build up
a database of training samples, and hence to improve the performance of the MLP by training it with an ex-
tended data set (see train_class_mlp). For other possible uses of write_samples_class_mlp see
get_prep_info_class_mlp.
The file FileName is overwritten by write_samples_class_mlp. Nevertheless, extending the database of
training samples is easy to do because read_samples_class_mlp and add_sample_class_mlp add
the training samples to the training samples that are already stored in memory with the MLP.
Parameter

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong


MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid, the operator write_samples_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
write_samples_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp
Possible Successors
clear_samples_class_mlp
See also
create_class_mlp, get_prep_info_class_mlp, read_samples_class_mlp
Module
Foundation

1.4 Support-Vector-Machines

T_add_sample_class_svm ( const Htuple SVMHandle,


const Htuple Features, const Htuple Class )

Add a training sample to the training data of a support vector machine.


add_sample_class_svm adds a training sample to the support vector machine (SVM) given by SVMHandle.
The training sample is given by Features and Class. Features is the feature vector of the sample, and
consequently must be a real vector of length NumFeatures, as specified in create_class_svm. Class
is the target of the sample, which must be in the range of 0 to NumClasses-1 (see create_class_svm).
Before the SVM can be trained with train_class_svm, training samples must be added to the SVM with
add_sample_class_svm. The usage of support vectors of an already trained SVM as training samples is
described in train_class_svm.
The number of currently stored training samples can be queried with get_sample_num_class_svm. Stored
training samples can be read out again with get_sample_class_svm.
Normally, it is useful to save the training samples in a file with write_samples_class_svm so that the
samples can be reused, new training samples can be added to the data set if necessary, and a newly created SVM
can then be trained with the extended data set.
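A minimal HDevelop sketch; the handle is assumed to come from create_class_svm with NumFeatures = 3
and NumClasses = 2, and the feature values are illustrative only:
add_sample_class_svm (SVMHandle, [0.2,1.7,4.3], 1)
add_sample_class_svm (SVMHandle, [4.1,0.3,2.2], 0)
* afterwards: train_class_svm, write_samples_class_svm, ...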
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong


SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector of the training sample to be stored.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Class of the training sample to be stored.
Result
If the parameters are valid the operator add_sample_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
add_sample_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm, write_samples_class_svm, get_sample_num_class_svm,
get_sample_class_svm
Alternatives
read_samples_class_svm
See also
clear_samples_class_svm, get_support_vector_class_svm
Module
Foundation

T_classify_class_svm ( const Htuple SVMHandle, const Htuple Features,


const Htuple Num, Htuple *Class )

Classify a feature vector by a support vector machine.


classify_class_svm computes the best Num classes of the feature vector Features with the SVM
SVMHandle and returns them in Class. If the classifier was created in the Mode = ’one-versus-one’, the
classes are ordered by the number of votes of the sub-classifiers. If Mode = ’one-versus-all’ was used, the classes
are ordered by the value of each sub-classifier (see create_class_svm for more details). If the classifier was
created in the Mode = ’novelty-detection’, it determines whether the feature vector belongs to the same class as
the training data (Class = 1) or is regarded as an outlier (Class = 0). In this case, Num must be set to 1 as the
classifier only determines membership.
Before calling classify_class_svm, the SVM must be trained with train_class_svm.
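A minimal HDevelop sketch; the trained handle and the feature vector are assumed to exist:
classify_class_svm (SVMHandle, Features, 1, Class)
* for Mode = 'novelty-detection', Class = 0 marks the feature vector as an outlier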
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong


SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.


. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong *
Result of classifying the feature vector with the SVM.
Result
If the parameters are valid the operator classify_class_svm returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
classify_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm, read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation

clear_all_class_svm ( )
T_clear_all_class_svm ( )

Clear all support vector machines.


clear_all_class_svm clears all support vector machines (SVM) and frees all memory required for the
SVMs. After calling clear_all_class_svm, no SVM can be used any longer.
Attention
clear_all_class_svm exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. clear_all_class_svm must not be used in any application.
Result
clear_all_class_svm always returns H_MSG_TRUE.
Parallelization Information
clear_all_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_svm
Alternatives
clear_class_svm
See also
create_class_svm, read_class_svm, write_class_svm, train_class_svm
Module
Foundation

clear_class_svm ( Hlong SVMHandle )


T_clear_class_svm ( const Htuple SVMHandle )

Clear a support vector machine.


clear_class_svm clears the support vector machine (SVM) given by SVMHandle and frees all memory
required for the SVM. After calling clear_class_svm, the SVM can no longer be used. The handle
SVMHandle becomes invalid.
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Result
If SVMHandle is valid the operator clear_class_svm returns the value H_MSG_TRUE. If necessary, an
exception handling is raised.
Parallelization Information
clear_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
classify_class_svm
See also
create_class_svm, read_class_svm, write_class_svm, train_class_svm
Module
Foundation

clear_samples_class_svm ( Hlong SVMHandle )


T_clear_samples_class_svm ( const Htuple SVMHandle )

Clear the training data of a support vector machine.


clear_samples_class_svm clears all training samples that have been added to the support vec-
tor machine (SVM) SVMHandle with add_sample_class_svm or read_samples_class_svm.
clear_samples_class_svm should only be used if the SVM is trained in the same process that uses the
SVM for classification with classify_class_svm. In this case, the memory required for the training sam-
ples can be freed with clear_samples_class_svm, and hence memory can be saved. In the normal usage,
in which the SVM is trained offline and written to a file with write_class_svm, it is typically unnecessary
to call clear_samples_class_svm because write_class_svm does not save the training samples, and
hence the online process, which reads the SVM with read_class_svm, requires no memory for the training
samples.
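A sketch of this in-process usage in C syntax; the parameter values are only illustrative, and the calls that add the training samples are omitted:

Hlong SVMHandle;

create_class_svm(5, "rbf", 0.02, 0.05, 3, "one-versus-one",
                 "normalization", 5, &SVMHandle);
/* ... add training samples, e.g., with T_add_sample_class_svm ...             */
train_class_svm(SVMHandle, 0.001, "default");
clear_samples_class_svm(SVMHandle);  /* free the memory of the training samples */
/* ... SVMHandle can still be used with classify_class_svm ...                 */
clear_class_svm(SVMHandle);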
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Result
If the parameters are valid the operator clear_samples_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
clear_samples_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
train_class_svm, write_samples_class_svm
See also
create_class_svm, clear_class_svm, add_sample_class_svm,
read_samples_class_svm
Module
Foundation


create_class_svm ( Hlong NumFeatures, const char *KernelType,
                   double KernelParam, double Nu, Hlong NumClasses, const char *Mode,
                   const char *Preprocessing, Hlong NumComponents, Hlong *SVMHandle )

T_create_class_svm ( const Htuple NumFeatures,
                     const Htuple KernelType, const Htuple KernelParam, const Htuple Nu,
                     const Htuple NumClasses, const Htuple Mode,
                     const Htuple Preprocessing, const Htuple NumComponents,
                     Htuple *SVMHandle )

Create a support vector machine for pattern classification.


create_class_svm creates a support vector machine that can be used for pattern classification. The dimension
of the patterns to be classified is specified in NumFeatures, the number of different classes in NumClasses.
For a binary classification problem in which the classes are linearly separable the SVM algorithm selects data
vectors from the training set that are utilized to construct the optimal separating hyperplane between different
classes. This hyperplane is optimal in the sense that the margin between the convex hulls of the different classes
is maximized. The training patterns that are located at the margin define the hyperplane and are called support
vectors (SV).
Classification of a feature vector z is performed with the following formula:

f(z) = \mathrm{sign}\Bigl( \sum_{i=1}^{n_{sv}} \alpha_i \, y_i \, \langle x_i, z \rangle + b \Bigr)

Here, x_i are the support vectors, y_i encodes their class membership (±1), and α_i are the weight coefficients. The dis-
tance of the hyperplane to the origin is b. The α and b are determined during training with train_class_svm.
Note that only a subset of the original training set (nsv : number of support vectors) is necessary for the definition
of the decision boundary and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number nsv of support vectors.
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α are upper bounded by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter Nu is an
asymptotic upper bound on the number of training errors and an asymptotic lower bound on the number of support
vectors. As a rule of thumb, the parameter Nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that a too big value for Nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see train_class_svm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with Nu chosen smaller must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter KernelType. For KernelType = ’linear’ the
dot product, as specified in the above formula is calculated. This kernel should solely be used for linearly or nearly
linearly separable classification tasks. The parameter KernelParam is ignored here.
The radial basis function (RBF) KernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:

K(x, z) = \exp\bigl( -\gamma \, \lVert x - z \rVert^{2} \bigr)

Here, the parameter KernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A big value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training/classification times grow significantly. A too small
value for γ (big influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-Nu pair and consecutively increase the values as long as
the recognition rate increases.
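The following sketch (C syntax) illustrates such a search over γ only; Nu can be varied analogously. Here, evaluate_recognition_rate() stands for a hypothetical user function that classifies a held-out test set and returns the fraction of correctly classified samples; NumFeatures, NumClasses, and the sample file name are assumed to be defined elsewhere:

double gammas[] = {0.01, 0.02, 0.05, 0.1, 0.5};
double rate, best_rate = 0.0, best_gamma = 0.01;
int    g;

for (g = 0; g < 5; g++)
{
  Hlong SVMHandle;
  create_class_svm(NumFeatures, "rbf", gammas[g], 0.05, NumClasses,
                   "one-versus-one", "normalization", NumFeatures, &SVMHandle);
  read_samples_class_svm(SVMHandle, "samples.mtf");
  train_class_svm(SVMHandle, 0.01, "default");   /* coarse Epsilon during the search */
  rate = evaluate_recognition_rate(SVMHandle);   /* hypothetical helper              */
  if (rate > best_rate)
  {
    best_rate  = rate;
    best_gamma = gammas[g];                      /* remember the best γ so far       */
  }
  clear_class_svm(SVMHandle);
}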
With KernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be
selected. They are defined in the following way:

K(x, z) = \langle x, z \rangle^{d}

K(x, z) = ( \langle x, z \rangle + 1 )^{d}

The degree of the polynomial kernel must be set with KernelParam. Please note that a too high degree polyno-
mial (d > 10) might result in numerical problems.
As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited
for certain applications and can be tested for comparison. Please note that the novelty-detection Mode and the
reduce_class_svm operator are provided only for the RBF kernel.
Mode specifies the general classification task, which is either how to break down a multi-class decision problem to
binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. Mode = ’one-versus-all’
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. Mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal Mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (3-10) ’one-versus-one’ is faster for training and testing, because the sub-classifier all
consist of fewer training data and result in overall fewer support vectors. In case of many classes ’one-versus-all’
is preferable, because ’one-versus-one’ generates a prohibitively large amount of sub-classifiers, as their number
grows quadratically with the number of classes.
A special case of classification is Mode = ’novelty-detection’, where the test data is classified with regard to
membership to the training data. The separating hyperplane lies around the training data and thereby implicitly
divides the training data from the rejection class. The advantage is that the rejection class is not defined explicitly,
which is difficult to do in certain applications like texture classification. The resulting support vectors are all lying
at the border. With the parameter Nu, the ratio of outliers in the training data set is specified.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For Preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differs substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or if
region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The normalization
transformation should be performed in general, because it increases the numerical stability during training/testing.
For Preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
the transformed features that contain the most variation is contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,

HALCON 8.0.2
48 CHAPTER 1. CLASSIFICATION

which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumFeatures components can be selected. The operator get_prep_info_class_svm can
be used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differs substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by Preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation maxi-
mally separates the mean values of the individual classes. As for Preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little infor-
mation content can be omitted. For canonical variates, up to min(NumClasses−1, NumFeatures) components
can be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_svm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by NumComponents, whereas NumFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with create_class_svm, typically training samples are added to the SVM
by repeatedly calling add_sample_class_svm or read_samples_class_svm. After this, the SVM is
typically trained using train_class_svm. Hereafter, the SVM can be saved using write_class_svm.
Alternatively, the SVM can be used immediately after training to classify data using classify_class_svm.
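In C syntax, this typical life cycle might look like the following sketch; all parameter values are only illustrative, and the calls that compute and add the training features are omitted:

Hlong SVMHandle;

create_class_svm(5, "rbf", 0.02, 0.05, 3, "one-versus-one",
                 "normalization", 5, &SVMHandle);
/* ... add samples with add_sample_class_svm or read_samples_class_svm ...     */
train_class_svm(SVMHandle, 0.001, "default");
write_class_svm(SVMHandle, "classifier.svm");
/* alternatively, classify immediately with classify_class_svm                 */
clear_class_svm(SVMHandle);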
A comparison of the SVM and the multi-layer perceptron (MLP) (see create_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter

. NumFeatures (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of input variables (features) of the SVM.
Default Value : 10
Suggested values : NumFeatures ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumFeatures ≥ 1
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
The kernel type.
Default Value : "rbf"
List of values : KernelType ∈ {"linear", "rbf", "polynomial_inhomogeneous",
"polynomial_homogeneous"}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Additional parameter for the kernel function. In the case of the RBF kernel, the value for γ; for the polynomial
kernels, the degree.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Regularization constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)


. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of classes.
Default Value : 5
Suggested values : NumClasses ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : NumClasses ≥ 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
The mode of the SVM.
Default Value : "one-versus-one"
List of values : Mode ∈ {"novelty-detection", "one-versus-all", "one-versus-one"}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong *
SVM handle.
Example (Syntax: HDevelop)

create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
                  ’one-versus-all’, ’normalization’, NumFeatures,
                  SVMHandle)
* Generate and add the training data
for J := 0 to NData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = ...
add_sample_class_svm (SVMHandle, Data, Class)
endfor
* Train the SVM
train_class_svm (SVMHandle, 0.001, ’default’)
* Use the SVM to classify unknown data
for J := 0 to N-1 by 1
* Extract features
* Features = [...]
classify_class_svm (SVMHandle, Features, 1, Class)
endfor
clear_class_svm (SVMHandle)

Result
If the parameters are valid the operator create_class_svm returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
create_class_svm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_svm
Alternatives
create_class_mlp, create_class_gmm, create_class_box
See also
clear_class_svm, train_class_svm, classify_class_svm
References
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.


John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Module
Foundation

get_params_class_svm ( Hlong SVMHandle, Hlong *NumFeatures,
                       char *KernelType, double *KernelParam, double *Nu, Hlong *NumClasses,
                       char *Mode, char *Preprocessing, Hlong *NumComponents )

T_get_params_class_svm ( const Htuple SVMHandle, Htuple *NumFeatures,
                         Htuple *KernelType, Htuple *KernelParam, Htuple *Nu,
                         Htuple *NumClasses, Htuple *Mode, Htuple *Preprocessing,
                         Htuple *NumComponents )

Return the parameters of a support vector machine.


get_params_class_svm returns the parameters of a support vector machine (SVM) that were specified when
the SVM was created with create_class_svm. This is particularly useful if the SVM was read from a file with
read_class_svm. The output of get_params_class_svm can, for example, be used to check whether the
feature vectors and, if necessary, the target data to be used with the SVM have the correct lengths. For a description
of the parameters, see create_class_svm.
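A small sketch in C syntax; the file name is illustrative and the buffer sizes for the string outputs are chosen arbitrarily here:

Hlong  SVMHandle, NumFeatures, NumClasses, NumComponents;
double KernelParam, Nu;
char   KernelType[128], Mode[128], Preprocessing[128];

read_class_svm("classifier.svm", &SVMHandle);
get_params_class_svm(SVMHandle, &NumFeatures, KernelType, &KernelParam,
                     &Nu, &NumClasses, Mode, Preprocessing, &NumComponents);
/* NumFeatures now gives the required length of the feature vectors            */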
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. NumFeatures (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of input variables (features) of the SVM.
. KernelType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
The kernel type.
. KernelParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Additional parameter for the kernel.
. Nu (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Regularization constant of the SVM.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of classes of the test data.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
The mode of the SVM.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Result
If the parameters are valid the operator get_params_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
get_params_class_svm is reentrant and processed without parallelization.
Possible Predecessors
create_class_svm, read_class_svm
Possible Successors
add_sample_class_svm, train_class_svm
See also
classify_class_svm
Module
Foundation


T_get_prep_info_class_svm ( const Htuple SVMHandle,
                            const Htuple Preprocessing, Htuple *InformationCont,
                            Htuple *CumInformationCont )

Compute the information content of the preprocessed feature vectors of a support vector machine.
get_prep_info_class_svm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_class_svm, a sufficient number of samples must be added to the support vector machine
(SVM) given by SVMHandle by using add_sample_class_svm or read_samples_class_svm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_svm. The call to get_prep_info_class_svm al-
ready requires the creation of an SVM, and hence the setting of NumComponents in create_class_svm
to an initial value. However, when get_prep_info_class_svm is called, it is typically not known how
many components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-
step approach should typically be used to select NumComponents: In a first step, an SVM with the maximum
number for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses−
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using write_samples_class_svm. Subsequently, get_prep_info_class_svm is used to deter-
mine the information content of the components, and with this NumComponents. After this, a new SVM with the
desired number of components is created, and the training samples are read with read_samples_class_svm.
Finally, the SVM is trained with train_class_svm.
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong
SVM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)

* Create the initial SVM


create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
* Generate and add the training data
for J := 0 to NData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = ...
add_sample_class_svm (SVMHandle, Data, Class)


endfor
write_samples_class_svm (SVMHandle, ’samples.mtf’)
* Compute the information content of the transformed features
get_prep_info_class_svm (SVMHandle, ’principal_components’,
InformationCont, CumInformationCont)
* Determine NComp by inspecting InformationCont and CumInformationCont
* NComp = [...]
clear_class_svm (SVMHandle)
* Create the actual SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’principal_components’, NComp, SVMHandle)
* Train the SVM
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)

Result
If the parameters are valid the operator get_prep_info_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
get_prep_info_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
clear_class_svm, create_class_svm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

T_get_sample_class_svm ( const Htuple SVMHandle,
                         const Htuple IndexSample, Htuple *Features, Htuple *Target )

Return a training sample from the training data of a support vector machine.
get_sample_class_svm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with add_sample_class_svm or read_samples_class_svm. The
index of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and IndexSamples − 1, where IndexSamples can be determined with
get_sample_num_class_svm. The training sample is returned in Features and Target. Features
is a feature vector of length NumFeatures (see create_class_svm), while Target is the index of the
class, ranging between 0 and NumClasses-1 (see add_sample_class_svm).
get_sample_class_svm can, for example, be used to reclassify the training data with
classify_class_svm in order to determine which training samples, if any, are classified incorrectly.
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong
SVM handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of the stored training sample.


. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Target vector of the training sample.
Example (Syntax: HDevelop)

* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Reclassify the training samples
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_svm (SVMHandle, I, Data, Target)
classify_class_svm (SVMHandle, Data, 1, Class)
if (Class # Target)
* Sample has been classified incorrectly
endif
endfor
clear_class_svm (SVMHandle)

Result
If the parameters are valid the operator get_sample_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
get_sample_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm, get_sample_num_class_svm,
get_support_vector_class_svm
Possible Successors
classify_class_svm
See also
create_class_svm
Module
Foundation

get_sample_num_class_svm ( Hlong SVMHandle, Hlong *NumSamples )


T_get_sample_num_class_svm ( const Htuple SVMHandle,
Htuple *NumSamples )

Return the number of training samples stored in the training data of a support vector machine.
get_sample_num_class_svm returns in NumSamples the number of training samples that are stored in
the support vector machine (SVM) given by SVMHandle. get_sample_num_class_svm should be called
before the individual training samples are accessed with get_sample_class_svm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_svm).
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stored training samples.


Result
If SVMHandle is valid the operator get_sample_num_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
get_sample_num_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation

T_get_support_vector_class_svm ( const Htuple SVMHandle,
                                 const Htuple IndexSupportVector, Htuple *Index )

Return the index of a support vector from a trained support vector machine.
The operator get_support_vector_class_svm maps support vectors of a trained SVM (given
in SVMHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a number
between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with
get_support_vector_num_class_svm. The index of this SV in the training data is returned in Index.
This Index can be used for a query with get_sample_class_svm to obtain the feature vectors that become
support vectors. get_sample_class_svm can, for example, be used to visualize the support vectors.
Note that when using train_class_svm with a mode different from ’default’ or reducing the SVM with
reduce_class_svm, the returned Index will always be -1, i.e., it will be invalid. The reason for this is that a
consistent mapping between SV and training data becomes impossible.
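The following sketch (C syntax, tuple interface) loops over all support vectors of a trained SVM and looks up the corresponding training samples; SVMHandleT is assumed to hold the handle of the trained SVM, and the Index output is read as a real value, as listed in the parameter description below:

Htuple NumSVT, NumSVPerSVMT, IdxT, IndexT;
long   num_sv, i, sample_index;

T_get_support_vector_num_class_svm(SVMHandleT, &NumSVT, &NumSVPerSVMT);
num_sv = get_i(NumSVT, 0);
create_tuple(&IdxT, 1);
for (i = 0; i < num_sv; i++)
{
  set_i(IdxT, i, 0);
  T_get_support_vector_class_svm(SVMHandleT, IdxT, &IndexT);
  sample_index = (long) get_d(IndexT, 0);  /* index into the training set,      */
  destroy_tuple(IndexT);                   /* -1 if no mapping is possible      */
  /* ... e.g., fetch the sample with T_get_sample_class_svm ...                 */
}
destroy_tuple(IdxT);
destroy_tuple(NumSVT);
destroy_tuple(NumSVPerSVMT);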
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong
SVM handle.
. IndexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of the stored support vector.
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Index of the support vector in the training set.
Result
If the parameters are valid the operator get_support_vector_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
get_support_vector_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation


T_get_support_vector_num_class_svm ( const Htuple SVMHandle,
                                     Htuple *NumSupportVectors, Htuple *NumSVPerSVM )

Return the number of support vectors of a support vector machine.


get_support_vector_num_class_svm returns in NumSupportVectors the number of
support vectors that are stored in the support vector machine (SVM) given by SVMHandle.
get_support_vector_num_class_svm should be called before the labels of individual support
vectors are read out with get_support_vector_class_svm, e.g., for visualizing which training samples
become SVs (see get_support_vector_class_svm). The number of SVs in each sub-classifier is listed
in NumSVPerSVM. The reason that its sum can differ from the number returned in NumSupportVectors is
that SV evaluations are reused throughout different sub-classifiers. NumSVPerSVM provides the possibility for
controlling the process of speeding up SVM classification time with the operator reduce_class_svm.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Htuple . Hlong
SVM handle.
. NumSupportVectors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Total number of support vectors.
. NumSVPerSVM (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Number of SV of each sub-SVM.
Result
If SVMHandle is valid the operator get_support_vector_num_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
get_support_vector_num_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation

read_class_svm ( const char *FileName, Hlong *SVMHandle )


T_read_class_svm ( const Htuple FileName, Htuple *SVMHandle )

Read a support vector machine from a file.


read_class_svm reads a support vector machine (SVM) that has been stored with write_class_svm.
Since the training of an SVM can consume a relatively long time, the SVM is typically trained in an offline process
and written to a file with write_class_svm. In the online process the SVM is read with read_class_svm
and subsequently used for classification with classify_class_svm.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong *
SVM handle.
Result
If the parameters are valid the operator read_class_svm returns the value H_MSG_TRUE. If necessary, an
exception handling is raised.
Parallelization Information
read_class_svm is processed completely exclusively without parallelization.


Possible Successors
classify_class_svm
See also
create_class_svm, write_class_svm
Module
Foundation

read_samples_class_svm ( Hlong SVMHandle, const char *FileName )


T_read_samples_class_svm ( const Htuple SVMHandle,
const Htuple FileName )

Read the training data of a support vector machine from a file.


read_samples_class_svm reads training samples from the file given by FileName and adds them to
the training samples that have already been added to the support vector machine (SVM) given by SVMHandle.
The SVM must be created with create_class_svm before calling read_samples_class_svm.
As described with train_class_svm and write_samples_class_svm, the operators
read_samples_class_svm, add_sample_class_svm, and write_samples_class_svm
can be used to build up an extensive set of training samples, and hence to improve the performance of the SVM by
retraining the SVM with extended data sets.
It should be noted that the training samples must have the correct dimensionality. The feature vectors and tar-
get vectors stored in FileName must have the lengths NumFeatures and NumClasses that were specified
with create_class_svm. The target is stored in vector form for compatibility reasons with the MLP (see
read_samples_class_mlp). If the dimensions are incorrect an error message is returned.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
Result
If the parameters are valid the operator read_samples_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
read_samples_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm
Alternatives
add_sample_class_svm
See also
write_samples_class_svm, clear_samples_class_svm
Module
Foundation

reduce_class_svm ( Hlong SVMHandle, const char *Method,
                   Hlong MinRemainingSV, double MaxError, Hlong *SVMHandleReduced )

T_reduce_class_svm ( const Htuple SVMHandle, const Htuple Method,
                     const Htuple MinRemainingSV, const Htuple MaxError,
                     Htuple *SVMHandleReduced )

Approximate a trained support vector machine by a reduced support vector machine for faster classification.


As described in create_class_svm, the classification time of a SVM depends on the number of kernel
evaluations between the support vectors and the feature vectors. While the length of the data vectors can be
reduced in a preprocessing step like ’principal_components’ or ’canonical_variates’ (see create_class_svm
for details), the number of resulting SV depends on the complexity of the classification problem. The number
of SVs is determined during training. To further reduce classification time, the number of SVs can be reduced
by approximating the original separating hyperplane with fewer SVs than originally required. For this purpose, a
copy of the original SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new
SVM has the same parametrization as the original SVM, but a different SV expansion. The training samples that
are included in SVMHandle are not copied. The original SVM is not modified by reduce_class_svm.
The reduction method is selected with Method. Currently, only a bottom-up approach is supported, which itera-
tively merges SVs. The algorithm stops if either the minimum number of SVs is reached (MinRemainingSV)
or if the accumulated maximum error exceeds the threshold MaxError. Note that the approximation reduces the
complexity of the hyperplane and thereby leads to a deteriorated classification rate. A common approach is therefore
to start from a small MaxError, e.g., 0.001, and to increase its value step by step. To control the reduction ratio,
at each step the number of remaining SVs is determined with get_support_vector_num_class_svm and
the classification rate is checked on a separate test data set with classify_class_svm.
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
Original SVM handle.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of postprocessing to reduce number of SV.
Default Value : "bottom_up"
List of values : Method ∈ {"bottom_up"}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum number of remaining SVs.
Default Value : 2
Suggested values : MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction : MinRemainingSV ≥ 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximum allowed error of reduction.
Default Value : 0.001
Suggested values : MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction : MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong *
SVMHandle of reduced SVM.
Example (Syntax: HDevelop)

* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Create a reduced SVM
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
write_class_svm (SVMHandleReduced, ’classifier.svm’)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)

Result
If the parameters are valid the operator reduce_class_svm returns the value H_MSG_TRUE. If necessary, an
exception handling is raised.
Parallelization Information
reduce_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm


Possible Successors
classify_class_svm, write_class_svm, get_support_vector_num_class_svm
See also
train_class_svm
Module
Foundation

train_class_svm ( Hlong SVMHandle, double Epsilon,
                  const char *TrainMode )

T_train_class_svm ( const Htuple SVMHandle, const Htuple Epsilon,
                    const Htuple TrainMode )

Train a support vector machine.


train_class_svm trains the support vector machine (SVM) given in SVMHandle. Before the SVM
can be trained, the training samples to be used for the training must be added to the SVM using
add_sample_class_svm or read_samples_class_svm.
Technically, training an SVM means solving a convex quadratic optimization problem. This implies that it can
be assured that training terminates after finite steps at the global optimum. In order to recognize termination,
the gradient of the function that is optimized internally must fall below a threshold, which is set in Epsilon.
By default, a value of 0.001 should be used for Epsilon since this yields the best results in practice. A too
big value leads to a too early termination and might result in suboptimal solutions. With a too small value the
optimization requires a longer time, often without changing the recognition rate significantly. Nevertheless, if
longer training times are possible, a smaller value than 0.001 might be chosen. There are two common reasons
for changing Epsilon: First, if you specified a very small value for Nu when calling create_class_svm,
e.g., Nu = 0.001, a smaller Epsilon might significantly improve the recognition rate. A second case is the
determination of the optimal kernel function and its parameterization (e.g., the KernelParam-Nu pair for the
RBF kernel) with the computationally intensive n-fold cross validation. Here, choosing a bigger Epsilon reduces
the computational time without changing the parameters of the optimal kernel that would be obtained when using
the default Epsilon. After the optimal KernelParam-Nu pair is obtained, the final training is conducted with
a small Epsilon.
The duration of the training depends on the training data, in particular on the number of resulting support vectors
(SVs), and Epsilon. It can lie between seconds and several hours. It is therefore recommended to choose the
SVM parameter Nu in create_class_svm so that as few SVs as possible are generated without decreasing
the recognition rate. Special care must be taken with the parameter Nu in create_class_svm so that the
optimization starts from a feasible region. If too many training errors are chosen with a too big Nu, an exception
handling is raised. In this case, an SVM with the same training data, but with smaller Nu must be trained.
With the parameter TrainMode you can choose between different training modes. Normally, you train an SVM
without additional information and TrainMode is set to ’default’. If multiple SVMs for the same data set but with
different kernels are trained, subsequent training runs can reuse optimization results and thus speed up the overall
training time of all runs. For this mode, in TrainMode a SVM handle of a previously trained SVM is passed.
Note that the SVM handle passed in SVMHandle and the SVMHandle passed in TrainMode must have the
same training data, the same mode and the same number of classes (see create_class_svm). The application
for this training mode is the evaluation of different kernel functions given the same training set. In the literature
this is referred to as alpha seeding.
With TrainMode = ’add_sv_to_train_set’ it is possible to append the support vectors that were generated by a
previous call of train_class_svm to the currently saved training set. This mode has two typical application
areas: First, it is possible to gradually train an SVM. For this, the complete training set is divided into disjoint
chunks. The first chunk is trained normally using TrainMode = ’default’. Afterwards, the previous training set is
removed with clear_samples_class_svm, the next chunk is added with add_sample_class_svm and
trained with TrainMode = ’add_sv_to_train_set’. This is repeated until all chunks are trained. This approach has
the advantage that even huge training data sets can be trained efficiently with respect to memory consumption. A
second application area for this mode is that a general purpose classifier can be specialized by adding characteristic
training samples and then retraining it. Please note that the preprocessing (as described in create_class_svm)
is not changed when training with TrainMode = ’add_sv_to_train_set’.
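A sketch of this chunk-wise training in C syntax; the chunk files are assumed to have been written beforehand with write_samples_class_svm, and the file names and parameter values are only illustrative (NumFeatures and NumClasses are assumed to be defined elsewhere):

const char *chunks[] = {"chunk0.mtf", "chunk1.mtf", "chunk2.mtf"};
Hlong SVMHandle;
int   c;

create_class_svm(NumFeatures, "rbf", 0.02, 0.05, NumClasses,
                 "one-versus-one", "normalization", NumFeatures, &SVMHandle);
for (c = 0; c < 3; c++)
{
  if (c > 0)
    clear_samples_class_svm(SVMHandle);       /* drop the previous chunk        */
  read_samples_class_svm(SVMHandle, chunks[c]);
  train_class_svm(SVMHandle, 0.001,
                  c == 0 ? "default" : "add_sv_to_train_set");
}
write_class_svm(SVMHandle, "classifier.svm");
clear_class_svm(SVMHandle);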


Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Stop parameter for training.
Default Value : 0.001
Suggested values : Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; const char * / Hlong
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default Value : "default"
List of values : TrainMode ∈ {"default", "add_sv_to_train_set"}
Example (Syntax: HDevelop)

* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)

Result
If the parameters are valid the operator train_class_svm returns the value H_MSG_TRUE. If necessary, an
exception handling is raised.
Parallelization Information
train_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
classify_class_svm, write_class_svm
Alternatives
read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation

write_class_svm ( Hlong SVMHandle, const char *FileName )


T_write_class_svm ( const Htuple SVMHandle, const Htuple FileName )

Write a support vector machine to a file.


write_class_svm writes the support vector machine (SVM) SVMHandle to the file given by FileName.
write_class_svm is typically called after the SVM has been trained with train_class_svm. The SVM
can be read with read_class_svm. write_class_svm does not write any training samples that possibly
have been stored in the SVM. For this purpose, write_samples_class_svm should be used.


Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid the operator write_class_svm returns the value H_MSG_TRUE. If necessary, an
exception handling is raised.
Parallelization Information
write_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm
Possible Successors
clear_class_svm
See also
create_class_svm, read_class_svm, write_samples_class_svm
Module
Foundation

write_samples_class_svm ( Hlong SVMHandle, const char *FileName )


T_write_samples_class_svm ( const Htuple SVMHandle,
const Htuple FileName )

Write the training data of a support vector machine to a file.


write_samples_class_svm writes the training samples currently stored in the support vector machine
(SVM) SVMHandle to the file given by FileName. write_samples_class_svm can be used to build up
a database of training samples, and hence to improve the performance of the SVM by training it with an extended
data set (see train_class_svm). The file FileName is overwritten by write_samples_class_svm.
Nevertheless, extending the database of training samples is easy to do because read_samples_class_svm
and add_sample_class_svm add the training samples to the training samples that are already stored in mem-
ory with the SVM.
Parameter

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid the operator write_samples_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
write_samples_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm
Possible Successors
clear_samples_class_svm
See also
create_class_svm, get_prep_info_class_svm, read_samples_class_svm
Module
Foundation



Chapter 2

File

2.1 Images

read_image ( Hobject *Image, const char *FileName )


T_read_image ( Hobject *Image, const Htuple FileName )

Read an image with different file formats.


The operator read_image reads the indicated image files from the background storage and generates the image.
One or more file names can be passed in FileName. If more than one file name is passed, an image object tuple
with the corresponding number of image objects is returned.
For the image formats PNG and JPEG-2000, binary alpha channels are interpreted as regions. Otherwise the region
of the generated image object (= all pixels of the matrix) is chosen maximal.
All image files written by the operator write_image (format ’ima’) have the extension ’.ima’. A description
file can be available for every image in HALCON format (same file name with extension ’.exp’). The type of the
pixel data (byte, int4, real) can also be taken from the description file. If this information is not available the type
byte is used as presetting.
Besides the HALCON format, TIFF, GIF, BMP, JPEG, JPEG-2000, PNG, PCX, SUN-Raster, PGM, PPM, PBM
and XWD files can also be read. The gray values of PBM images are set to the values 0 and 255. The file formats
are either recognized by the extension (if indicated) or because of the internal structure of the files. If the extension
is indicated the image can be found faster. If no extension is indicated, files with extension are preferred to files
without extension. In case of PGM, PPM and PBM the corresponding extension (e.g. ’pgm’) or the general value
’pnm’ can be used. In case of TIFF ’tiff’ and ’tif’ are accepted. In case of JPEG-2000 only ’jp2’ is accepted. In
case of color images an image with three color channels (matrices) is created, the red channel being stored in the
first, the green channel in the second, and the blue channel in the third component (channel number).
Image files are searched for in the current directory (determined by the environment variable) and in the image
directory of HALCON. The image directory of HALCON is preset to ’.’ and ’/usr/local/halcon/images’ in a
UNIX environment and can be set via the operator set_system. More than one image directory can be indicated;
this is done by separating the individual directories by a colon.
Furthermore, the search path can be set via the environment variable HALCONIMAGES (same structure as
’image_dir’). Example:

setenv HALCONIMAGES "/usr/images:/usr/local/halcon/images"

HALCON also searches for images in the subdirectory ’images’ (images for the program examples). The
environment variable HALCONROOT is used for the HALCON directory.
Attention
If CMYK or YCCK JPEG files are read, HALCON assumes that these files follow the Adobe Photoshop convention
that the CMYK channels are stored inverted, i.e., 0 represents 100% ink coverage, rather than 0% ink as one would
expect. The images are converted to RGB images using this convention. If the JPEG file does not follow this


convention, but stores the CMYK channels in the usual fashion, invert_image must be called after reading
the image.
If PNG images that contain an alpha channel are read, the alpha channel is returned as the second or fourth channel
of the output image, unless the alpha channel contains exactly two different gray values, in which case a one or
three channel image with a reduced domain is returned, in which the points in the domain correspond to the points
with the higher gray value in the alpha channel.
Parameter

. Image (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real
Read image.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Name of the image to be read.
Default Value : "fabrik"
Suggested values : FileName ∈ {"monkey", "fabrik", "mreut"}
Example

/* Reading an image: */
Hobject Image, Images;
Htuple  Files;

read_image(&Image,"mreut") ;

/* Reading 3 images into an image object tuple: */
create_tuple(&Files,3) ;
set_s(Files,"ic_0",0) ;
set_s(Files,"ic_1",1) ;
set_s(Files,"ic_2",2) ;
T_read_image(&Images,Files) ;

/* Setting the search path for images to ’/mnt/images’ and ’/home/images’: */
set_system("image_dir","/mnt/images:/home/images") ;

Result
If the parameters are correct the operator read_image returns the value H_MSG_TRUE. Otherwise an exception
handling is raised.
Parallelization Information
read_image is reentrant and processed without parallelization.
Possible Successors
disp_image, threshold, regiongrowing, count_channels, decompose3,
class_ndim_norm, gauss_image, fill_interlace, zoom_image_size,
zoom_image_factor, crop_part, write_image, rgb1_to_gray
Alternatives
read_sequence
See also
set_system, write_image
Module
Foundation


read_sequence ( Hobject *Image, Hlong HeaderSize, Hlong SourceWidth,


Hlong SourceHeight, Hlong StartRow, Hlong StartColumn,
Hlong DestWidth, Hlong DestHeight, const char *PixelType,
const char *BitOrder, const char *ByteOrder, const char *Pad,
Hlong Index, const char *FileName )

T_read_sequence ( Hobject *Image, const Htuple HeaderSize,


const Htuple SourceWidth, const Htuple SourceHeight,
const Htuple StartRow, const Htuple StartColumn,
const Htuple DestWidth, const Htuple DestHeight,
const Htuple PixelType, const Htuple BitOrder, const Htuple ByteOrder,
const Htuple Pad, const Htuple Index, const Htuple FileName )

Read images.
The operator read_sequence reads unformatted image data from a file and returns a “suitable” image. The
image data must be stored consecutively pixel by pixel and line by line.
Any file header (of length HeaderSize bytes) is skipped. The parameters SourceWidth and SourceHeight
indicate the size of the image stored in the file. DestWidth and DestHeight indicate the size of the output
image. In the simplest case these parameters are the same. However, sub-areas can also be read. The upper left
corner of the desired image area can be specified via StartRow and StartColumn.
The pixel types ’bit’, ’byte’, ’short’ (16 bits, unsigned), ’signed_short’ (16 bits, signed), ’long’ (32 bits, signed),
’swapped_long’ (32 bits, with swapped segments), and ’real’ (32 bit floating point numbers) are supported.
Furthermore, the operator read_sequence enables the extraction of the components of an RGB image, if a
triple of three bytes (in the sequence “red”, “green”, “blue”) was stored in the image file. For the red component
the pixel type ’r_byte’ must be chosen, and correspondingly ’g_byte’ or ’b_byte’ for the green and blue
components, respectively.
’MSBFirst’ (most significant bit first) or the inversion thereof (’LSBFirst’) can be chosen for the bit order
(BitOrder). The byte orders (ByteOrder) ’MSBFirst’ (most significant byte first) or ’LSBFirst’, respectively,
are processed analogously. Finally an alignment (Pad) can be set at the end of the line: ’byte’, ’short’ or ’long’. If
a whole image sequence is stored in the file a single image (beginning at Index 1) can be chosen via the parameter
Index.
Image files are searched for in the current directory (determined by the environment variable) and in the image
directory of HALCON. The image directory of HALCON is preset to ’.’ and ’/usr/local/halcon/images’ in a
UNIX environment and can be set via the operator set_system. More than one image directory can be indicated;
this is done by separating the individual directories by a colon.
Furthermore, the search path can be set via the environment variable HALCONIMAGES (same structure as
’image_dir’). Example:

setenv HALCONIMAGES "/usr/images:/usr/local/halcon/images"

HALCON also searches for images in the subdirectory ’images’ (images for the program examples). The
environment variable HALCONROOT is used for the HALCON directory.
Attention
If files of pixel type ’real’ are read and the byte order is chosen incorrectly (i.e., differently from the byte order in
which the data is stored in the file), program errors and even crashes due to floating point exceptions may result.
Parameter

. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2 / uint2 / int4


Image read.
. HeaderSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of bytes for file header.
Default Value : 0
Typical range of values : 0 ≤ HeaderSize
. SourceWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Number of image columns of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceWidth


. SourceHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong


Number of image lines of the stored image.
Default Value : 512
Typical range of values : 1 ≤ SourceHeight
. StartRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Starting point of image area (line).
Default Value : 0
Typical range of values : 0 ≤ StartRow
Restriction : StartRow < SourceHeight
. StartColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Starting point of image area (column).
Default Value : 0
Typical range of values : 0 ≤ StartColumn
Restriction : StartColumn < SourceWidth
. DestWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Number of image columns of output image.
Default Value : 512
Typical range of values : 1 ≤ DestWidth
Restriction : DestWidth ≤ SourceWidth
. DestHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Number of image lines of output image.
Default Value : 512
Typical range of values : 1 ≤ DestHeight
Restriction : DestHeight ≤ SourceHeight
. PixelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of pixel values.
Default Value : "byte"
List of values : PixelType ∈ {"bit", "byte", "r_byte", "g_byte", "b_byte", "short", "signed_short", "long",
"swapped_long", "real"}
. BitOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Sequence of bits within one byte.
Default Value : "MSBFirst"
List of values : BitOrder ∈ {"MSBFirst", "LSBFirst"}
. ByteOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Sequence of bytes within one ’short’ unit.
Default Value : "MSBFirst"
List of values : ByteOrder ∈ {"MSBFirst", "LSBFirst"}
. Pad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Data units within one image line (alignment).
Default Value : "byte"
List of values : Pad ∈ {"byte", "short", "long"}
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Index of the image to be read from the file.
Default Value : 1
Typical range of values : 1 ≤ Index (lin)
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of input file.
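Example

A minimal sketch, assuming a raw file ’image.raw’ that contains a single 512 x 512 image with 8 bit gray values
and no file header:

Hobject Image;

/* Read the complete image as the first (and only) image in the file: */
read_sequence(&Image,0,512,512,0,0,512,512,
              "byte","MSBFirst","MSBFirst","byte",1,"image.raw") ;
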
Result
If the parameter values are correct the operator read_sequence returns the value H_MSG_TRUE. Otherwise
an exception handling is raised.
Parallelization Information
read_sequence is reentrant and processed without parallelization.
Possible Successors
disp_image, count_channels, decompose3, write_image, rgb1_to_gray
Alternatives
read_image


See also
read_image
Module
Foundation

write_image ( const Hobject Image, const char *Format, Hlong FillColor,


const char *FileName )

T_write_image ( const Hobject Image, const Htuple Format,


const Htuple FillColor, const Htuple FileName )

Write images in graphic formats.


The operator write_image writes the indicated image (Image) to a file in one of several image formats. Pixels
outside the region receive the color defined by FillColor. For gray value images a value between 0 (black) and
255 (white) must be passed; for RGB color images the RGB values can be passed directly as a hexadecimal value,
e.g., 0xffff00 for a yellow background (red=255, green=255, blue=0).
The following formats are currently supported:

’tiff’ TIFF format, 3-channel images (RGB): 3 samples per pixel; other images (gray value images): 1 sample per
pixel, 8 bits per sample, uncompressed, 72 dpi; file extension: *.tif
’bmp’ Windows-BMP format, 3-channel-images (RGB): 3 bytes per pixel; other images (gray value image): 1
byte per pixel; file extension: *.bmp
’jpeg’ JPEG format, with loss of information; together with the format string the quality value determining the
compression rate can be provided, e.g., ’jpeg 30’. Attention: images that are stored for later processing should
not be compressed with the JPEG format because of the loss of information.
’jp2’ : JPEG-2000 format (lossless and lossy compression); together with the format string the quality value
determining the compression rate can be provided (e.g., ’jp2 40’). This value corresponds to the ratio of the
size of the compressed image and the size of the uncompressed image (in percent). Since lossless JPEG-
2000 compression already reduces the file size significantly, only smaller values (typically smaller than 50)
influence the file size. If no value is provided for the compression (and only then), the image is compressed
lossless. The image can contain an arbitrary number of channels. Possible types are byte, cyclic, direction,
int1, uint2, int2, and int4. In the case of int4 it is only possible to store images with at most 24 bits of
precision (otherwise an exception handling is raised). If an image with a reduced domain is written, the
region is stored as 1 bit alpha channel.
’png’ PNG format (lossless compression); together with the format string, a compression level between 0 and 9 can
be specified, where 0 corresponds to no compression and 9 to the best possible compression. Alternatively,
the compression can be selected with the following strings: ’best’, ’fastest’, and ’none’. Hence, examples for
correct parameters are ’png’, ’png 7’, and ’png none’. Images of type byte and uint2 can be stored in PNG
files. If an image with a reduced domain is written, the region is stored as the alpha channel, where the points
within the domain are stored as the maximum gray value of the image type and the points outside the domain
are stored as the gray value 0. If an image with a full domain is written, no alpha channel is stored.
’ima’ The data is written binary line by line (without header or carriage return). The size of the image and the
pixel type are stored in the description file ’FileName.exp’. All HALCON pixel types except complex
and vector_field can be written. Only the first channel of the image is stored in the file. The file extension
is: ’.ima’

Parameter

. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Output image(s).
. Format (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Graphic format.
Default Value : "tiff"
List of values : Format ∈ {"tiff", "bmp", "jpeg", "ima", "jpeg 100", "jpeg 80", "jpeg 60", "jpeg 40", "jpeg
20", "jp2", "jp2 50", "jp2 40", "jp2 30", "jp2 20", "png", "png best", "png fastest", "png none"}


. FillColor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong


Fill gray value for pixels not belonging to the image region.
Default Value : 0
Suggested values : FillColor ∈ {-1, 0, 255, "0xff0000", "0xff00"}
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write(-array) ; (Htuple .) const char *
Name of graphic file.
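Example

A minimal sketch; the image name ’fabrik’ and the output path ’/tmp/fabrik’ are only placeholders:

Hobject Image;

read_image(&Image,"fabrik") ;
/* Store the image as an uncompressed TIFF file ... */
write_image(Image,"tiff",0,"/tmp/fabrik") ;
/* ... and as a JPEG file with quality 80: */
write_image(Image,"jpeg 80",0,"/tmp/fabrik") ;
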
Result
If the parameter values are correct the operator write_image returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
write_image is reentrant and processed without parallelization.
Possible Predecessors
open_window, read_image
Module
Foundation

2.2 Misc
delete_file ( const char *FileName )
T_delete_file ( const Htuple FileName )

Delete a file.
delete_file deletes the file given by FileName.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; const char *
Name of the file to be deleted.
Result
delete_file returns the value H_MSG_TRUE if the file exists and could be deleted. Otherwise, an exception
is raised.
Parallelization Information
delete_file is reentrant and processed without parallelization.
Module
Foundation

file_exists ( const char *FileName, Hlong *FileExists )


T_file_exists ( const Htuple FileName, Htuple *FileExists )

Check whether file exists.


The operator file_exists checks whether the indicated file already exists. If this is the case, the parameter
FileExists is set to TRUE, otherwise to FALSE.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; const char *
Name of file to be checked.
Default Value : "/bin/cc"
. FileExists (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Boolean value: TRUE if the file exists, FALSE otherwise.
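Example

A minimal sketch; the file name ’/tmp/old_result.tif’ is only a placeholder:

Hlong Exists;

file_exists("/tmp/old_result.tif",&Exists) ;
if (Exists)
  delete_file("/tmp/old_result.tif") ;
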
Result
If the parameters are correct the operator file_exists returns the value H_MSG_TRUE. Otherwise, an ex-
ception is raised.


Parallelization Information
file_exists is reentrant and processed without parallelization.
Possible Successors
open_file
Alternatives
open_file
Module
Foundation

T_list_files ( const Htuple Directory, const Htuple Options,


Htuple *Files )

List all files in a directory.


list_files returns all files in the directory given by Directory in the parameter Files. The current
directory can be specified with the empty string ’’ or with ’.’. The parameter Options can be used to specify different processing
options by passing a tuple of values. If Options contains ’files’ only the files present in Directory are
returned. If ’directories’ is passed, only the directories present in Directory are returned. Directories are
marked by a trailing ’\’ (Windows) or a trailing ’/’ (Unix). If files as well as directories should be returned,
[’files’,’directories’] must be passed. If neither ’files’ nor ’directories’ is passed, list_files returns an empty
tuple. By passing ’recursive’, it can be specified that the directory tree should be searched recursively by examining
all sub-directories. On Unix systems, ’follow_links’ can be used to specify that symbolic links to files or directories
should be followed. In the default setting, symbolic links are not dereferenced, and hence are not searched if they
point to directories, and not returned if they point to files. For the recursive search, a maximum search depth can be
specified with ’max_depth <d>’, where ’<d>’ is a number that specifies the maximum depth. Hence, ’max_depth
2’ specifies that Directory and all immediate sub-directories should be searched. If symbolic links should be
followed it might happen that the search does not terminate if the symbolic links lead to a cycle in the directory
structure. Because of this, at most 1000000 files (and directories) are returned in Files. By specifying a different
number with ’max_files <d>’, this value can be reduced.
Parameter

. Directory (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.dir ; Htuple . const char *


Name of directory to be listed.
. Options (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Processing options.
Default Value : "files"
Suggested values : Options ∈ {"files", "directories", "recursive", "follow_links", "max_depth 5",
"max_files 1000"}
. Files (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Found files (and directories).
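Example

A minimal sketch; the directory ’/tmp/images’ is only a placeholder. Since list_files is available only in the
tuple version, the parameters are passed as tuples:

Htuple Directory, Options, Files;

create_tuple(&Directory,1) ;
set_s(Directory,"/tmp/images",0) ;
create_tuple(&Options,2) ;
set_s(Options,"files",0) ;
set_s(Options,"recursive",1) ;
/* Files receives the names of all files found in the directory tree */
T_list_files(Directory,Options,&Files) ;
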
Result
list_files returns the value H_MSG_TRUE if the directory exists and could be read. Otherwise, an exception
is raised.
Parallelization Information
list_files is reentrant and processed without parallelization.
Possible Successors
tuple_regexp_select
Module
Foundation

T_read_world_file ( const Htuple FileName,


Htuple *WorldTransformation )

Read the geocoding from an ARC/INFO world file.


read_world_file reads a geocoding from an ARC/INFO world file with the file name FileName
and returns it as a homogeneous 2D transformation matrix in WorldTransformation. To find the file
FileName, all directories contained in the HALCON system variable ’image_dir’ (usually this is the con-
tent of the environment variable HALCONIMAGES) are searched (see read_image). This transforma-
tion matrix can be used to transform XLD contours to the world coordinate system before writing them
with write_contour_xld_arc_info. If the matrix WorldTransformation is inverted by call-
ing hom_mat2d_invert, the resulting matrix can be used to transform contours that have been read with
read_contour_xld_arc_info to the image coordinate system.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Name of the ARC/INFO world file.
. WorldTransformation (output_control) . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Transformation matrix from image to world coordinates.
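Example

A minimal sketch; the world file ’image.tfw’ is only a placeholder, and the parameter order of
hom_mat2d_invert is assumed from its use described above:

Htuple FileName, WorldTransformation, ImageTransformation;

create_tuple(&FileName,1) ;
set_s(FileName,"image.tfw",0) ;
/* Read the image-to-world transformation ... */
T_read_world_file(FileName,&WorldTransformation) ;
/* ... and invert it to obtain the world-to-image transformation: */
T_hom_mat2d_invert(WorldTransformation,&ImageTransformation) ;
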
Result
If the parameters are correct and the world file could be read, the operator read_world_file returns the value
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_world_file is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_contour_xld, affine_trans_polygon_xld
See also
write_contour_xld_arc_info, read_contour_xld_arc_info,
write_polygon_xld_arc_info, read_polygon_xld_arc_info
Module
Foundation

2.3 Region

read_region ( Hobject *Region, const char *FileName )


T_read_region ( Hobject *Region, const Htuple FileName )

Read binary images or HALCON regions.


The operator read_region reads regions from a binary file. The data is stored in packed form.
Tiff: Binary Tiff images with extension ’tiff’ or ’tif’. The result is always one region. The color black is used as
foreground.
BMP: Binary Windows bitmap images with extension ’bmp’. The result is always one region. The color black is
used as foreground.
HALCON regions: File format of HALCON for regions. Several regions can be stored in one file and read
simultaneously via the operators write_region and read_region. All region files have the extension
’.reg’, which does not have to be specified when reading or writing the file.
A search path (’image_dir’) can be defined analogous to the operator read_image.
Attention
The clipping based on the current image format is set via the operator set_system
(’clip_region’,<’true’/’false’>). Consequently, if no image of sufficient size has been created
before the call to read_region, set_system(’clip_region’,’false’) should be called before
calling read_region to ensure that the region is not clipped.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Read region.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the region to be read.


Example

/* Reading of regions and giving them gray values. */


read_image(&Img,"bild_test5") ;
read_region(&Regs,"reg_test5") ;
reduce_domain(Img,Regs,&Res) ;

Result
If the parameter values are correct the operator read_region returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
read_region is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
reduce_domain, disp_region
See also
write_region, read_image
Module
Foundation

write_region ( const Hobject Region, const char *FileName )


T_write_region ( const Hobject Region, const Htuple FileName )

Write regions on file.


The operator write_region writes the input regions (in runlength encoding) to a binary file. The data is
stored in packed form. The output data can be read via the operator read_region. If no extension has been
specified in FileName, the extension ’.reg’ is appended to FileName.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be written to the file.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of region file.
Default Value : "region.reg"
Example

regiongrowing(Img,&Segmente,3,3,5,10) ;
write_region(Segmente,"result1") ;

Result
If the parameter values are correct the operator write_region returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
write_region is reentrant and processed without parallelization.
Possible Predecessors
open_window, read_image, read_region, threshold, regiongrowing
See also
read_region
Module
Foundation


2.4 Text
close_all_files ( )
T_close_all_files ( )

Close all open files.


close_all_files closes all open files.
Attention
close_all_files exists solely for the purpose of implementing the “reset program” functionality in HDe-
velop. close_all_files must not be used in any application.
Result
If it is possible to close the files the operator close_all_files returns the value H_MSG_TRUE. Otherwise
an exception handling is raised.
Parallelization Information
close_all_files is processed completely exclusively without parallelization.
Alternatives
close_file
Module
Foundation

close_file ( Hlong FileHandle )


T_close_file ( const Htuple FileHandle )

Close a text file.


The operator close_file closes a file which was opened via the operator open_file.
Parameter

. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong


File handle.
Example

open_file("/tmp/data.txt","input",&FileHandle) ;
/* ... */
close_file(FileHandle) ;

Result
If the file handle is correct close_file returns the value H_MSG_TRUE. Otherwise an exception handling is
raised.
Parallelization Information
close_file is processed completely exclusively without parallelization.
Possible Predecessors
open_file
See also
open_file
Module
Foundation


fnew_line ( Hlong FileHandle )


T_fnew_line ( const Htuple FileHandle )

Create a line feed.


The operator fnew_line writes a line feed to the output file. At the same time the output buffer is flushed.
Parameter

. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong


File handle.
Example

fwrite_string(FileHandle,"Good Morning") ;
fnew_line(FileHandle) ;

Result
If an output file is open and it can be written to the file the operator fnew_line returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
fnew_line is reentrant and processed without parallelization.
Possible Predecessors
fwrite_string
See also
fwrite_string
Module
Foundation

fread_char ( Hlong FileHandle, char *Char )


T_fread_char ( const Htuple FileHandle, Htuple *Char )

Read a character from a text file.


The operator fread_char reads a character from the current input file. If no character can be read because the
end of the file is reached, fread_char returns the character sequence ’eof’. At the end of a line the value ’nl’
is returned.
Parameter

. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong


File handle.
. Char (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Read character or control string (’nl’,’eof’).
Example

char Char[128];   /* buffer for the character or control string (size assumed) */
do {
  fread_char(FileHandle,Char) ;
  /* echo every character; convert the control string ’nl’ into a line feed */
  if (!strcmp(Char,"nl")) fnew_line(FileHandle) ;
  else if (strcmp(Char,"eof")) fwrite_string(FileHandle,Char) ;
} while (strcmp(Char,"eof")) ;

Result
If an input file is open the operator fread_char returns H_MSG_TRUE. Otherwise an exception handling is raised.


Parallelization Information
fread_char is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_string, read_string, fread_line
See also
open_file, close_file, fread_string, fread_line
Module
Foundation

fread_line ( Hlong FileHandle, char *OutLine, Hlong *IsEOF )


T_fread_line ( const Htuple FileHandle, Htuple *OutLine, Htuple *IsEOF )

Read a line from a text file.


The operator fread_line reads a line from the current input file (including the newline character). If the end of
the file is reached, IsEOF returns the value 1, otherwise 0.
Attention
The maximum string length is 1024 characters (including the terminating null character).
Parameter

. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong


File handle.
. OutLine (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Read line.
. IsEOF (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Reached end of file.
Example

char  Line[1024];  /* maximum line length, see above */
Hlong IsEOF;
do {
  fread_line(FileHandle,Line,&IsEOF) ;
} while (IsEOF == 0) ;

Result
If the file is open and a suitable line is read fread_line returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
fread_line is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, fread_string
See also
open_file, close_file, fread_char, fread_string
Module
Foundation


fread_string ( Hlong FileHandle, char *OutString, Hlong *IsEOF )


T_fread_string ( const Htuple FileHandle, Htuple *OutString,
Htuple *IsEOF )

Read strings from a text file.


The operator fread_string reads a string from the current input file. A string begins with the first representable
character: letters, numbers, and special characters (except blanks). A string ends when a blank or a newline is
reached. Several successive newlines are ignored. If the end of the file is reached, IsEOF returns the value 1,
otherwise 0.
Parameter
. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong
File handle.
. OutString (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Read character sequence.
. IsEOF (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Reached end of file.
Example

char  String[1024];
Hlong IsEOF;

fwrite_string(FileHandle,"Please enter text and return: ..") ;
fread_string(FileHandle,String,&IsEOF) ;
fwrite_string(FileHandle,"here it is again: ") ;
fwrite_string(FileHandle,String) ;
fnew_line(FileHandle) ;

Result
If a file is open and a suitable string is read fread_string returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
fread_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, read_string, fread_line
See also
open_file, close_file, fread_char, fread_line
Module
Foundation

fwrite_string ( Hlong FileHandle, const char *String )


T_fwrite_string ( const Htuple FileHandle, const Htuple String )

Write values in a text file.


The operator fwrite_string writes a string or numbers to the output file, which must have been opened with
the operator open_file. The call set_system(’flush_file’, <boolean-value>) determines whether
the output characters are written directly to the output medium. If the value ’flush_file’ is set to ’false’, the
characters (especially in case of screen output) show up only after the operator fnew_line is called.
Strings as well as integers and floating point numbers can be used as arguments. If more than one value is passed,
the values are written consecutively without separating blanks.


Parameter
. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; (Htuple .) Hlong
File handle.
. String (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong / double
Values to be put out on the text file.
Default Value : "hallo"
Example

fwrite_string(FileHandle,"text with numbers: ") ;
fwrite_string(FileHandle,"5") ;
fwrite_string(FileHandle," and ") ;
fwrite_string(FileHandle,"1.0") ;
/* results in the following output: */
/* ’text with numbers: 5 and 1.0’ */

/* Tuple version */
int i;
double d;
Htuple Tuple ;

create_tuple(&Tuple,4) ;
i = 5 ;
d = 10.0 ;
set_s(Tuple,"text with numbers: ",0) ;
set_i(Tuple,i,1) ;
set_s(Tuple," and ",2) ;
set_d(Tuple,d,3) ;
T_fwrite_string(FileHandle,Tuple) ;

Result
If the writing procedure was carried out successfully the operator fwrite_string returns the value
H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
fwrite_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
write_string
See also
open_file, close_file, set_system
Module
Foundation

open_file ( const char *FileName, const char *FileType,


Hlong *FileHandle )

T_open_file ( const Htuple FileName, const Htuple FileType,


Htuple *FileHandle )

Open text file.


The operator open_file opens a file. FileType determines whether this file is an input file (’input’) or an
output file (’output’ or ’append’): open_file creates files which can be accessed either by reading (’input’) or
by writing (’output’ or ’append’). For terminal input and output the file names ’standard’ (’input’ and ’output’)
and ’error’ (only ’output’) are reserved.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; const char *


Name of file to be opened.
Default Value : "standard"
Suggested values : FileName ∈ {"standard", "error", "/tmp/dat.dat"}
. FileType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of file.
Default Value : "output"
List of values : FileType ∈ {"input", "output", "append"}
. FileHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; Hlong *
File handle.
Example

/* Creating an output file with the name ’/tmp/log.txt’ and writing */


/* of one string: */
open_file("/tmp/log.txt","output",&FileHandle) ;
fwrite_string(FileHandle,"these are the first and last lines") ;
fnew_line(FileHandle) ;
close_file(FileHandle);

Result
If the parameters are correct the operator open_file returns the value H_MSG_TRUE. Otherwise an exception
handling is raised.
Parallelization Information
open_file is processed completely exclusively without parallelization.
Possible Successors
fwrite_string, fread_char, fread_string, fread_line, close_file
See also
close_file, fwrite_string, fread_char, fread_string, fread_line
Module
Foundation

2.5 Tuple

read_tuple ( const char *FileName, double *Tuple )


T_read_tuple ( const Htuple FileName, Htuple *Tuple )

Read a tuple from a file.


The operator read_tuple reads the contents of FileName and converts it into Tuple. The file has to be
generated by write_tuple.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *


Name of the file to be read.
. Tuple (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double * / Hlong * / char *
Tuple with any kind of data.
Result
If the parameters are correct the operator read_tuple returns the value H_MSG_TRUE. Otherwise an exception
handling is raised.


Parallelization Information
read_tuple is reentrant and processed without parallelization.
Alternatives
fwrite_string
See also
write_tuple, gnuplot_plot_ctrl, write_image, write_region, open_file
Module
Foundation

write_tuple ( double Tuple, const char *FileName )


T_write_tuple ( const Htuple Tuple, const Htuple FileName )

Write a tuple to a file.


The operator write_tuple writes the contents of Tuple to a file. The data is written in an ASCII format.
Therefore, the file can be exchanged between different architectures. There is no specific extension for this kind of
file.
Parameter

. Tuple (input_control) . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong / const char *


Tuple with any kind of data.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; (Htuple .) const char *
Name of the file to be written.
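Example

A minimal sketch; the file name ’/tmp/data.tup’ is only a placeholder:

Htuple Tuple, FileName, TupleRead;

/* Build a tuple with mixed data and write it to a file: */
create_tuple(&Tuple,3) ;
set_i(Tuple,42,0) ;
set_d(Tuple,3.14,1) ;
set_s(Tuple,"some text",2) ;
create_tuple(&FileName,1) ;
set_s(FileName,"/tmp/data.tup",0) ;
T_write_tuple(Tuple,FileName) ;
/* Read the tuple back: */
T_read_tuple(FileName,&TupleRead) ;
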
Result
If the parameters are correct the operator write_tuple returns the value H_MSG_TRUE. Otherwise an excep-
tion handling is raised.
Parallelization Information
write_tuple is reentrant and processed without parallelization.
Alternatives
fwrite_string
See also
read_tuple, write_image, write_region, open_file
Module
Foundation

2.6 XLD
read_contour_xld_arc_info ( Hobject *Contours, const char *FileName )
T_read_contour_xld_arc_info ( Hobject *Contours,
const Htuple FileName )

Read XLD contours from a file in ARC/INFO generate format.


read_contour_xld_arc_info reads the lines stored in ARC/INFO generate format in the file FileName,
and returns them as XLD contours in Contours. To find the file FileName, all directories contained in the
HALCON system variable ’image_dir’ (usually this is the content of the environment variable HALCONIMAGES)
are searched (see read_image). The returned contours are in world coordinates. They can be transformed to
the image coordinate system with the operator affine_trans_contour_xld. The necessary transformation
matrix can be generated by using read_world_file to read the transformation matrix from image to world
coordinates, and inverting this matrix by calling hom_mat2d_invert.


Parameter

. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *


Read XLD contours.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the ARC/INFO file.
Example (Syntax: HDevelop)

/* Read the transformation and invert it */


read_world_file (’image.tfw’, WorldTransformation)
hom_mat2d_invert (WorldTransformation, ImageTransformation)
/* Read the image */
read_image (Image, ’image.tif’)
/* Read the line data */
read_contour_xld_arc_info (LinesWorld, ’lines.gen’)
/* Transform the line data to image coordinates */
affine_trans_contour_xld (LinesWorld, Lines, ImageTransformation)

Result
If the parameters are correct and the file could be read, the operator read_contour_xld_arc_info returns
the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_contour_xld
See also
read_world_file, write_contour_xld_arc_info, read_polygon_xld_arc_info
Module
Foundation

read_contour_xld_dxf ( Hobject *Contours, const char *FileName,


const char *GenParamNames, double GenParamValues, char *DxfStatus )

T_read_contour_xld_dxf ( Hobject *Contours, const Htuple FileName,


const Htuple GenParamNames, const Htuple GenParamValues,
Htuple *DxfStatus )

Read XLD contours from a DXF file.


read_contour_xld_dxf reads the contents of the DXF file FileName (DXF version AC1009, AutoCAD
Release 12) and converts them to the XLD contours Contours. If no absolute path is given in FileName the
DXF file is searched in the current directory of the HALCON process.
The output parameter DxfStatus contains information about the number of contours that were read and, if
necessary, warnings that parts of the DXF file could not be interpreted.
The operator read_contour_xld_dxf supports the following DXF entities:

• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC


• ELLIPSE
• SPLINE
• BLOCK
• INSERT

The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
contours Contours.
If the file has been created with the operator write_contour_xld_dxf, all attributes and global attributes that
were originally defined for the XLD contours are read. This means that read_contour_xld_dxf supports all
the extended data written by the operator write_contour_xld_dxf. The reading of these attributes can be
switched off by setting the generic parameter ’read_attributes’ to ’false’. Generic parameters are set by specifying
the parameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD contours. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). The parameter ’min_num_points’ defines the mini-
mum number of sampling points that are used for the approximation. Note that the parameter ’min_num_points’
always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if ’min_num_points’ is
set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-circle is approximated
by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum deviation of the XLD
contour from the ideal circle or ellipse, respectively (unit: pixel). For the determination of the accuracy of the
approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation is
used.
Internally, the following default values are used for the generic parameters:

’read_attributes’ = ’true’
’min_num_points’ = 20
’max_approx_error’ = 0.25

To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Parameter

. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *


Read XLD contours.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *
Name of the DXF file.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"read_attributes", "min_num_points", "max_approx_error"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) double / Hlong / const char *
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {"true", "false", 0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
. DxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Status information.
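Example

A minimal sketch; the file name ’contours.dxf’ is only a placeholder and the size of the status buffer is chosen
arbitrarily here:

Hobject Contours;
char    DxfStatus[1024];

/* Read the DXF file with a finer approximation of circles and arcs */
/* (maximum deviation of 0.1 pixels from the ideal curve):          */
read_contour_xld_dxf(&Contours,"contours.dxf","max_approx_error",0.1,
                     DxfStatus) ;
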
Result
If the parameters are correct and the file could be read the operator read_contour_xld_dxf returns the value
H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
read_contour_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
write_contour_xld_dxf
See also
write_contour_xld_dxf, read_polygon_xld_dxf, query_contour_attribs_xld,
query_contour_global_attribs_xld, get_contour_attrib_xld,
get_contour_global_attrib_xld
Module
Foundation

read_polygon_xld_arc_info ( Hobject *Polygons, const char *FileName )


T_read_polygon_xld_arc_info ( Hobject *Polygons,
const Htuple FileName )

Read XLD polygons from a file in ARC/INFO generate format.


read_polygon_xld_arc_info reads the lines stored in ARC/INFO generate format in the file FileName,
and returns them as XLD polygons in Polygons. To find the file FileName, all directories contained in the
HALCON system variable ’image_dir’ (usually this is the content of the environment variable HALCONIMAGES)
are searched (see read_image). The returned polygons are in world coordinates. They can be transformed to
the image coordinate system with the operator affine_trans_polygon_xld. The necessary transformation
matrix can be generated by using read_world_file to read the transformation matrix from image to world
coordinates, and inverting this matrix by calling hom_mat2d_invert.
Parameter
. Polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject *
Read XLD polygons.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the ARC/INFO file.
Example (Syntax: HDevelop)

/* Read the transformation and invert it */


read_world_file (’image.tfw’, WorldTransformation)
hom_mat2d_invert (WorldTransformation, ImageTransformation)
/* Read the image */
read_image (Image, ’image.tif’)
/* Read the line data */
read_polygon_xld_arc_info (LinesWorld, ’lines.gen’)
/* Transform the line data to image coordinates */
affine_trans_polygon_xld (LinesWorld, Lines, ImageTransformation)

Result
If the parameters are correct and the file could be read, the operator read_polygon_xld_arc_info returns
the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_polygon_xld
See also
read_world_file, write_polygon_xld_arc_info, read_contour_xld_arc_info
Module
Foundation

read_polygon_xld_dxf ( Hobject *Polygons, const char *FileName,


const char *GenParamNames, double GenParamValues, char *DxfStatus )

T_read_polygon_xld_dxf ( Hobject *Polygons, const Htuple FileName,


const Htuple GenParamNames, const Htuple GenParamValues,
Htuple *DxfStatus )

Read XLD polygons from a DXF file.


read_polygon_xld_dxf reads the contents of the DXF file FileName (DXF version AC1009, AutoCAD
Release 12) and converts them to the XLD polygons Polygons. If no absolute path is given in FileName the
DXF file is searched in the current directory of the HALCON process.
The output parameter DxfStatus contains information about the number of polygons that were read and, if
necessary, warnings that parts of the DXF file could not be interpreted.
The operator read_polygon_xld_dxf supports the following DXF entities:

• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT

The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
polygons Polygons.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD polygons. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). Generic parameters are set by specifying the pa-
rameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues. The parameter
’min_num_points’ defines the minimum number of sampling points that are used for the approximation. Note that
the parameter ’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical
arcs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle,
this semi-circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the
maximum deviation of the XLD polygon from the ideal circle or ellipse, respectively (unit: pixel). For the deter-
mination of the accuracy of the approximation both criteria are evaluated. Then, the criterion that leads to the more
accurate approximation is used.
Internally, the following default values are used for the generic parameters:

’min_num_points’ = 20
’max_approx_error’ = 0.25

To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Note that reading a DXF file with read_polygon_xld_dxf results in exactly the same geometric information
as reading the file with read_contour_xld_dxf. However, the resulting data structure is different.
Parameter
. Polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject *
Read XLD polygons.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *
Name of the DXF file.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"min_num_points", "max_approx_error"}
. GenParamValues (input_control) . . . . . .attribute.value(-array) ; (Htuple .) double / Hlong / const char *
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}


. DxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *


Status information.
Result
If the parameters are correct and the file could be read the operator read_polygon_xld_dxf returns the value
H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
read_polygon_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
write_polygon_xld_dxf
See also
write_polygon_xld_dxf, read_contour_xld_dxf
Module
Foundation

write_contour_xld_arc_info ( const Hobject Contours,


const char *FileName )

T_write_contour_xld_arc_info ( const Hobject Contours,


const Htuple FileName )

Write XLD contours to a file in ARC/INFO generate format.


write_contour_xld_arc_info writes the XLD contours Contours to an ARC/INFO generate format
file with name FileName. If no absolute path is given in FileName, the output file is created in the current
directory of the HALCON process. The contours must have been transformed to the world coordinate system with
affine_trans_contour_xld beforehand. The necessary transformation can be read from an ARC/INFO
world file with read_world_file.
Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
XLD contours to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the ARC/INFO file.
Example (Syntax: HDevelop)

/* Read transformation and image */


read_world_file (’image.tfw’, WorldTransformation)
read_image (Image, ’image.tif’)
/* Segment image */
...
/* Write result */
affine_trans_contour_xld (Contours, ContoursWorld, WorldTransformation)
write_contour_xld_arc_info (ContoursWorld, ’result.gen’)

Result
If the parameters are correct and the file could be written, the operator write_contour_xld_arc_info
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
write_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_contour_xld
See also
read_world_file, read_contour_xld_arc_info, write_polygon_xld_arc_info
Module
Foundation


write_contour_xld_dxf ( const Hobject Contours, const char *FileName )


T_write_contour_xld_dxf ( const Hobject Contours,
const Htuple FileName )

Write XLD contours to a file in DXF format.


write_contour_xld_dxf writes the XLD contours Contours to the file FileName in DXF format. If no
absolute path is given in FileName the output file is created in the current directory of the HALCON process.
Besides the geometry of the Contours, all attributes and global attributes that are defined for the Contours are
written to the file.
write_contour_xld_dxf writes the file according to the DXF version AC1009 (AutoCAD Release 12).
Each contour is stored as a POLYLINE. The attribute values are stored as extended data of each VERTEX of the
POLYLINE. The global attribute values are stored as extended data of the POLYLINE. All attribute names are also
stored as extended data of the POLYLINE.
The operator read_contour_xld_dxf can be used to read the XLD contours together with their attributes.
Other applications that are able to read DXF files only import the contour geometry, but they ignore the attribute
information.
Description of the format of the extended data
Each block of extended data starts with the following DXF group:
1001
HALCON
The attributes are written in the following format as extended data of each VERTEX:

DXF                        Explanation
1000                       Meaning
contour attributes
1002                       Beginning of the value list
{
1070                       Number of attributes (here: 3)
3
1040                       Value of the first attribute
5.00434303
1040                       Value of the second attribute
126.8638916
1040                       Value of the third attribute
4.99164152
1002                       End of the value list
}

The global attributes are written in the following format as extended data of each POLYLINE:


DXF                        Explanation
1000                       Meaning
global contour attributes
1002                       Beginning of the value list
{
1070                       Number of global attributes (here: 5)
5
1040                       Value of the first global attribute
0.77951831
1040                       Value of the second global attribute
0.62637949
1040                       Value of the third global attribute
103.94314575
1040                       Value of the fourth global attribute
0.21434096
1040                       Value of the fifth global attribute
0.21921949
1002                       End of the value list
}

The names of the attributes are written in the following format as extended data of each POLYLINE:

DXF                        Explanation
1000                       Meaning
names of contour attributes
1002                       Beginning of the value list
{
1070                       Number of attribute names (here: 3)
3
1000                       Name of the first attribute
angle
1000                       Name of the second attribute
response
1000                       Name of the third attribute
edge_direction
1002                       End of the value list
}

The names of the global attributes are written in the following format as extended data of each POLYLINE:

DXF                        Explanation
1000                       Meaning
names of global contour attributes
1002                       Beginning of the value list
{
1070                       Number of global attribute names (here: 5)
5
1000                       Name of the first global attribute
regr_norm_row
1000                       Name of the second global attribute
regr_norm_col
1000                       Name of the third global attribute
regr_dist
1000                       Name of the fourth global attribute
regr_mean_dist
1000                       Name of the fifth global attribute
regr_dev_dist
1002                       End of the value list
}


Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
XLD contours to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the DXF file.
Result
If the parameters are correct and the file could be written the operator write_contour_xld_dxf returns the
value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
write_contour_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
edges_sub_pix
See also
read_contour_xld_dxf, write_polygon_xld_dxf, query_contour_attribs_xld,
query_contour_global_attribs_xld, get_contour_attrib_xld,
get_contour_global_attrib_xld
Module
Foundation

write_polygon_xld_arc_info ( const Hobject Polygons,


const char *FileName )

T_write_polygon_xld_arc_info ( const Hobject Polygons,


const Htuple FileName )

Write XLD polygons to a file in ARC/INFO generate format.


write_polygon_xld_arc_info writes the XLD polygons Polygons to an ARC/INFO generate format
file with name FileName. If no absolute path is given in FileName, the output file is created in the current
directory of the HALCON process. The polygons must have been transformed to the world coordinate system with
affine_trans_polygon_xld beforehand. The necessary transformation can be read from an ARC/INFO
world file with read_world_file.
Attention
The XLD contours that are possibly referenced by Polygons are not stored in the ARC/INFO file, since this
is not possible with the ARC/INFO generate file format. Therefore, when the polygons are read again using
read_polygon_xld_arc_info, this information is lost, and no references to contours are generated for the
polygons. Hence, operators that access the contours associated with a polygon, e.g., split_contours_xld
will not work correctly.
Parameter
. Polygons (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject
XLD polygons to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the ARC/INFO file.
Example (Syntax: HDevelop)

/* Read transformation and image */


read_world_file (’image.tfw’, WorldTransformation)
read_image (Image, ’image.tif’)
/* Segment image */
...
/* Write result */
affine_trans_polygon_xld (Polygons, PolygonsWorld, WorldTransformation)
write_polygon_xld_arc_info (PolygonsWorld, ’result.gen’)


Result
If the parameters are correct and the file could be written, the operator write_polygon_xld_arc_info
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
write_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_polygon_xld
See also
read_world_file, read_polygon_xld_arc_info, write_contour_xld_arc_info
Module
Foundation

write_polygon_xld_dxf ( const Hobject Polygons, const char *FileName )


T_write_polygon_xld_dxf ( const Hobject Polygons,
const Htuple FileName )

Write XLD polygons to a file in DXF format.


write_polygon_xld_dxf writes the XLD polygons Polygons to the file FileName in DXF format. If no
absolute path is given in FileName the output file is created in the current directory of the HALCON process.
write_polygon_xld_dxf writes the file according to the DXF version AC1009 (AutoCAD Release 12).
Each polygon is stored as a POLYLINE.
The operator read_polygon_xld_dxf can be used to read the XLD polygon.
Attention
The XLD contours that are possibly referenced by Polygons are not stored in the DXF file. Therefore, when
the polygons are read again using read_polygon_xld_dxf, this information is lost, and no references to
contours are generated for the polygons. Hence, operators that access the contours associated with a polygon, e.g.,
split_contours_xld will not work correctly.
Parameter

. Polygons (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject


XLD polygons to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the DXF file.
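A minimal usage sketch (not from the original manual); the contour extraction and the gen_polygons_xld parameter
values are only illustrative:

Hobject  Image, Edges, Polygons;

read_image(&Image,"fabrik");
edges_sub_pix(Image,&Edges,"canny",1.5,20,40);
/* approximate the contours by polygons */
gen_polygons_xld(Edges,&Polygons,"ramer",2.0);
/* only the polygon geometry is stored; referenced contours are lost */
write_polygon_xld_dxf(Polygons,"polygons.dxf");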
Result
If the parameters are correct and the file could be written the operator write_polygon_xld_dxf returns the
value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
write_polygon_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
gen_polygons_xld
See also
read_polygon_xld_dxf, write_contour_xld_dxf
Module
Foundation



Chapter 3

Filter

3.1 Arithmetic
abs_image ( const Hobject Image, Hobject *ImageAbs )
T_abs_image ( const Hobject Image, Hobject *ImageAbs )

Calculate the absolute value (modulus) of an image.


The operator abs_image calculates the absolute gray values of images of any type and stores the result in
ImageAbs. The power spectrum of complex images is calculated as a ’real’ image. The operator abs_image
generates a logical copy of unsigned images.
Parameter
. Image (input_object) . . . . . . . . . . (multichannel-)image(-array) ; Hobject : int1 / int2 / int4 / real / complex
Image(s) for which the absolute gray values are to be calculated.
. ImageAbs (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int1 / int2 / int4 / real
Result image(s).
Example (Syntax: HDevelop)

convert_image_type (Image, ImageInt2, ’int2’)


texture_laws (ImageInt2, ImageTexture, ’el’, 2, 5)
abs_image (ImageTexture, ImageTexture)

Result
The operator abs_image returns the value H_MSG_TRUE. The behavior in case of empty input (no input
images available) is set via the operator set_system(’no_object_result’,<Result>)
Parallelization Information
abs_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
convert_image_type, power_byte
Module
Foundation

add_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, double Mult, double Add )

T_add_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, const Htuple Mult, const Htuple Add )

Add two images.


The operator add_image adds two images. The gray values (g1, g2) of the input images (Image1 and Image2)
are transformed as follows:

g' := (g1 + g2) * Mult + Add

If an overflow or an underflow occurs, the values are clipped. For int2 images with Mult equal to 1 and Add equal
to 0, however, the underflow and overflow check is skipped to reduce the runtime, so no clipping takes place. The
resulting image is stored in ImageResult.
It is possible to add byte images with int2, uint2 or int4 images and to add int4 to int2 or uint2 images. In this case
the result will be of type int2 or int4 respectively.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.

Please note that the runtime of the operator varies with different control parameters. For frequently used combina-
tions special optimizations are used. Additionally, for byte, int2, uint2, and int4 images special optimizations are
implemented that use SIMD technology. The actual application of these special optimizations is controlled by the
system parameter ’mmx_enable’ (see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction
set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of add_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2
/ int4 / real / direction / cyclic / com-
plex
Result image(s) by the addition.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray value adaption.
Default Value : 0.5
Suggested values : Mult ∈ {0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 5.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray value range adaption.
Default Value : 0
Suggested values : Add ∈ {0, 64, 128, 255, 512}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example

read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
add_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);


Result
The operator add_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
add_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
sub_image, mult_image
See also
sub_image, mult_image
Module
Foundation

div_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, double Mult, double Add )

T_div_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, const Htuple Mult, const Htuple Add )

Divide two images.


The operator div_image divides two images. The gray values (g1, g2) of the input images (Image1 and
Image2) are transformed as follows:

g' := (g1 / g2) * Mult + Add

If an overflow or an underflow occurs the values are clipped.


Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2
/ int4 / real / complex
Result image(s) by the division.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray range adaption.
Default Value : 255
Suggested values : Mult ∈ {0.1, 0.2, 0.5, 1.0, 2.0, 3.0, 10, 100, 500, 1000}
Typical range of values : -1000 ≤ Mult ≤ 1000
Minimum Increment : 0.001
Recommended Increment : 1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray range adaption.
Default Value : 0
Suggested values : Add ∈ {0.0, 128.0, 256.0, 1025}
Typical range of values : -1000 ≤ Add ≤ 1000
Minimum Increment : 0.01
Recommended Increment : 1.0
Example


read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
div_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);

Result
The operator div_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
div_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, mult_image
See also
add_image, sub_image, mult_image
Module
Foundation

invert_image ( const Hobject Image, Hobject *ImageInvert )


T_invert_image ( const Hobject Image, Hobject *ImageInvert )

Invert an image.
The operator invert_image inverts the gray values of an image. For images of the ’byte’ and ’cyclic’ type the
result is calculated as:

g' = 255 - g

Images of the ’direction’ type are transformed by

g' = (g + 90) modulo 180

In the case of signed types the values are negated. The resulting image has the same pixel type as the input image.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image(s).
. ImageInvert (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4 / real
Image(s) with inverted gray values.
Example

read_image(&Orig,"fabrik");
invert_image(Orig,&Invert);
disp_image(Invert,WindowHandle);

Parallelization Information
invert_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
watersheds


Alternatives
scale_image
See also
scale_image, add_image, sub_image
Module
Foundation

max_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageMax )

T_max_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageMax )

Calculate the maximum of two images pixel by pixel.


max_image calculates the maximum of the images Image1 and Image2 (pixel by pixel). The result is stored in
the image ImageMax. The resulting image has the same pixel type as the input image. If several (pairs of) images
are processed in one call, every i-th image from Image1 is compared to the i-th image from Image2. Thus the
number of images in both input parameters must be the same. An output image is generated for every input pair.
Attention
The two input images must be of the same type and size.
Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMax (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic
Result image(s) by the maximization.
Example

read_image(&Bild1,"affe");
read_image(&Bild2,"fabrik");
max_image(Bild1,Bild2,&Max);
disp_image(Max,WindowHandle);

Result
If the parameter values are correct the operator max_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
max_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
max_image
See also
min_image
Module
Foundation


min_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageMin )

T_min_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageMin )

Calculate the minimum of two images pixel by pixel.


The operator min_image determines the minimum (pixel by pixel) of the images Image1 and Image2. The
result is stored in the image ImageMin. The resulting image has the same pixel type as the input image. If several
(pairs of) images are processed in one call, every i-th image from Image1 is compared to the i-th image from
Image2. Thus the number of images in both input parameters must be the same. An output image is generated
for every input pair.
Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMin (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic
Result image(s) by the minimization.
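A minimal usage sketch (not part of the original manual); WindowHandle is assumed to refer to an already opened
HALCON window:

Hobject  Image1, Image2, Min;

read_image(&Image1,"fabrik");
read_image(&Image2,"affe");
/* pixelwise minimum of the two input images */
min_image(Image1,Image2,&Min);
disp_image(Min,WindowHandle);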
Result
If the parameter values are correct the operator min_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
min_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_erosion
See also
max_image, min_image
Module
Foundation

mult_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, double Mult, double Add )

T_mult_image ( const Hobject Image1, const Hobject Image2,


Hobject *ImageResult, const Htuple Mult, const Htuple Add )

Multiply two images.


mult_image multiplies two images. The gray values (g1, g2) of the input images (Image1 and Image2) are
transformed as follows:

g' := g1 * g2 * Mult + Add

If an overflow or an underflow occurs the values are clipped.


Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.


Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2
/ int4 / real / direction / cyclic / com-
plex
Result image(s) by the product.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray range adaption.
Default Value : 0.005
Suggested values : Mult ∈ {0.001, 0.01, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray range adaption.
Default Value : 0
Suggested values : Add ∈ {0.0, 128.0, 256.0}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example

read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
mult_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);

Result
The operator mult_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
mult_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, div_image
See also
add_image, sub_image, div_image
Module
Foundation

scale_image ( const Hobject Image, Hobject *ImageScaled, double Mult,


double Add )

T_scale_image ( const Hobject Image, Hobject *ImageScaled,


const Htuple Mult, const Htuple Add )

Scale the gray values of an image.


The operator scale_image scales the input images (Image) by the following transformation:

g' := g * Mult + Add

If an overflow or an underflow occurs the values are clipped.


This operator can be applied, e.g., to map the gray values of an image, i.e., the interval [GMin,GMax], to the
maximum range [0:255]. For this, the parameters are chosen as follows:

Mult = 255 / (GMax - GMin)          Add = -Mult * GMin
The values for GMin and GMax can be determined, e.g., with the operator min_max_gray.
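A sketch of this contrast normalization for a byte image (not part of the original manual); it assumes the C
prototypes of get_domain and min_max_gray with a region, an image, a percentage, and three double results, and
that the image is not constant (GMax > GMin):

Hobject  Image, Domain, Stretched;
double   Min, Max, Range, Mult, Add;

read_image(&Image,"fabrik");
get_domain(Image,&Domain);
/* darkest and brightest gray value within the image domain */
min_max_gray(Domain,Image,0.0,&Min,&Max,&Range);
Mult = 255.0 / (Max - Min);
Add  = -Mult * Min;
/* spread the gray values to the full range [0..255] */
scale_image(Image,&Stretched,Mult,Add);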
Please note that the runtime of the operator varies with different control parameters. For frequently used combi-
nations special optimizations are used. Additionally, special optimizations are implemented that use fixed point
arithmetic (for int2 and uint2 images), and further optimizations that use SIMD technology (for byte, int2, and uint2
images). The actual application of these special optimizations is controlled by the system parameters ’int_zooming’
and ’mmx_enable’ (see set_system). If ’int_zooming’ is set to ’true’, the internal calculation is performed us-
ing fixed point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed
gray values is slightly lower in this mode. The difference to the more accurate calculation (using ’int_zooming’
= ’false’) is typically less than two gray levels. If ’mmx_enable’ is set to ’true’(and the SIMD instruction set is
available), the internal calculations are performed using fixed point arithmetic and SIMD technology. In this case
the setting of ’int_zooming’ is ignored.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of scale_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter

. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real /
direction / cyclic / complex
Image(s) whose gray values are to be scaled.
. ImageScaled (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2
/ int4 / real / direction / cyclic / com-
plex
Result image(s) by the scale.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Scale factor.
Default Value : 0.01
Suggested values : Mult ∈ {0.001, 0.003, 0.005, 0.008, 0.01, 0.02, 0.03, 0.05, 0.08, 0.1, 0.5, 1.0}
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Offset.
Default Value : 0
Suggested values : Add ∈ {0, 10, 50, 100, 200, 500}
Minimum Increment : 0.01
Recommended Increment : 1.0
Example

/* simulation of invert for type ’byte’ */


byte_invert(Hobject In, Hobject *out)
{
scale_image(In,Out,-1.0,255.0);
}


Result
The operator scale_image returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) Otherwise an exception treatment is carried out.
Parallelization Information
scale_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
min_max_gray
Alternatives
mult_image, add_image, sub_image
See also
min_max_gray
Module
Foundation

sqrt_image ( const Hobject Image, Hobject *SqrtImage )


T_sqrt_image ( const Hobject Image, Hobject *SqrtImage )

Calculate the square root of an image.


sqrt_image calculates the square root of an input image Image and stores the result in the image SqrtImage
of the same pixel type. If the input image Image has a signed pixel type, negative pixel values are mapped to zero
in SqrtImage.
Parameter
. Image (input_object) . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image
. SqrtImage (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 /
int4 / real
Output image
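A minimal usage sketch (not part of the original manual); WindowHandle is assumed to refer to an already opened
HALCON window:

Hobject  Image, Sqrt;

read_image(&Image,"fabrik");
/* pixelwise square root; the result has the same pixel type as the input */
sqrt_image(Image,&Sqrt);
disp_image(Sqrt,WindowHandle);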
Parallelization Information
sqrt_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation

sub_image ( const Hobject ImageMinuend, const Hobject ImageSubtrahend,


Hobject *ImageSub, double Mult, double Add )

T_sub_image ( const Hobject ImageMinuend, const Hobject ImageSubtrahend,


Hobject *ImageSub, const Htuple Mult, const Htuple Add )

Subtract two images.


The operator sub_image subtracts two images. The gray values (g1, g2) of the input images (ImageMinuend
and ImageSubtrahend) are transformed as follows:

g' := (g1 - g2) * Mult + Add

If an overflow or an underflow occurs the values are clipped.


Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Please note that the runtime of the operator varies with different control parameters. For frequently used
combinations special optimizations are used. Additionally, for byte, int2, and uint2 images special optimizations
are implemented that use SIMD technology. The actual application of these special optimizations is controlled by
the system parameter ’mmx_enable’ (see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction
set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of sub_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter
. ImageMinuend (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 /
int4 / real / direction / cyclic / com-
plex
Minuend(s).
. ImageSubtrahend (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 /
uint2 / int4 / real / direction /
cyclic / complex
Subtrahend(s).
. ImageSub (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic / complex
Result image(s) by the subtraction.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Correction factor.
Default Value : 1.0
Suggested values : Mult ∈ {0.0, 1.0, 2.0, 3.0, 4.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Correction value.
Default Value : 128.0
Suggested values : Add ∈ {0.0, 128.0, 256.0}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example

read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
sub_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);

Result
The operator sub_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
sub_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
dual_threshold
Alternatives
mult_image, add_image, sub_image
See also
add_image, mult_image, dyn_threshold, check_difference
Module
Foundation


3.2 Bit
bit_and ( const Hobject Image1, const Hobject Image2, Hobject *ImageAnd )
T_bit_and ( const Hobject Image1, const Hobject Image2,
Hobject *ImageAnd )

Bit-by-bit AND of all pixels of the input images.


The operator bit_and calculates the “and” of all pixels of the input images bit by bit. The semantics of the
“and” operation corresponds to that of C for the respective types (signed char, unsigned char, short, unsigned short,
int/long). The images must have the same size and pixel type. The pixels within the definition range of the image
in the first parameter are processed.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2


/ uint2 / int4
Input image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4
Input image(s) 2.
. ImageAnd (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4
Result image(s) by AND-operation.
Example

read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_and(Image0,Image1,&ImageBitA);
disp_image(ImageBitA,WindowHandle);

Result
If the images are correct (type and number) the operator bit_and returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_and is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_mask, add_image, max_image
See also
bit_mask, add_image, max_image
Module
Foundation

bit_lshift ( const Hobject Image, Hobject *ImageLShift, Hlong Shift )


T_bit_lshift ( const Hobject Image, Hobject *ImageLShift,
const Htuple Shift )

Left shift of all pixels of the image.


The operator bit_lshift calculates a “left shift” of all pixels of the input image bit by bit. The semantics of
the “left shift” operation corresponds to that of C (“<<”) for the respective types (signed char, unsigned char,
short, unsigned short, int/long). If an overflow occurs the result is limited to the maximum value of the respective
pixel type. Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageLShift (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4
Result image(s) by shift operation.
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift value.
Default Value : 3
Suggested values : Shift ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20, 24, 30, 31}
Typical range of values : 0 ≤ Shift ≤ 31
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Shift ≥ 1) ∧ (Shift ≤ 31)
Example

read_image(&ByteImage,"fabrik");
convert_image_type(ByteImage,&Int2Image,"int2");
bit_lshift(Int2Image,&FullInt2Image,8);

Result
If the images are correct (type) and if Shift has a valid value the operator bit_lshift returns the
value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator
set_system(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_lshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_rshift
Module
Foundation

bit_mask ( const Hobject Image, Hobject *ImageMask, Hlong BitMask )


T_bit_mask ( const Hobject Image, Hobject *ImageMask,
const Htuple BitMask )

Logical “AND” of each pixel using a bit mask.


The operator bit_mask carries out an “and” operation of each pixel with a fixed mask. The semantics of the
“and” operation corresponds to that of C for the respective types (signed char, unsigned char, unsigned short, short,
int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageMask (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4
Result image(s) by combination with mask.


. BitMask (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong


Bit field
Default Value : 128
List of values : BitMask ∈ {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096}
Suggested values : BitMask ∈ {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096}
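A minimal usage sketch (not part of the original manual) that keeps only the most significant bit of a byte image
and displays the corresponding region:

Hobject  ByteImage, Masked, Region;

read_image(&ByteImage,"fabrik");
/* AND every pixel with the bit mask 128 (most significant bit) */
bit_mask(ByteImage,&Masked,128);
/* pixels whose bit is set keep the value 128 */
threshold(Masked,&Region,128,255);
disp_region(Region,WindowHandle);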
Result
If the images are correct (type) the operator bit_mask returns the value H_MSG_TRUE. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_mask is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, bit_or
Alternatives
bit_slice
See also
bit_and, bit_lshift
Module
Foundation

bit_not ( const Hobject Image, Hobject *ImageNot )


T_bit_not ( const Hobject Image, Hobject *ImageNot )

Complement all bits of the pixels.


The operator bit_not calculates the “complement” of all pixels of the input image bit by bit. The semantics of
the “complement” operation corresponds to that of C (“~”) for the respective types (signed char, unsigned char,
short, unsigned short, int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageNot (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4
Result image(s) by complement operation.
Example

read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
bit_not(Image0,&ImageBitN);
disp_image(ImageBitN,WindowHandle);

Result
If the images are correct (type) the operator bit_not returns the value H_MSG_TRUE. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_not is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_slice, bit_mask


Module
Foundation

bit_or ( const Hobject Image1, const Hobject Image2, Hobject *ImageOr )


T_bit_or ( const Hobject Image1, const Hobject Image2,
Hobject *ImageOr )

Bit-by-bit OR of all pixels of the input images.


The operator bit_or calculates the “or” of all pixels of the input images bit by bit. The semantics of the
“or”operation corresponds to that of C for the respective types (signed char, unsigned char, short, unsigned short,
int/long). The images must have the same size and pixel type. The pixels within the definition range of the image
in the first parameter are processed.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Parameter

. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2


/ uint2 / int4
Input image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4
Input image(s) 2.
. ImageOr (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4
Result image(s) by OR-operation.
Example

read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_or(Image0,Image1,&ImageBitO);
disp_image(ImageBitO,WindowHandle);

Result
If the images are correct (type and number) the operator bit_or returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_or is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_and, add_image
See also
bit_xor, bit_and
Module
Foundation

bit_rshift ( const Hobject Image, Hobject *ImageRShift, Hlong Shift )


T_bit_rshift ( const Hobject Image, Hobject *ImageRShift,
const Htuple Shift )

Right shift of all pixels of the image.


The operator bit_rshift calculates a “right shift” of all pixels of the input image bit by bit. The semantics
of the “right shift” operation corresponds to that of C (“>>”) for the respective types (signed char, unsigned char,
short, unsigned short, int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageRShift (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4
Result image(s) by shift operation.
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
shift value
Default Value : 3
Suggested values : Shift ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20, 24, 30, 31}
Typical range of values : 0 ≤ Shift ≤ 31
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Shift ≥ 1) ∧ (Shift ≤ 31)
Example

bit_rshift(Int2Image,&ReducedInt2Image,8);
convert_image_type(ReducedInt2Image,&ByteImage,"byte");

Result
If the images are correct (type) and Shift has a valid value the operator bit_rshift returns the value
H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator
set_system(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_rshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_lshift
Module
Foundation

bit_slice ( const Hobject Image, Hobject *ImageSlice, Hlong Bit )


T_bit_slice ( const Hobject Image, Hobject *ImageSlice,
const Htuple Bit )

Extract a bit from the pixels.


The operator bit_slice extracts a bit level from the input image. The semantics of the “and” operation
corresponds to that of C for the respective types (signed char, unsigned char, short, unsigned short, int/long). Only
the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageSlice (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4
Result image(s) by extraction.


. Bit (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Bit to be selected.
Default Value : 8
Suggested values : Bit ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20, 24, 30, 32}
Typical range of values : 1 ≤ Bit ≤ 32
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Bit ≥ 1) ∧ (Bit ≤ 32)
Example

read_image(&ByteImage,"fabrik");
for (bit=1; bit<=8; i++)
{
bit_slice(ByteImage,&Slice,bit);
threshold(Slice,&Region,0,255);
disp_region(Region,WindowHandle);
clear(bit_slice); clear(Slice); clear(Region);
}

Result
If the images are correct (type) and Bit has a valid value, the operator bit_slice returns the value
H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator
set_system(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_slice is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, bit_or
Alternatives
bit_mask
See also
bit_and, bit_lshift
Module
Foundation

bit_xor ( const Hobject Image1, const Hobject Image2, Hobject *ImageXor )


T_bit_xor ( const Hobject Image1, const Hobject Image2,
Hobject *ImageXor )

Bit-by-bit XOR of all pixels of the input images.


The operator bit_xor calculates the “xor” of all pixels of the input images bit by bit. The semantics of the
“xor” operation corresponds to that of C for the respective types (signed char, unsigned char, short, unsigned short,
int/long). The images must have the same size and pixel type. The pixels within the definition range of the image
in the first parameter are processed.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4
Input image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4
Input image(s) 2.
. ImageXor (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4
Result image(s) by XOR-operation.


Example

read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_xor(Image0,Image1,&ImageBitX);
disp_image(ImageBitX,WindowHandle);

Result
If the parameter values are correct the operator bit_xor returns the value H_MSG_TRUE. The behav-
ior in case of empty input (no input images available) can be determined by the operator set_system
(’no_object_result’,<Result>) If necessary an exception handling is raised.
Parallelization Information
bit_xor is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_or, bit_and
Module
Foundation

3.3 Color
cfa_to_rgb ( const Hobject CFAImage, Hobject *RGBImage,
const char *CFAType, const char *Interpolation )

T_cfa_to_rgb ( const Hobject CFAImage, Hobject *RGBImage,


const Htuple CFAType, const Htuple Interpolation )

Convert a single-channel color filter array image into an RGB image.


cfa_to_rgb converts a single-channel color filter array image CFAImage into an RGB image RGBImage.
Color filter array images are typically generated by single-chip CCD cameras. The conversion from color filter
array image to RGB image is typically done on the camera itself or is performed by the device driver of the
frame grabber that is used to grab the image. In some cases, however, the device driver simply passes the color
filter array image through unchanged. In this case, the corresponding HALCON frame grabber interface typically
converts the image into an RGB image. Hence, the operator cfa_to_rgb is normally used if the images are not
being grabbed using the HALCON frame grabber interface ( grab_image or grab_image_async), but are
grabbed using function calls from the frame grabber SDK, and are passed to HALCON using gen_image1 or
gen_image1_extern.
In single-chip CCD cameras, a color filter array in front of the sensor provides (subsampled) color information.
The most frequently used filter is the so called Bayer filter. The color filter array has the following layout in this
case:

    G B G B G B ...
    R G R G R G ...
    G B G B G B ...
    R G R G R G ...
    . . . . . .

Each gray value of the input image CFAImage corresponds to the brightness of the pixel behind the corresponding
color filter. Hence, in the above layout, the pixel (0,0) corresponds to a green color value, while the pixel (0,1)
corresponds to a blue color value. The layout of the Bayer filter is completely determined by the first two elements
of the first row of the image, and can be chosen with the parameter CFAType. In particular, this enables the correct
conversion of color filter array images that have been cropped out of a larger image (e.g., using crop_part or
crop_rectangle1). The algorithm that is used to interpolate the RGB values is determined by the parameter
Interpolation. Currently, the only possible choice is ’bilinear’.
Parameter
. CFAImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input image.
. RGBImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2
Output image.
. CFAType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Color filter array type.
Default Value : "bayer_gb"
List of values : CFAType ∈ {"bayer_gb", "bayer_gr", "bayer_bg", "bayer_rg"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation type.
Default Value : "bilinear"
List of values : Interpolation ∈ {"bilinear"}
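A minimal usage sketch (not part of the original manual); ’bayer_pattern’ is a placeholder for an image file that
contains a raw Bayer color filter array image:

Hobject  CFAImage, RGBImage, R, G, B;

read_image(&CFAImage,"bayer_pattern");
/* interpolate the missing color information bilinearly */
cfa_to_rgb(CFAImage,&RGBImage,"bayer_gb","bilinear");
/* access the individual color channels if needed */
decompose3(RGBImage,&R,&G,&B);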
Result
cfa_to_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
cfa_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_image1_extern, gen_image1, grab_image
Possible Successors
decompose3
See also
trans_from_rgb
Module
Foundation

T_gen_principal_comp_trans ( const Hobject MultichannelImage,


Htuple *Trans, Htuple *TransInv, Htuple *Mean, Htuple *Cov,
Htuple *InfoPerComp )

Compute the transformation matrix of the principal component analysis of multichannel images.
gen_principal_comp_trans computes the transformation matrix of a principal components analysis of
multichannel images. This is useful for images obtained, e.g., with the thematic mapper of the Landsat satellite.
Because the spectral bands are highly correlated, it is desirable to transform them to uncorrelated images. This can
be used to save storage, since the bands containing little information can be discarded, and with respect to a later
classification step.
The operator gen_principal_comp_trans takes one or more multichannel images
MultichannelImage and computes the transformation matrix Trans for the principal components
analysis, as well as its inverse TransInv. All input images must have the same number of channels.
The principal components analysis is performed based on the collection of data of all images. Hence,
gen_principal_comp_trans facilitates using the statistics of multiple images.
If n is the number of channels, Trans and TransInv are matrices of dimension n × (n + 1), which describe
an affine transformation of the multichannel gray values. They can be used to transform a multichannel image
with linear_trans_color. For information purposes, the mean gray value of the channels and the n × n
covariance matrix of the channels are returned in Mean and Cov, respectively. The parameter InfoPerComp
contains the relative information content of each output channel.


Parameter
. MultichannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real
Multichannel input image.
. Trans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Transformation matrix for the computation of the PCA.
. TransInv (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Transformation matrix for the computation of the inverse PCA.
. Mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Mean gray value of the channels.
. Cov (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Covariance matrix of the channels.
. InfoPerComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Information content of the transformed channels.
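A usage sketch (not part of the original manual); ’landsat_scene’ is a placeholder for a multichannel image file,
and it is assumed that output tuples created by the operator are released with destroy_tuple, following the usual
HALCON/C conventions:

Hobject  MultichannelImage, Transformed;
Htuple   Trans, TransInv, Mean, Cov, InfoPerComp;

read_image(&MultichannelImage,"landsat_scene");
/* compute the PCA transformation matrix from the image statistics */
T_gen_principal_comp_trans(MultichannelImage,&Trans,&TransInv,
                           &Mean,&Cov,&InfoPerComp);
/* apply the transformation to decorrelate the channels */
T_linear_trans_color(MultichannelImage,&Transformed,Trans);
destroy_tuple(Trans);
destroy_tuple(TransInv);
destroy_tuple(Mean);
destroy_tuple(Cov);
destroy_tuple(InfoPerComp);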
Result
The operator gen_principal_comp_trans returns the value H_MSG_TRUE if the parameters are correct.
Otherwise an exception is raised.
Parallelization Information
gen_principal_comp_trans is reentrant and processed without parallelization.
Possible Successors
linear_trans_color
Alternatives
principal_comp
Module
Foundation

T_linear_trans_color ( const Hobject Image, Hobject *ImageTrans,


const Htuple TransMat )

Compute an affine transformation of the color values of a multichannel image.


linear_trans_color performs an affine transformation of the color values of the multichannel image Image
and returns the result in ImageTrans. The affine transformation of the color values is described by the transfor-
mation matrix TransMat. If n is the number of channels in Image, TransMat is a homogeneous n × (n + 1)
matrix that is stored row by row. Homogeneous means that the left n × n submatrix of TransMat describes a linear
transformation of the color values, while the last column of TransMat describes a constant offset of the color val-
ues. The transformation matrix is typically computed with gen_principal_comp_trans. It can, however,
also be specified directly. For example, a transformation from RGB to YIQ, which is described by the following
transformation

    | Y |   |  0.299   0.587   0.144 |   | R |   |   0 |
    | I | = |  0.595  -0.276  -0.333 | * | G | + | 128 |
    | Q |   |  0.209  -0.522   0.287 |   | B |   | 128 |

can be achieved by setting TransMat to

[0.299, 0.587, 0.144, 0.0, 0.595, −0.276, −0.333, 128.0, 0.209, −0.522, 0.287, 128.0]

Here, it should be noted that the above transformation is unnormalized, i.e., the resulting color values can lie
outside the range [0, 255]. The transformation ’yiq’ in trans_from_rgb additionally scales the rows of the
matrix (except for the constant offset) appropriately.
To avoid a loss of information, linear_trans_color returns an image of type real. If a different image type
is desired, the image can be transformed with convert_image_type.


Parameter
. Image (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Multichannel input image.
. ImageTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject * : real
Multichannel output image.
. TransMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Transformation matrix for the color values.
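A sketch (not part of the original manual) that builds the unnormalized RGB-to-YIQ matrix from the description
above; create_tuple, set_d, and destroy_tuple are assumed to follow the usual HALCON/C tuple conventions:

double   yiq[12] = { 0.299,  0.587,  0.144,   0.0,
                     0.595, -0.276, -0.333, 128.0,
                     0.209, -0.522,  0.287, 128.0 };
Hobject  RGBImage, YIQImage;
Htuple   TransMat;
int      i;

create_tuple(&TransMat,12);
for (i=0; i<12; i++)
  set_d(TransMat,yiq[i],i);        /* matrix is stored row by row */
read_image(&RGBImage,"patras");
T_linear_trans_color(RGBImage,&YIQImage,TransMat);
destroy_tuple(TransMat);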
Result
The operator linear_trans_color returns the value H_MSG_TRUE if the parameters are correct. Otherwise
an exception is raised.
Parallelization Information
linear_trans_color is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_principal_comp_trans
Possible Successors
convert_image_type
Alternatives
principal_comp, trans_from_rgb, trans_to_rgb
Module
Foundation

T_principal_comp ( const Hobject MultichannelImage, Hobject *PCAImage,


Htuple *InfoPerComp )

Compute the principal components of multichannel images.


principal_comp does a principal components analysis of multichannel images. This is useful for images
obtained, e.g., with the thematic mapper of the Landsat satellite. Because the spectral bands are highly correlated,
it is desirable to transform them to uncorrelated images. This can be used to save storage, since the bands containing
little information can be discarded, and with respect to a later classification step.
The operator principal_comp takes a (multichannel) image MultichannelImage and transforms it to the
output image PCAImage, which contains the same number of channels, using the principal components analysis.
The parameter InfoPerComp contains the relative information content of each output channel.
Parameter
. MultichannelImage (input_object) . . . . . . multichannel-image ; Hobject : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Multichannel input image.
. PCAImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject * : real
Multichannel output image.
. InfoPerComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Information content of each output channel.
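A minimal usage sketch (not part of the original manual); ’landsat_scene’ is a placeholder file name, and
InfoPerComp is assumed to be released with destroy_tuple afterwards:

Hobject  MultichannelImage, PCAImage;
Htuple   InfoPerComp;

read_image(&MultichannelImage,"landsat_scene");
/* transform the channels to uncorrelated principal components */
T_principal_comp(MultichannelImage,&PCAImage,&InfoPerComp);
/* InfoPerComp holds the relative information content per channel */
destroy_tuple(InfoPerComp);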
Result
The operator principal_comp returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
principal_comp is reentrant and processed without parallelization.
Alternatives
gen_principal_comp_trans
See also
linear_trans_color
Module
Foundation


rgb1_to_gray ( const Hobject RGBImage, Hobject *GrayImage )


T_rgb1_to_gray ( const Hobject RGBImage, Hobject *GrayImage )

Transform an RGB image into a gray scale image.


rgb1_to_gray transforms an RGB image into a gray scale image. The three channels of the RGB image are
passed as the first three channels of the input image. The image is transformed according to the following formula:

k = 0.299r + 0.587g + 0.114b .

Parameter
. RGBImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Three-channel RBG image.
. GrayImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / int2 / uint2
Gray scale image.
Example

/* Transformation from RGB to gray */

read_image(&Image,"patras");
disp_color(Image,WindowHandle);
rgb1_to_gray(Image,&GrayImage);
disp_image(GrayImage,WindowHandle);

Parallelization Information
rgb1_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose3
Alternatives
trans_from_rgb, rgb3_to_gray
Module
Foundation

rgb3_to_gray ( const Hobject ImageRed, const Hobject ImageGreen,


const Hobject ImageBlue, Hobject *ImageGray )

T_rgb3_to_gray ( const Hobject ImageRed, const Hobject ImageGreen,


const Hobject ImageBlue, Hobject *ImageGray )

Transform an RGB image to a gray scale image.


rgb3_to_gray transforms an RGB image into a gray scale image. The three channels of the RGB image are
passed as three separate images. The image is transformed according to the following formula:

k = 0.299r + 0.587g + 0.114b .

Parameter
. ImageRed (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (red channel).
. ImageGreen (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (green channel).
. ImageBlue (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (blue channel).
. ImageGray (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / int2 / uint2
Gray scale image.


Example

/* Transformation from RGB to gray */


read_image(Image,"patras") ;
disp_color(Image,WindowHandle) ;
decompose3(Image,&Rimage,&Gimage,&Bimage) ;
rgb3_to_gray(Rimage,Gimage,Bimage,&GrayImage) ;
disp_image(GrayImage,WindowHandle);

Parallelization Information
rgb3_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Alternatives
rgb1_to_gray, trans_from_rgb
Module
Foundation

trans_from_rgb ( const Hobject ImageRed, const Hobject ImageGreen,
                 const Hobject ImageBlue, Hobject *ImageResult1, Hobject *ImageResult2,
                 Hobject *ImageResult3, const char *ColorSpace )

T_trans_from_rgb ( const Hobject ImageRed, const Hobject ImageGreen,
                   const Hobject ImageBlue, Hobject *ImageResult1, Hobject *ImageResult2,
                   Hobject *ImageResult3, const Htuple ColorSpace )

Transform an image from the RGB color space to an arbitrary color space.
trans_from_rgb transforms an image from the RGB color space to an arbitrary color space (ColorSpace).
The three channels of the image are passed as three separate images on input and output.
The operator trans_from_rgb supports the image types byte, uint2, int4, and real. In the case of int4 images,
the images should not contain negative values. In the case of real images, all values should lie between 0 and 1. If
they do not, the results of the transformation may not be reasonable.
Certain scalings are performed according to the image type:

• Considering byte and uint2 images, the domain of color space values is generally mapped to the full domain
of [0..255] resp. [0..65535]. Because of this, the origin of signed values (e.g., CIELab or YIQ) may not be at
the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
The following transformations are supported:
(All ranges of values are based on RGB values scaled to [0;1]. To obtain the range of values for a certain image
type, they must be multiplied by the maximum of the image type, e.g., 255 in the case of a byte image.)
’yiq’

    [ Y ]   [ 0.299   0.587   0.144 ] [ R ]
    [ I ] = [ 0.595  −0.276  −0.333 ] [ G ]
    [ Q ]   [ 0.209  −0.522   0.287 ] [ B ]


Range of values:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]

Point of origin for image type byte:


I0 = 128.89, Q0 = 130.71
’yuv’

    [ Y ]   [  0.299   0.587   0.114 ] [ R ]
    [ U ] = [ −0.147  −0.289   0.436 ] [ G ]
    [ V ]   [  0.615  −0.515   0.100 ] [ B ]

Range of values:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]

’argyb’

    [ A  ]   [ 0.30   0.59   0.11 ] [ R ]
    [ Rg ] = [ 0.50  −0.50   0.00 ] [ G ]
    [ Yb ]   [ 0.25   0.25  −0.50 ] [ B ]

Range of values:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Y b ∈ [−0.5; 0.5]

’ciexyz’

    [ X ]   [ 0.412453   0.357580   0.180423 ] [ R ]
    [ Y ] = [ 0.212671   0.715160   0.072169 ] [ G ]
    [ Z ]   [ 0.019334   0.119193   0.950227 ] [ B ]

The primary colors used correspond to sRGB and CIE Rec. 709, respectively. D65 is used as white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Range of values:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’hls’
    min = min(R,G,B)
    max = max(R,G,B)
    L = (min + max) / 2
    if (max == min)
        H = 0
        S = 0
    else
        if (L > 0.5)
            S = (max - min) / (2 - max - min)
        else
            S = (max - min) / (max + min)
        fi
        if (R == max)
            H = ((G - B) / (max - min)) * 60
        elif (G == max)
            H = (2 + (B - R) / (max - min)) * 60
        elif (B == max)
            H = (4 + (R - G) / (max - min)) * 60
        fi
    fi
Range of values:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]


’hsi’

    [ M1 ]   [  2/√6   −1/√6   −1/√6 ] [ R ]
    [ M2 ] = [   0      1/√2   −1/√2 ] [ G ]
    [ I1 ]   [  1/√3    1/√3    1/√3 ] [ B ]

    H = arctan(M2 / M1)
    S = √(M1² + M2²)
    I = I1 / √3

Range of values:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]

’hsv’
    min = min(R,G,B)
    max = max(R,G,B)
    V = max
    if (max == min)
        S = 0
        H = 0
    else
        S = (max - min) / max
        if (R == max)
            H = ((G - B) / (max - min)) * 60
        elif (G == max)
            H = (2 + (B - R) / (max - min)) * 60
        elif (B == max)
            H = (4 + (R - G) / (max - min)) * 60
        fi
    fi
Range of values:
H ∈ [0; 2π], S ∈ [0; 1], V ∈ [0; 1]
(A C transcription of this pseudocode is sketched after this list of color spaces.)

’ihs’
    min = min(R,G,B)
    max = max(R,G,B)
    I = (R + G + B) / 3
    if (I == 0)
        H = 0
        S = 1
    else
        S = 1 - min / I
        if (S == 0)
            H = 0
        else
            A = (R + R - G - B) / 2
            B = (R - G) * (R - G) + (R - B) * (G - B)
            C = sqrt(B)
            if (C == 0)
                H = 0
            else
                H = acos(A / C)
            fi
            if (B > G)
                H = 2 * pi - H
            fi
        fi
    fi
Range of values:
I ∈ [0; 1], H ∈ [0; 2π], S ∈ [0; 1]


’cielab’

    [ X ]   [ 0.412453   0.357580   0.180423 ] [ R ]
    [ Y ] = [ 0.212671   0.715160   0.072169 ] [ G ]
    [ Z ]   [ 0.019334   0.119193   0.950227 ] [ B ]

    L = 116 ∗ f(Y / Yw) − 16
    a = 500 ∗ ( f(X / Xw) − f(Y / Yw) )
    b = 200 ∗ ( f(Y / Yw) − f(Z / Zw) )

where
    f(t) = t^(1/3)                    if t > (24/116)³
    f(t) = (841/108) ∗ t + 16/116     otherwise

Black point B:
(Rb, Gb, Bb) = (0, 0, 0)
White point W = (Rw, Gw, Bw), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (2^16 − 1, 2^16 − 1, 2^16 − 1),
Wint4 = (2^31 − 1, 2^31 − 1, 2^31 − 1), Wreal = (1.0, 1.0, 1.0)
Range of values:
L ∈ [0; 100], a ∈ [−86.1813; 98.2352], b ∈ [−107.8617; 94.4758]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
’i1i2i3’

    [ I1 ]   [  0.333   0.333   0.333 ] [ R ]
    [ I2 ] = [  1.0     0.0    −1.0   ] [ G ]
    [ I3 ]   [ −0.5     1.0    −0.5   ] [ B ]

Range of values:
I1 ∈ [0; 1], I2 ∈ [−1; 1], I3 ∈ [−1; 1]

’ciexyz2’

    [ X ]   [ 0.620   0.170   0.180 ] [ R ]
    [ Y ] = [ 0.310   0.590   0.110 ] [ G ]
    [ Z ]   [ 0.000   0.066   1.020 ] [ B ]

Range of values:
X ∈ [0; 0.970], Y ∈ [0; 1.010], Z ∈ [0; 1.086]

’ciexyz3’

    [ X ]   [ 0.618   0.177   0.205 ] [ R ]
    [ Y ] = [ 0.299   0.587   0.114 ] [ G ]
    [ Z ]   [ 0.000   0.056   0.944 ] [ B ]

Range of values:
X ∈ [0; 1], Y ∈ [0; 1], Z ∈ [0; 1]

’ciexyz4’

    [ X ]   [ 0.476   0.299   0.175 ] [ R ]
    [ Y ] = [ 0.262   0.656   0.082 ] [ G ]
    [ Z ]   [ 0.020   0.161   0.909 ] [ B ]

Used primary colors (x, y, z):
red := (0.628, 0.346, 0.026), green := (0.268, 0.588, 0.144), blue := (0.150, 0.070, 0.780),
white65 := (0.313, 0.329, 0.358)


Range of values:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
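The following C sketch (not part of the original manual) transcribes the ’hsv’ pseudocode given above for a single
pixel with R, G, B in [0,1]; H is returned in radians, i.e. in the coding used for real images. The wrap-around of
negative angles into [0, 2π] is an assumption added here.

/* RGB -> HSV for one pixel, following the ’hsv’ pseudocode above. */
void rgb_to_hsv(double R, double G, double B, double *H, double *S, double *V)
{
  const double pi = 3.14159265358979323846;
  double min = R < G ? (R < B ? R : B) : (G < B ? G : B);
  double max = R > G ? (R > B ? R : B) : (G > B ? G : B);
  double h_deg = 0.0;

  *V = max;
  if (max == min)
    *S = 0.0;                                     /* gray: H and S are 0 */
  else
  {
    *S = (max - min) / max;
    if (R == max)
      h_deg = ((G - B) / (max - min)) * 60.0;
    else if (G == max)
      h_deg = (2.0 + (B - R) / (max - min)) * 60.0;
    else
      h_deg = (4.0 + (R - G) / (max - min)) * 60.0;
    if (h_deg < 0.0)                              /* assumption: wrap into [0,360) */
      h_deg += 360.0;
  }
  *H = h_deg * pi / 180.0;                        /* radians, as for real images */
}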

Parameter

. ImageRed (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real


Input image (red channel).
. ImageGreen (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (green channel).
. ImageBlue (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (blue channel).
. ImageResult1 (output_object) . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Color-transformed output image (channel 1).
. ImageResult2 (output_object) . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Color-transformed output image (channel 2).
. ImageResult3 (output_object) . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Color-transformed output image (channel 3).
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Color space of the output image.
Default Value : "hsv"
List of values : ColorSpace ∈ {"cielab", "hsv", "hsi", "yiq", "yuv", "argyb", "ciexyz", "ciexyz2",
"ciexyz3", "ciexyz4", "hls", "ihs", "i1i2i3"}
Example

/* Transformation from RGB to HSV and back */


read_image(Image,"patras") ;
disp_color(Image,WindowHandle) ;
decompose3(Image,&Rimage,&Gimage,&Bimage) ;
trans_from_rgb(Rimage,Gimage,Bimage,&Image1,&Image2,&Image3,"hsv") ;
trans_to_rgb(Image1,Image2,Image3,&ImageRed,&ImageGreen,&ImageBlue,"hsv") ;
compose3(ImageRed,ImageGreen,ImageBlue,&Multichannel) ;
disp_color(Multichannel,WindowHandle);

Result
trans_from_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
trans_from_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3
Alternatives
rgb1_to_gray, rgb3_to_gray
See also
trans_to_rgb
Module
Foundation


trans_to_rgb ( const Hobject ImageInput1, const Hobject ImageInput2,
               const Hobject ImageInput3, Hobject *ImageRed, Hobject *ImageGreen,
               Hobject *ImageBlue, const char *ColorSpace )

T_trans_to_rgb ( const Hobject ImageInput1, const Hobject ImageInput2,
                 const Hobject ImageInput3, Hobject *ImageRed, Hobject *ImageGreen,
                 Hobject *ImageBlue, const Htuple ColorSpace )

Transform an image from an arbitrary color space to the RGB color space.
trans_to_rgb transforms an image from an arbitrary color space (ColorSpace) to the RGB color space.
The three channels of the image are passed as three separate images on input and output.
The operator trans_to_rgb supports the image types byte, uint2, int4, and real. The domain of the input
images must match the domain provided by a corresponding transformation with trans_from_rgb. If not, the
results of the transformation may not be reasonable.
This includes some scalings in the case of certain image types and transformations:

• Considering byte and uint2 images, the domain of color space values is expected to be spread to the full
domain of [0..255] resp. [0..65535]. This includes a shift in the case of signed values, such that the origin of
signed values (e.g. CIELab or YIQ) may not be at the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].

The following transformations are supported:


(All domains are based on RGB values scaled to [0;1]. To obtain the domains for a certain image type, they must
be multiplied by the maximum of the image type, e.g., 255 in the case of a byte image.)
’yiq’

    [ R ]   [ 0.999   0.962   0.615 ] [ Y ]
    [ G ] = [ 0.949  −0.220  −0.732 ] [ I ]
    [ B ]   [ 0.999  −1.101   1.706 ] [ Q ]

Domain:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]

Point of origin for image type byte:


I0 = 128.89, Q0 = 130.71
’yuv’

    [ R ]   [ 1.0    0.0     1.140 ] [ Y ]
    [ G ] = [ 1.0   −0.394  −0.581 ] [ U ]
    [ B ]   [ 1.0    2.032   0.0   ] [ V ]

Domain:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]

’argyb’

    [ R ]   [ 1.00   1.29   0.22 ] [ A  ]
    [ G ] = [ 1.00  −0.71   0.22 ] [ Rg ]
    [ B ]   [ 1.00   0.29  −1.78 ] [ Yb ]


Domain:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Y b ∈ [−0.5; 0.5]

’ciexyz’

    [ R ]   [  3.240479   −1.53715    −0.498535 ] [ X ]
    [ G ] = [ −0.969256    1.875991    0.041556 ] [ Y ]
    [ B ]   [  0.055648   −0.204043    1.057311 ] [ Z ]

The primary colors used correspond to sRGB and CIE Rec. 709, respectively. D65 is used as white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Domain:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’cielab’
    fy = (L + 16) / 116
    fx = a/500 + fy
    fz = fy − b/200

    X = Xw ∗ fx³                       if fx > 24/116
    X = (fx − 16/116) ∗ Xw ∗ 108/841   otherwise

    Y = Yw ∗ fy³                       if fy > 24/116
    Y = (fy − 16/116) ∗ Yw ∗ 108/841   otherwise

    Z = Zw ∗ fz³                       if fz > 24/116
    Z = (fz − 16/116) ∗ Zw ∗ 108/841   otherwise

    [ R ]   [  3.240479   −1.53715    −0.498535 ] [ X ]
    [ G ] = [ −0.969256    1.875991    0.041556 ] [ Y ]
    [ B ]   [  0.055648   −0.204043    1.057311 ] [ Z ]

Black point B:
(Rb , Gb , Bb ) = (0, 0, 0)
White point W = (Rw , Gw , Bw ), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (2^16 − 1, 2^16 − 1, 2^16 − 1),
Wint4 = (2^31 − 1, 2^31 − 1, 2^31 − 1), Wreal = (1.0, 1.0, 1.0)
Domain:
L ∈ [0; 100], a ∈ [−94.3383; 90.4746], b ∈ [−101.3636; 84.4473]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
’hls’
    Hi = integer(H * 6)
    Hf = fraction(H * 6)
    if (L <= 0.5)
        max = L * (S + 1)
    else
        max = L + S - (L * S)
    fi
    min = 2 * L - max
    if (S == 0)
        R = L
        G = L
        B = L
    else
        if (Hi == 0)
            R = max
            G = min + Hf * (max - min)
            B = min


        elif (Hi == 1)
            R = min + (1 - Hf) * (max - min)
            G = max
            B = min
        elif (Hi == 2)
            R = min
            G = max
            B = min + Hf * (max - min)
        elif (Hi == 3)
            R = min
            G = min + (1 - Hf) * (max - min)
            B = max
        elif (Hi == 4)
            R = min + Hf * (max - min)
            G = min
            B = max
        elif (Hi == 5)
            R = max
            G = min
            B = min + (1 - Hf) * (max - min)
        fi
    fi
Domain:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]

’hsi’
    M1 = S ∗ sin(H)
    M2 = S ∗ cos(H)
    I1 = √3 ∗ I

    [ R ]   [  2/√6     0      1/√3 ] [ M1 ]
    [ G ] = [ −1/√6    1/√2    1/√3 ] [ M2 ]
    [ B ]   [ −1/√6   −1/√2    1/√3 ] [ I1 ]

Domain:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]

’hsv’

    if (S == 0)
        R = V
        G = V
        B = V
    else
        Hi = integer(H)
        Hf = fraction(H)
        if (Hi == 0)
            R = V
            G = V * (1 - (S * (1 - Hf)))
            B = V * (1 - S)
        elif (Hi == 1)
            R = V * (1 - (S * Hf))
            G = V
            B = V * (1 - S)
        elif (Hi == 2)
            R = V * (1 - S)
            G = V
            B = V * (1 - (S * (1 - Hf)))


        elif (Hi == 3)
            R = V * (1 - S)
            G = V * (1 - (S * Hf))
            B = V
        elif (Hi == 4)
            R = V * (1 - (S * (1 - Hf)))
            G = V * (1 - S)
            B = V
        elif (Hi == 5)
            R = V
            G = V * (1 - S)
            B = V * (1 - (S * Hf))
        fi
    fi
Domain:
H ∈ [0; 2π], S ∈ [0; 1], V ∈ [0; 1]

’ciexyz4’

    [ R ]   [  2.750   −1.149   −0.426 ] [ X ]
    [ G ] = [ −1.118    2.026    0.033 ] [ Y ]
    [ B ]   [  0.138   −0.333    1.104 ] [ Z ]

Domain:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]

Parameter
. ImageInput1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 1).
. ImageInput2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 2).
. ImageInput3 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 3).
. ImageRed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Red channel.
. ImageGreen (output_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Green channel.
. ImageBlue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Blue channel.
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Color space of the input image.
Default Value : "hsv"
List of values : ColorSpace ∈ {"hsi", "yiq", "yuv", "argyb", "ciexyz", "ciexyz4", "cielab", "hls", "hsv"}
Example

/* Transformation from RGB to HSV and back */


read_image(Image,"patras") ;
disp_color(Image,WindowHandle) ;
decompose3(Image,&Rimage,&Gimage,&Bimage) ;
trans_from_rgb(Rimage,Gimage,Bimage,&Image1,&Image2,&Image3,"hsv") ;
trans_to_rgb(Image1,Image2,Image3,&ImageRed,&ImageGreen,&ImageBlue,"hsv") ;
compose3(ImageRed,ImageGreen,ImageBlue,&Multichannel) ;
disp_color(Multichannel,WindowHandle);

Result
trans_to_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.


Parallelization Information
trans_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3, disp_color
See also
decompose3
Module
Foundation

3.4 Edges

close_edges ( const Hobject Edges, const Hobject EdgeImage,
              Hobject *RegionResult, Hlong MinAmplitude )

T_close_edges ( const Hobject Edges, const Hobject EdgeImage,
                Hobject *RegionResult, const Htuple MinAmplitude )

Close edge gaps using the edge amplitude image.


close_edges closes gaps in the output of an edge detector, and thus tries to produce complete object contours.
This is done by examining the neighbors of each edge point to determine the point with maximum amplitude (i.e.,
maximum gradient), and adding the point to the edge if its amplitude is larger than the minimum amplitude passed
in MinAmplitude. This operator expects as input the edges (Edges) and amplitude image (EdgeImage)
returned by typical edge operators, such as edges_image or sobel_amp. close_edges does not take into
account the edge directions that may be returned by an edge operator. Thus, in areas where the gradient is almost
constant the edges may become rather “wiggly.”
Parameter

. Edges (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region containing one pixel thick edges.
. EdgeImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int4
Edge amplitude (gradient) image.
. RegionResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region containing closed edges.
. MinAmplitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum edge amplitude.
Default Value : 16
Suggested values : MinAmplitude ∈ {5, 8, 10, 12, 16, 20, 25, 30, 40, 50}
Typical range of values : 1 ≤ MinAmplitude ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Restriction : MinAmplitude ≥ 0
Example

sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges(ThinEdge,EdgeAmp,&CloseEdges,15);
skeleton(CloseEdges,&ThinCloseEdges);

Result
close_edges returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.


Parallelization Information
close_edges is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
edges_image, sobel_amp, threshold, skeleton
Possible Successors
skeleton
Alternatives
close_edges_length, dilation1, closing
See also
gray_skeleton
Module
Foundation

close_edges_length ( const Hobject Edges, const Hobject Gradient,
                     Hobject *ClosedEdges, Hlong MinAmplitude, Hlong MaxGapLength )

T_close_edges_length ( const Hobject Edges, const Hobject Gradient,
                       Hobject *ClosedEdges, const Htuple MinAmplitude,
                       const Htuple MaxGapLength )

Close edge gaps using the edge amplitude image.


close_edges_length closes gaps in the output of an edge detector, and thus tries to produce complete object
contours. This operator expects as input the edges (Edges) and amplitude image (Gradient) returned by typical
edge operators, such as edges_image or sobel_amp.
Contours are closed in two steps: First, one pixel wide gaps in the input contours are closed, and isolated points are
eliminated. After this, open contours are extended by up to MaxGapLength points by adding edge points until
either the contour is closed or no more significant edge points can be found. A gradient is regarded as significant if
it is larger than MinAmplitude. The neighboring points examined as possible new edge points are the point in
the direction of the contour and its two adjacent points in an 8-neighborhood. For each of these points, the sum of
its gradient and the maximum gradient of that point's three possible neighbors is calculated (look-ahead of length
1). The point with the maximum sum is then chosen as the new edge point.
Parameter
. Edges (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region containing one pixel thick edges.
. Gradient (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Edge amplitude (gradient) image.
. ClosedEdges (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region containing closed edges.
. MinAmplitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum edge amplitude.
Default Value : 16
Suggested values : MinAmplitude ∈ {5, 8, 10, 12, 16, 20, 25, 30, 40, 50}
Typical range of values : 1 ≤ MinAmplitude ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Restriction : MinAmplitude ≥ 0
. MaxGapLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximal number of points by which edges are extended.
Default Value : 3
Suggested values : MaxGapLength ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 40, 50, 70, 100}
Typical range of values : 1 ≤ MaxGapLength ≤ 127
Minimum Increment : 1
Recommended Increment : 1
Restriction : (MaxGapLength > 0) ∧ (MaxGapLength ≤ 127)


Example

sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges_length(ThinEdge,EdgeAmp,&CloseEdges,15,3);

Result
close_edges_length returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
close_edges_length is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
edges_image, sobel_amp, threshold, skeleton
Alternatives
close_edges, dilation1, closing
References
M. Üsbeck: “Untersuchungen zur echtzeitfähigen Segmentierung”; Studienarbeit, Bayerisches Forschungszentrum
für Wissensbasierte Systeme (FORWISS), Erlangen, 1993.
Module
Foundation

derivate_gauss ( const Hobject Image, Hobject *DerivGauss,
                 double Sigma, const char *Component )

T_derivate_gauss ( const Hobject Image, Hobject *DerivGauss,
                   const Htuple Sigma, const Htuple Component )

Convolve an image with derivatives of the Gaussian.


derivate_gauss convolves an image with the derivatives of a Gaussian and calculates various features derived
therefrom. Sigma is the parameter of the Gaussian (i.e., the amount of smoothing). If one value is passed in
Sigma the amount of smoothing in the column and row direction is identical. If two values are passed in Sigma
the first value specifies the amount of smoothing in the column direction, while the second value specifies the
amount of smoothing in the row direction. The possible values for Component are:

’none’ Smoothing only.


’x’ First derivative along x.
    g′(x, y) = ∂g(x, y) / ∂x

’y’ First derivative along y.
    g′(x, y) = ∂g(x, y) / ∂y

’gradient’ Absolute value of the gradient.
    g′(x, y) = √( (∂g(x, y)/∂x)² + (∂g(x, y)/∂y)² )

’gradient_dir’ Gradient direction in radians
    φ = atan2( ∂g(x, y)/∂y , ∂g(x, y)/∂x )

’xx’ Second derivative along x.
    g′(x, y) = ∂²g(x, y) / ∂x²


’yy’ Second derivative along y.
    g′(x, y) = ∂²g(x, y) / ∂y²

’xy’ Second derivative along x and y.
    g′(x, y) = ∂²g(x, y) / ∂x∂y

’xxx’ Third derivative along x.
    g′(x, y) = ∂³g(x, y) / ∂x³

’yyy’ Third derivative along y.
    g′(x, y) = ∂³g(x, y) / ∂y³

’xxy’ Third derivative along x, x and y.
    g′(x, y) = ∂³g(x, y) / ∂x²∂y

’xyy’ Third derivative along x, y and y.
    g′(x, y) = ∂³g(x, y) / ∂x∂y²

’det’ Determinant of the Hessian matrix (writing g for g(x, y)):
    DET = ∂²g/∂x² · ∂²g/∂y² − ( ∂²g/∂y∂x )²

’laplace’ Laplace operator (trace of the Hessian matrix):
    TR = ∂²g/∂x² + ∂²g/∂y²

’mean_curvature’ Mean curvature H
    a = (1 + (∂g/∂x)²) · ∂²g/∂y²
    b = 2 · ∂g/∂x · ∂g/∂y · ∂²g/∂y∂x
    c = (1 + (∂g/∂y)²) · ∂²g/∂x²
    d = (1 + (∂g/∂x)² + (∂g/∂y)²)^(3/2)
    H = (a − b + c) / d
    H = (κmin + κmax) / 2

’gauss_curvature’ Gaussian curvature K
    K = DET / (1 + (∂g/∂x)² + (∂g/∂y)²)²

’area’ Differential area A
    A = E·G − F²
    E = 1 + (∂g/∂x)²
    F = ∂g/∂x · ∂g/∂y
    G = 1 + (∂g/∂y)²


’eigenvalue1’ First eigenvalue (writing g for g(x, y)):
    a  = ( ∂²g/∂x² + ∂²g/∂y² ) / 2
    λ1 = a + √( a² − ( ∂²g/∂x² · ∂²g/∂y² − (∂²g/∂y∂x)² ) )

’eigenvalue2’ Second eigenvalue
    a  = ( ∂²g/∂x² + ∂²g/∂y² ) / 2
    λ2 = a − √( a² − ( ∂²g/∂x² · ∂²g/∂y² − (∂²g/∂y∂x)² ) )

’eigenvec_dir’ Direction of the eigenvector corresponding to the first eigenvalue in radians

’main1_curvature’ First principal curvature
    κmax = H + √( H² − K )

’main2_curvature’ Second principal curvature
    κmin = H − √( H² − K )

’kitchen_rosenfeld’ Second derivative perpendicular to the gradient
    k = ( ∂²g/∂x² · (∂g/∂y)² + ∂²g/∂y² · (∂g/∂x)² − 2 · ∂²g/∂y∂x · ∂g/∂x · ∂g/∂y )
        / ( (∂g/∂x)² + (∂g/∂y)² )

’zuniga_haralick’ Normalized second derivative perpendicular to the gradient
    k = ( ∂²g/∂x² · (∂g/∂y)² + ∂²g/∂y² · (∂g/∂x)² − 2 · ∂²g/∂y∂x · ∂g/∂x · ∂g/∂y )
        / ( (∂g/∂x)² + (∂g/∂y)² )^(3/2)

’2nd_ddg’ Second derivative along the gradient
    k = ( ∂²g/∂x² · (∂g/∂x)² + 2 · ∂²g/∂y∂x · ∂g/∂x · ∂g/∂y + ∂²g/∂y² · (∂g/∂y)² )
        / ( (∂g/∂x)² + (∂g/∂y)² )

’de_saint_venant’ Second derivative along and perpendicular to the gradient
    k = ( ∂g/∂x · ∂g/∂y · ( ∂²g/∂x² − ∂²g/∂y² ) − ( (∂g/∂x)² − (∂g/∂y)² ) · ∂²g/∂x∂y )
        / ( (∂g/∂x)² + (∂g/∂y)² )
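
For instance (a sketch, not from the original manual), the mean curvature of the gray value surface can be
computed directly by selecting the corresponding component:

/* Mean curvature image with a Gaussian smoothing of sigma = 2.0. */
derivate_gauss(Image,&MeanCurv,2.0,"mean_curvature");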

Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. DerivGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : real
Filtered result image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Sigma of the Gaussian.
Default Value : 1.0
Suggested values : Sigma ∈ {0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0


. Component (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *


Derivative or feature to be calculated.
Default Value : "x"
List of values : Component ∈ {"none", "x", "y", "gradient", "xx", "yy", "xy", "xxx", "yyy", "xxy", "xyy",
"det", "mean_curvature", "gauss_curvature", "eigenvalue1", "eigenvalue2", "main1_curvature",
"main2_curvature", "kitchen_rosenfeld", "zuniga_haralick", "2nd_ddg", "de_saint_venant", "area", "laplace",
"gradient_dir", "eigenvec_dir"}
Example

read_image(&Image,"mreut");
derivate_gauss(Image,&Gauss,3.0,"x");
zero_crossing(Gauss,&ZeroCrossings);

Parallelization Information
derivate_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, laplace_of_gauss, binomial_filter, gauss_image, smooth_image,
isotropic_diffusion
See also
zero_crossing, dual_threshold
Module
Foundation

diff_of_gauss ( const Hobject Image, Hobject *DiffOfGauss,
                double Sigma, double SigFactor )

T_diff_of_gauss ( const Hobject Image, Hobject *DiffOfGauss,
                  const Htuple Sigma, const Htuple SigFactor )

Approximate the LoG operator (Laplace of Gaussian).


diff_of_gauss approximates the Laplace-of-Gauss operator by a difference of Gaussians. The standard de-
viations of these Gaussians can be calculated, according to Marr, from the Parameter Sigma of the LoG and the
ratio of the two standard deviations (SigFactor) as:

    sigma1 = Sigma / √( −2 · log(1/SigFactor) / (SigFactor² − 1) )

    sigma2 = sigma1 / SigFactor

    DiffOfGauss = (Image ∗ gauss(sigma1)) − (Image ∗ gauss(sigma2))

For a SigFactor = 1.6, according to Marr, an approximation to the Mexican-Hat-Operator results. The resulting
image is stored in DiffOfGauss.
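As an illustration (a sketch, not part of the original manual), the two standard deviations can be computed
explicitly; with the default values Sigma = 3.0 and SigFactor = 1.6 this yields sigma1 ≈ 3.87 and sigma2 ≈ 2.42:

/* Compute the two Gaussian standard deviations used by diff_of_gauss. */
#include <math.h>
#include <stdio.h>

int main(void)
{
  double Sigma = 3.0, SigFactor = 1.6;   /* default parameter values */
  double sigma1 = Sigma / sqrt(-2.0 * log(1.0 / SigFactor)
                               / (SigFactor * SigFactor - 1.0));
  double sigma2 = sigma1 / SigFactor;
  printf("sigma1 = %g, sigma2 = %g\n", sigma1, sigma2);
  return 0;
}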
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte


Input image
. DiffOfGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int2
LoG image.


. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Smoothing parameter of the Laplace operator to approximate.
Default Value : 3.0
Suggested values : Sigma ∈ {2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0
. SigFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Ratio of the standard deviations used (Marr recommends 1.6).
Default Value : 1.6
Typical range of values : 0.1 ≤ SigFactor ≤ 10.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : SigFactor > 0.0
Example

read_image(&Image,"mreut");
diff_of_gauss(Image,&Laplace,2.0,1.6);
zero_crossing(Laplace,&ZeroCrossings);

Complexity
The execution time depends linearly on the number of pixels and the size of sigma.
Result
diff_of_gauss returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
diff_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, derivate_gauss
References
D. Marr: “Vision (A computational investigation into human representation and processing of visual information)”;
New York, W.H. Freeman and Company; 1982.
Module
Foundation

edges_color ( const Hobject Image, Hobject *ImaAmp, Hobject *ImaDir,
              const char *Filter, double Alpha, const char *NMS, Hlong Low,
              Hlong High )

T_edges_color ( const Hobject Image, Hobject *ImaAmp, Hobject *ImaDir,
                const Htuple Filter, const Htuple Alpha, const Htuple NMS,
                const Htuple Low, const Htuple High )

Extract color edges using Canny, Deriche, or Shen filters.


edges_color extracts color edges from the input image Image. To define color edges, the multi-channel image
Image is regarded as a mapping f : R² → Rⁿ, where n is the number of channels in Image. For such functions,
there is a natural extension of the gradient: the metric tensor G, which can be used to calculate, for every direction
given by the direction vector v, the rate of change of f in the direction v. For notational convenience, G will be
regarded as a two-dimensional matrix. Thus, the rate of change of the function f in the direction v is given by
vᵀGv, where

    G = [ fxᵀfx   fxᵀfy ]      with   fxᵀfx = Σ_{i=1..n} (∂fi/∂x)² ,
        [ fxᵀfy   fyᵀfy ]             fxᵀfy = Σ_{i=1..n} (∂fi/∂x)·(∂fi/∂y) ,
                                      fyᵀfy = Σ_{i=1..n} (∂fi/∂y)² .

The partial derivatives of the images, which are necessary to calculate the metric tensor, are calculated with the
corresponding edge filters, analogously to edges_image. For Filter = ’canny’, the partial derivatives of
the Gaussian smoothing masks are used (see derivate_gauss), for ’deriche1’ and Filter = ’deriche2’ the
corresponding Deriche filters, for Filter = ’shen’ the corresponding Shen filters, and for Filter = ’sobel_fast’
the Sobel filter. Analogously to single-channel images, the gradient direction is defined by the vector v in which the
rate of change f is maximum. The vector v is given by the eigenvector corresponding to the largest eigenvalue of
G. The square root of the eigenvalue is the equivalent of the gradient magnitude (the amplitude) for single-channel
images, and is returned in ImaAmp. For single-channel images, both definitions are equivalent. Since the gradient
magnitude may be larger than what can be represented in the input image data type (byte or uint2), it is stored in
the next larger data type (uint2 or int4) in ImaAmp. The eigenvector also is used to define the edge direction. In
contrast to single-channel images, the edge direction can only be defined modulo 180 degrees. Like in the output
of edges_image, the edge directions are stored in 2-degree steps, and are returned in ImaDir. Points with
edge amplitude 0 are assigned the edge direction 255 (undefined direction). For speed reasons, the edge directions
are not computed explicitly for Filter = ’sobel_fast’. Therefore, ImaDir is an empty object in this case.
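
As an illustration of this definition (a sketch, not taken from the manual), the amplitude at a single pixel can be
computed from the per-channel derivatives as the square root of the largest eigenvalue of the 2×2 metric tensor G:

/* Edge amplitude at one pixel from the per-channel partial derivatives
   fx[i], fy[i] of an n-channel image (square root of the largest
   eigenvalue of the metric tensor G).                                  */
#include <math.h>

double color_edge_amplitude(const double *fx, const double *fy, int n)
{
  double g11 = 0.0, g12 = 0.0, g22 = 0.0;
  double trace_half, det, lambda_max;
  int    i;

  for (i = 0; i < n; i++)            /* accumulate the metric tensor G */
  {
    g11 += fx[i] * fx[i];
    g12 += fx[i] * fy[i];
    g22 += fy[i] * fy[i];
  }
  trace_half = 0.5 * (g11 + g22);    /* largest eigenvalue of the symmetric 2x2 matrix */
  det        = g11 * g22 - g12 * g12;
  lambda_max = trace_half + sqrt(trace_half * trace_half - det);
  return sqrt(lambda_max);           /* analogue of the gradient magnitude */
}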
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete values
of the parameter Alpha. It decreases for increasing Alpha for the Deriche and Shen filters and increases for
the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide”
filters exhibit a larger invariance to noise, but also a decreased ability to detect small details. Non-recursive filters,
such as the Canny filter, are realized using filter masks, and thus the execution time increases for increasing filter
width. In contrast, the execution time for recursive filters does not depend on the filter width. Thus, arbitrary
filter widths are possible using the Deriche and Shen filters without increasing the run time of the operator. The
resulting advantage in speed compared to the Canny operator naturally increases for larger filter widths. As border
treatment, the recursive operators assume that the images are zero outside of the image, while the Canny operator
mirrors the gray value at the image border. Comparable filter widths can be obtained by the following choices of
Alpha:

    Alpha(’deriche2’) = Alpha(’deriche1’) / 2
    Alpha(’shen’)     = Alpha(’deriche1’) / 2
    Alpha(’canny’)    = 1.77 / Alpha(’deriche1’)

edges_color optionally offers to apply a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if


not desired) and hysteresis threshold operation (Low,High; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:

nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,1000,...)

For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImaAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : uint2 / int4
Edge amplitude (gradient magnitude) image.
. ImaDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : direction
Edge direction image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Edge operator to be applied.
Default Value : "canny"
List of values : Filter ∈ {"canny", "deriche1", "deriche2", "shen", "sobel_fast"}


. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 1.0
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 2.5, 3.0}
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. NMS (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Non-maximum suppression (’none’, if not desired).
Default Value : "nms"
List of values : NMS ∈ {"nms", "inms", "hvnms", "none"}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Lower threshold for the hysteresis threshold operation (negative if no thresholding is desired).
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low
Minimum Increment : 1
Recommended Increment : 5
Restriction : (Low ≥ 1) ∨ (Low < 0)
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Upper threshold for the hysteresis threshold operation (negative if no thresholding is desired).
Default Value : 40
Suggested values : High ∈ {10, 15, 20, 25, 30, 40, 50, 60, 70}
Typical range of values : 1 ≤ High
Minimum Increment : 1
Recommended Increment : 5
Restriction : ((High ≥ 1) ∨ (High < 0)) ∧ (High ≥ Low)
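Example

The following call sketch is not part of the original manual; the color image "patras" and the parameter values
are only placeholders.

/* Extract color edges with the Canny filter (sigma 1.0), NMS and hysteresis. */
read_image(&Image,"patras");
edges_color(Image,&ImaAmp,&ImaDir,"canny",1.0,"nms",20,40);
/* ImaAmp can now be segmented further, e.g. with threshold. */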
Result
edges_color returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If the
input is empty, the behavior can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception is raised.
Parallelization Information
edges_color is reentrant and automatically parallelized (on tuple level).
Possible Successors
threshold
Alternatives
edges_color_sub_pix
See also
edges_image, edges_sub_pix, info_edges, nonmax_suppression_amp,
hysteresis_threshold
References
C. Steger: “Subpixel-Precise Extraction of Lines and Edges”; International Archives of Photogrammetry and
Remote Sensing, vol. XXXIII, part B3; pp. 141-156; 2000.
C. Steger: “Unbiased Extraction of Curvilinear Structures from 2D and 3D Images”; Herbert Utz Verlag, München;
1998.
S. Di Zenzo: “A Note on the Gradient of a Multi-Image”; Computer Vision, Graphics, and Image Processing, vol.
33; pp. 116-125; 1986.
Aldo Cumani: “Edge Detection in Multispectral Images”; Computer Vision, Graphics, and Image Processing:
Graphical Models and Image Processing, vol. 53, no. 1; pp. 40-51; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.


R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.
Module
Foundation

edges_color_sub_pix ( const Hobject Image, Hobject *Edges,
                      const char *Filter, double Alpha, double Low, double High )

T_edges_color_sub_pix ( const Hobject Image, Hobject *Edges,
                        const Htuple Filter, const Htuple Alpha, const Htuple Low,
                        const Htuple High )

Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
edges_color_sub_pix extracts subpixel precise color edges from the input image Image. The definition
of color edges is given in the description of edges_color. The same edge filters as in edges_color
can be selected: ’canny’, ’deriche1’, ’deriche2’, and ’shen’. In addition, a fast Sobel filter can be selected with
’sobel_fast’. The filters are specified by the parameter Filter.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily. For a detailed description of this
parameter see edges_color. This parameter is ignored for Filter = ’sobel_fast’.
The extracted edges are returned as subpixel precise XLD contours in Edges. For all edge operators except for
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
edges_color_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis thresh-
old operation, which is also used in edges_sub_pix and lines_gauss. Points with an amplitude larger
than High are immediately accepted as belonging to an edge, while points with an amplitude smaller than Low
are rejected. All other points are accepted as edges if they are connected to accepted edge points (see also
lines_gauss and hysteresis_threshold).
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in edges_sub_pix and
lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image.
. Edges (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted edges.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Edge operator to be applied.
Default Value : "canny"
List of values : Filter ∈ {"canny", "deriche1", "deriche2", "shen", "sobel_fast", "canny_junctions",
"deriche1_junctions", "deriche2_junctions", "shen_junctions"}


. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 1.0
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 2.5, 3.0}
Typical range of values : 0.7 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low
Minimum Increment : 1
Recommended Increment : 5
Restriction : Low > 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 40
Suggested values : High ∈ {10, 15, 20, 25, 30, 40, 50, 60, 70}
Typical range of values : 1 ≤ High
Minimum Increment : 1
Recommended Increment : 5
Restriction : (High > 0) ∧ (High ≥ Low)
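Example

The following call sketch is not part of the original manual; the color image "patras" and the parameter values
are only placeholders.

/* Extract subpixel precise color edges as XLD contours. */
read_image(&Image,"patras");
edges_color_sub_pix(Image,&Edges,"canny",1.5,20.0,40.0);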
Result
edges_color_sub_pix returns H_MSG_TRUE if all parameters are correct and no error occurs during
execution. If the input is empty, the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception is raised.
Parallelization Information
edges_color_sub_pix is reentrant and processed without parallelization.
Alternatives
edges_color
See also
edges_image, edges_sub_pix, info_edges, hysteresis_threshold, lines_gauss,
lines_facet
References
C. Steger: “Subpixel-Precise Extraction of Lines and Edges”; International Archives of Photogrammetry and
Remote Sensing, vol. XXXIII, part B3; pp. 141-156; 2000.
C. Steger: “Unbiased Extraction of Curvilinear Structures from 2D and 3D Images”; Herbert Utz Verlag, München;
1998.
S. Di Zenzo: “A Note on the Gradient of a Multi-Image”; Computer Vision, Graphics, and Image Processing, vol.
33; pp. 116-125; 1986.
Aldo Cumani: “Edge Detection in Multispectral Images”; Computer Vision, Graphics, and Image Processing:
Graphical Models and Image Processing, vol. 53, no. 1; pp. 40-51; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.


Module
2D Metrology

edges_image ( const Hobject Image, Hobject *ImaAmp, Hobject *ImaDir,
              const char *Filter, double Alpha, const char *NMS, Hlong Low,
              Hlong High )

T_edges_image ( const Hobject Image, Hobject *ImaAmp, Hobject *ImaDir,
                const Htuple Filter, const Htuple Alpha, const Htuple NMS,
                const Htuple Low, const Htuple High )

Extract edges using Deriche, Lanser, Shen, or Canny filters.


edges_image detects step edges using recursively implemented filters (according to Deriche, Lanser and Shen)
or the conventionally implemented “derivative of Gaussian” filter (using filter masks) proposed by Canny.
Furthermore, a very fast variant of the Sobel filter can be used. Thus, the following edge operators are available:
’deriche1’, ’lanser1’, ’deriche1_int4’, ’deriche2’, ’lanser2’, ’deriche2_int4’, ’shen’, ’mshen’, ’canny’, and
’sobel_fast’ (parameter Filter).
The edge amplitudes (gradient magnitude) are returned in ImaAmp. It should be noted that for ’sobel_fast’, for
speed reasons, an algorithm is used internally that computes the x and y derivatives with a restricted value range of
[−128, 127] for byte images and [−32768, 32767] for uint2 images. Consequently, an ideal horizontal or vertical
step edge with an amplitude of more than 128 can assume a maximum amplitude of 128 or 32768, respectively,
in ImaAmp, while an ideal 45 degree step edge can assume a maximum amplitude of 181 or 46341, respectively.
Since ideal step edges typically never occur in real images because the edges are smoothed by the optics and
camera this limitation very rarely has any influence on the application.
For all filters except ’sobel_fast’, the edge directions are returned in ImaDir. For ’sobel_fast’, the edge direction
is not computed to speed up the filter. Consequently, ImaDir is an empty image object. The edge operators
’deriche1’ and ’deriche2’ are also available for int4-images, and return the signed filter response instead of its
absolute value. This behavior can be obtained for byte-images as well by selecting ’deriche1_int4’ or
’deriche2_int4’ as filter. This can be used to calculate the second derivative of an image by applying edges_image
(with parameter ’lanser2’) to the signed first derivative. Edge directions are stored in 2-degree steps, i.e., an edge
direction of x degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore,
the direction of the change of intensity is taken into account. Let [Ex, Ey] denote the image gradient. Then the
following edge directions are returned as r/2:

    intensity increase               Ex/Ey    edge direction r
    from bottom to top                0/+            0
    from lower right to upper left    +/−         ]0, 90[
    from right to left                +/0           90
    from upper right to lower left    +/+        ]90, 180[
    from top to bottom                0/+           180
    from upper left to lower right    −/+       ]180, 270[
    from left to right                +/0           270
    from lower left to upper right    −/−       ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete
values of the parameter Alpha. It decreases for increasing Alpha for the Deriche, Lanser and Shen filters and
increases for the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator
is based. “Wide” filters exhibit a larger invariance to noise, but also a decreased ability to detect small details.
Non-recursive filters, such as the Canny filter, are realized using filter masks, and thus the execution time increases
for increasing filter width. In contrast, the execution time for recursive filters does not depend on the filter width.
Thus, arbitrary filter widths are possible using the Deriche, Lanser and Shen filters without increasing the run time


of the operator. The resulting advantage in speed compared to the Canny operator naturally increases for larger
filter widths. As border treatment, the recursive operators assume the images to be zero outside of the image,
while the Canny operator repeats the gray value at the image’s border. Comparable filter widths can be obtained
by the following choices of Alpha:

    Alpha(’lanser1’)  = Alpha(’deriche1’)
    Alpha(’deriche2’) = Alpha(’deriche1’) / 2
    Alpha(’lanser2’)  = Alpha(’deriche2’)
    Alpha(’shen’)     = Alpha(’deriche1’) / 2
    Alpha(’mshen’)    = Alpha(’shen’)
    Alpha(’canny’)    = 1.77 / Alpha(’deriche1’)

The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified versions of the operators (’lanser1’, ’lanser2’
and ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (11 × 11), i.e., for Alpha(’lanser2’) = 0.5, all filters yield similar results. Only for
“wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators — closely followed by the Deriche operators.
edges_image optionally offers to apply a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if
not desired) and hysteresis threshold operation (Low,High; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:

nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,999,...)

For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / int4


Input image.
. ImaAmp (output_object) . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2 / int4 / real
Edge amplitude (gradient magnitude) image.
. ImaDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : direction
Edge direction image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Edge operator to be applied.
Default Value : "lanser2"
List of values : Filter ∈ {"deriche1", "deriche1_int4", "deriche2", "deriche2_int4", "lanser1", "lanser2",
"shen", "mshen", "canny", "sobel_fast"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. NMS (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Non-maximum suppression (’none’, if not desired).
Default Value : "nms"
List of values : NMS ∈ {"nms", "inms", "hvnms", "none"}


. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Lower threshold for the hysteresis threshold operation (negative, if no thresholding is desired).
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : (Low > 1) ∨ (Low < 0)
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Upper threshold for the hysteresis threshold operation (negative, if no thresholding is desired).
Default Value : 40
Suggested values : High ∈ {10, 15, 20, 25, 30, 40, 50, 60, 70}
Typical range of values : 1 ≤ High ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : ((High > 1) ∨ (High < 0)) ∧ (High ≥ Low)
Example

read_image(&Image,"fabrik");
edges_image(Image,&Amp,&Dir,"lanser2",0.5,"none",-1,-1);
hysteresis_threshold(Amp,&Margin,20,30,30);

Result
edges_image returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty, the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
edges_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
info_edges
Possible Successors
threshold, hysteresis_threshold, close_edges_length
Alternatives
sobel_dir, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
info_edges, nonmax_suppression_amp, hysteresis_threshold, bandpass_image
References
S. Lanser, W. Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion”; 13. DAGM-Symposium,
München; Informatik Fachberichte 290; pp. 151-158; Springer-Verlag; 1991.
S. Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche”; Diplomarbeit; Technische
Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J. Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J. Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; pp. 679-698; 1986.
R. Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R. Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; pp. 501-505; 1987.
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-12, no. 1; pp. 78-87; 1990.
S. Castan, J. Zhao, J. Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First European
Conference on Computer Vision, Antibes; Lecture Notes in Computer Science, no. 427; pp. 12-17; Springer-
Verlag; 1990.


Module
Foundation

edges_sub_pix ( const Hobject Image, Hobject *Edges, const char *Filter,
                double Alpha, Hlong Low, Hlong High )

T_edges_sub_pix ( const Hobject Image, Hobject *Edges, const Htuple Filter,
                  const Htuple Alpha, const Htuple Low, const Htuple High )

Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
edges_sub_pix detects step edges using recursively implemented filters (according to Deriche, Lanser and
Shen) or the conventionally implemented “derivative of Gaussian” filter (using filter masks) proposed by Canny.
Thus, the following edge operators are available:
’deriche1’, ’lanser1’, ’deriche2’, ’lanser2’, ’shen’, ’mshen’, ’canny’, ’sobel’, and ’sobel_fast’
(parameter Filter).
The extracted edges are returned as sub-pixel precise XLD contours in Edges. For all edge operators except
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all edge operators except ’sobel’
and ’sobel_fast’, and can be estimated by calling info_edges for concrete values of the parameter Alpha. It
decreases for increasing Alpha for the Deriche, Lanser and Shen filters and increases for the Canny filter, where
it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide” filters exhibit a larger
invariance to noise, but also a decreased ability to detect small details. Non-recursive filters, such as the Canny
filter, are realized using filter masks, and thus the execution time increases for increasing filter width. In contrast,
the execution time for recursive filters does not depend on the filter width. Thus, arbitrary filter widths are possible
using the Deriche, Lanser and Shen filters without increasing the run time of the operator. The resulting advantage
in speed compared to the Canny operator naturally increases for larger filter widths. As border treatment, the
recursive operators assume the image to be zero outside of the image, while the Canny operator repeats the
gray value at the image’s border. Comparable filter widths can be obtained by the following choices of Alpha:

Alpha(’lanser1’)  = Alpha(’deriche1’)
Alpha(’deriche2’) = Alpha(’deriche1’) / 2
Alpha(’lanser2’)  = Alpha(’deriche2’)
Alpha(’shen’)     = Alpha(’deriche1’) / 2
Alpha(’mshen’)    = Alpha(’shen’)
Alpha(’canny’)    = 1.77 / Alpha(’deriche1’)
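For example, roughly comparable filter widths for the ’deriche1’ and ’canny’ filters could be requested as follows
(a sketch derived from the relations above; the parameter values are only illustrative):

/* Alpha(’canny’) = 1.77 / Alpha(’deriche1’) */
edges_sub_pix(Image,&EdgesDeriche,"deriche1",0.5,20,40);
edges_sub_pix(Image,&EdgesCanny,"canny",1.77/0.5,20,40);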

The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified versions of the operators (’lanser1’, ’lanser2’,
and ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (11 × 11), i.e., for Alpha(’lanser2’) = 0.5, all filters yield similar results. Only for
“wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators that support arbitrary mask sizes, closely followed by the Deriche
operators. The two Sobel filters, which use a fixed mask size of (3 × 3), are faster than the other filters. Of these
two, the filter ’sobel_fast’ is significantly faster than ’sobel’.
edges_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss. Points with an amplitude larger than High are immediately
accepted as belonging to an edge, while points with an amplitude smaller than Low are rejected. All other
points are accepted as edges if they are connected to accepted edge points (see also lines_gauss and
hysteresis_threshold).
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
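As an illustration, the junction completion mode is selected simply by appending ’_junctions’ to the filter name.
A minimal sketch (the parameter values and the variable NumEdges are only examples):

read_image(&Image,"fabrik");
/* extract sub-pixel edges and additionally try to complete missing junctions */
edges_sub_pix(Image,&Edges,"lanser2_junctions",0.5,20,40);
count_obj(Edges,&NumEdges);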
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2


Input image.
. Edges (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted edges.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Edge operator to be applied.
Default Value : "lanser2"
List of values : Filter ∈ {"deriche1", "lanser1", "deriche2", "lanser2", "shen", "mshen", "canny", "sobel",
"sobel_fast", "deriche1_junctions", "lanser1_junctions", "deriche2_junctions", "lanser2_junctions",
"shen_junctions", "mshen_junctions", "canny_junctions", "sobel_junctions"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 20
Suggested values : Low ∈ {5, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Low ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : Low > 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 40
Suggested values : High ∈ {10, 15, 20, 25, 30, 40, 50, 60, 70}
Typical range of values : 1 ≤ High ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Restriction : (High > 0) ∧ (High ≥ Low)
Example

read_image(&Image,"fabrik");
edges_sub_pix(Image,&Edges,"lanser2",0.5,20,40);

Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma) for the
Canny filter and O(A) for the recursive Lanser, Deriche, and Shen filters.
Let S = Width ∗ Height be the number of pixels of Image. Then edges_sub_pix requires at least 60 ∗ S bytes
of temporary memory during execution for all edge operators except ’sobel_fast’. For ’sobel_fast’, at least 9 ∗ S
bytes of temporary memory are required.
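For example, for a 640 × 480 image (S = 307200) this amounts to at least roughly 18.4 MB of temporary memory
for all edge operators except ’sobel_fast’, and roughly 2.8 MB for ’sobel_fast’ (illustrative figures derived from the
bounds above).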
Result
edges_sub_pix returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.


If the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
edges_sub_pix is reentrant and automatically parallelized (on tuple level).
Alternatives
sobel_dir, frei_dir, kirsch_dir, prewitt_dir, robinson_dir, edges_image
See also
info_edges, hysteresis_threshold, bandpass_image, lines_gauss, lines_facet
References
S.Lanser, W.Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion”; 13. DAGM-Symposium,
München; Informatik Fachberichte 290; Seite 151 - 158; Springer-Verlag; 1991.
S.Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche”; Diplomarbeit; Technische Univer-
sität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; S. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; S. 167-187; 1987.
R.Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; S. 501-505; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; S. 78-87; 1990.
S.Castan, J.Zhao und J.Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First Euro-
pean Conference on Computer Vision, Antibes; Lecture Notes on computer Science; no. 427; S. 12-17; Springer-
Verlag; 1990.
Module
2D Metrology

frei_amp ( const Hobject Image, Hobject *ImageEdgeAmp )


T_frei_amp ( const Hobject Image, Hobject *ImageEdgeAmp )

Detect edges (amplitude) using the Frei-Chen operator.


frei_amp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:

     1    √2    1
A =  0     0    0
    −1   −√2   −1

     1    0   −1
B = √2    0  −√2
     1    0   −1

The result image contains the maximum response of the masks A and B.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.


Example

read_image(&Image,"fabrik");
frei_amp(Image,&Frei_amp);
threshold(Frei_amp,&Edges,128,255);

Result
frei_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, kirsch_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation

frei_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
           Hobject *ImageEdgeDir )

T_frei_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
             Hobject *ImageEdgeDir )

Detect edges (amplitude and direction) using the Frei-Chen operator.


frei_dir calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:

     1    √2    1
A =  0     0    0
    −1   −√2   −1

     1    0   −1
B = √2    0  −√2
     1    0   −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:

intensity increase               Ex/Ey    edge direction r
from bottom to top                0/+            0
from lower right to upper left    +/−         ]0, 90[
from right to left                +/0           90
from upper right to lower left    +/+        ]90, 180[
from top to bottom                0/−          180
from upper left to lower right    −/+       ]180, 270[
from left to right                −/0          270
from lower left to upper right    −/−       ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
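A typical thinning and linking pipeline based on the direction image, built from operators listed under “Possible
Successors”, might look like this (a sketch; the NMS mode and the threshold values are only illustrative):

read_image(&Image,"fabrik");
frei_dir(Image,&Amp,&Dir);
/* thin the amplitude image along the gradient direction, then link edge points */
nonmax_suppression_dir(Amp,Dir,&Thin,"nms");
hysteresis_threshold(Thin,&Edges,20,40,30);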


Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
Example

read_image(&Image,"fabrik");
frei_dir(Image,&Frei_dirA,&Frei_dirD);
threshold(Frei_dirA,&Res,128,255);

Result
frei_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation

highpass_image ( const Hobject Image, Hobject *Highpass, Hlong Width,
                 Hlong Height )

T_highpass_image ( const Hobject Image, Hobject *Highpass,
                   const Htuple Width, const Htuple Height )

Extract high frequency components from an image.


highpass_image extracts high frequency components in an image by applying a linear filter with the following
matrix (in case of a 7 × 5 matrix):

1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 −35 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1

This corresponds to applying a mean operator ( mean_image), and then subtracting the original gray value. A
value of 128 is added to the result, i.e., zero crossings occur for 128.
This filter emphasizes high frequency components (edges and corners). The cutoff frequency is determined by the
size (Height × Width) of the filter matrix: the larger the matrix, the smaller the cutoff frequency is.
At the image borders the pixels’ gray values are mirrored. In case of over- or underflow the gray values are clipped
(255 and 0, resp.).
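Conceptually, the operator therefore behaves similarly to the following sequence (a rough sketch that ignores
border treatment and clipping; the mask size 7 × 5 is only an example):

/* roughly: Highpass = mean(Image) - Image + 128 */
mean_image(Image,&Mean,7,5);
sub_image(Mean,Image,&Highpass,1.0,128.0);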


Attention
If even values are passed for Height or Width, the operator uses the next larger odd value instead. Thus, the
center of the filter mask is always uniquely determined.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte


Input image.
. Highpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
High-pass-filtered result image.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 9
Suggested values : Width ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Width ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Width ≥ 3) ∧ odd(Width)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 9
Suggested values : Height ∈ {3, 5, 7, 9, 11, 13, 17, 21, 29, 41, 51, 73, 101}
Typical range of values : 3 ≤ Height ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Height ≥ 3) ∧ odd(Height)
Example

highpass_image(Image,&Highpass,7,5);
threshold(Highpass,&Region,60.0,255.0);
skeleton(Region,&Skeleton);

Result
highpass_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
highpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
mean_image, sub_image, convol_image, bandpass_image
See also
dyn_threshold
Module
Foundation

T_info_edges ( const Htuple Filter, const Htuple Mode,
               const Htuple Alpha, Htuple *Size, Htuple *Coeffs )

Estimate the width of a filter in edges_image.


info_edges returns an estimate of the width of any of the filters used in edges_image. To do so, the
corresponding continuous impulse responses of the filters are sampled until the first filter coefficient is smaller
than five percent of the largest coefficient. Alpha is the filter parameter (see edges_image). Seven edge
operators are supported (parameter Filter):


’deriche1’, ’lanser1’, ’deriche2’, ’lanser2’, ’shen’, ’mshen’, and ’canny’.


The parameter Mode (’edge’/’smooth’) is used to determine whether the corresponding edge or smoothing operator
is to be sampled. The Canny operator (which uses the Gaussian for smoothing) is implemented using conventional
filter masks, while all other filters are implemented recursively. Therefore, for the Canny filter the coefficients of
the one-dimensional impulse responses f (n) with n ≥ 0 are returned in Coeffs in addition to the filter width.
Parameter
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Name of the edge operator.
Default Value : "lanser2"
List of values : Filter ∈ {"deriche1", "lanser1", "deriche2", "lanser2", "shen", "mshen", "canny"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
1D edge filter (’edge’) or 1D smoothing filter (’smooth’).
Default Value : "edge"
List of values : Mode ∈ {"edge", "smooth"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for ’canny’).
Default Value : 0.5
Typical range of values : 0.2 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. Size (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Filter width in pixels.
. Coeffs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
For Canny filters: Coefficients of the “positive” half of the 1D impulse response.
Example

read_image(&Image,"fabrik");
info_edges("lanser2","edge",0.5,Size,Coeffs) ;
edges_image(Image,&Amp,&Dir,"lanser2",0.5,"none",-1,-1);
hysteresis_threshold(Amp,&Margin,20,30,30);

Result
info_edges returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
info_edges is reentrant and processed without parallelization.
Possible Successors
edges_image, threshold, skeleton
See also
edges_image
Module
Foundation

kirsch_amp ( const Hobject Image, Hobject *ImageEdgeAmp )


T_kirsch_amp ( const Hobject Image, Hobject *ImageEdgeAmp )

Detect edges (amplitude) using the Kirsch operator.


kirsch_amp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:

−3 −3 5
−3 0 5
−3 −3 5


−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5

The result image contains the maximum response of all masks.


Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
Example

read_image(&Image,"fabrik");
kirsch_amp(Image,&Kirsch_amp);
threshold(Kirsch_amp,&Edges,128,255);

Result
kirsch_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation


kirsch_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
             Hobject *ImageEdgeDir )

T_kirsch_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
               Hobject *ImageEdgeDir )

Detect edges (amplitude and direction) using the Kirsch operator.


kirsch_dir calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:

−3 −3 5
−3 0 5
−3 −3 5

−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5

The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
Example

read_image(&Image,"fabrik");
kirsch_dir(Image,&Kirsch_dirA,&Kirsch_dirD);
threshold(Kirsch_dirA,&Res,128,255);


Result
kirsch_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, frei_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation

laplace ( const Hobject Image, Hobject *ImageLaplace,
          const char *ResultType, Hlong MaskSize, const char *FilterMask )

T_laplace ( const Hobject Image, Hobject *ImageLaplace,
            const Htuple ResultType, const Htuple MaskSize,
            const Htuple FilterMask )

Calculate the Laplace operator by using finite differences.


laplace filters the input images Image using a Laplace operator. Depending on the parameter FilterMask
the following approximations of the Laplace operator are used:
’n_4’
1
1 −4 1
1

’n_8’
1 1 1
1 −8 1
1 1 1

’n_8_isotropic’

10 22 10
22 −128 22
10 22 10

For the three filter masks, the following normalizations of the resulting gray values are applied (i.e., the result is
divided by the given divisor): ’n_4’ normalization by 1, ’n_8’ normalization by 2, and ’n_8_isotropic’
normalization by 32.
For a Laplace operator with size 3 × 3, the corresponding filter is applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial fil-
ter (see binomial_filter) of size MaskSize-2. The Gaussian filter is selected for the above values of
ResultType. Here, MaskSize = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending
’_binomial’ to the above values of ResultType. Here, MaskSize can be selected between 5 and 39. Fur-
thermore, it is possible to select different amounts of smoothing for the column and row direction by passing two
values in MaskSize. Here, the first value of MaskSize corresponds to the mask width (smoothing in the column
direction), while the second value corresponds to the mask height (smoothing in the row direction) of the binomial
filter. Therefore,


laplace(O:R:’absolute’,MaskSize,N:)

for MaskSize > 3 is equivalent to

gauss_image(O:G:MaskSize-2:)
laplace(G:R:’absolute’,3,N:)

and

laplace(O:R:’absolute_binomial’,MaskSize,N:)

is equivalent to

binomial_filter(O:B:MaskSize-2,MaskSize-2:)
laplace(B:R:’absolute’,3,N:)
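
In HALCON/C syntax, the first equivalence could be written roughly as follows (a sketch; the mask size 7 and the
filter mask ’n_4’ are only examples):

/* roughly equivalent to laplace(Image,&Result,"absolute",7,"n_4") */
gauss_image(Image,&Smoothed,5); /* MaskSize-2 = 5 */
laplace(Smoothed,&Result,"absolute",3,"n_4");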

laplace either returns the absolute value of the Laplace filtered image (ResultType = ’absolute’) in a byte
or uint2 image or the signed result (ResultType = ’signed’ or ’signed_clipped’). Here, the output image type
has the same number of bytes per pixel as the input image (i.e., int1 or int2) for ’signed_clipped’, while the output
image has the next larger number of bytes per pixel (i.e., int2 or int4) for ’signed’.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Input image.
. ImageLaplace (output_object) . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2 / int1 / int2 / int4
Laplace-filtered result image.
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of the result image, whereas for byte and uint2 the absolute value is used.
Default Value : "absolute"
List of values : ResultType ∈ {"absolute", "signed_clipped", "signed", "absolute_binomial",
"signed_clipped_binomial", "signed_binomial"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Size of filter mask.
Default Value : 3
List of values : MaskSize ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
. FilterMask (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Filter mask used in the Laplace operator
Default Value : "n_4"
List of values : FilterMask ∈ {"n_4", "n_8", "n_8_isotropic"}
Example

read_image(&Image,"mreut");
laplace(Image,&Laplace,"signed",3,"n_8_isotropic");
zero_crossing(Laplace,&ZeroCrossings);

Result
laplace returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
laplace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold, threshold
Alternatives
diff_of_gauss, laplace_of_gauss, derivate_gauss


See also
highpass_image, edges_image
Module
Foundation

laplace_of_gauss ( const Hobject Image, Hobject *ImageLaplace,
                   double Sigma )

T_laplace_of_gauss ( const Hobject Image, Hobject *ImageLaplace,
                     const Htuple Sigma )

LoG-Operator (Laplace of Gaussian).


laplace_of_gauss calculates the Laplace-of-Gaussian operator, i.e., the Laplace operator on a Gaussian
smoothed image, for arbitrary smoothing parameters Sigma. The Laplace operator is given by:

∆g(x, y) = ∂²g(x, y)/∂x² + ∂²g(x, y)/∂y²

The derivatives in laplace_of_gauss are calculated by appropriate derivatives of the Gaussian, resulting in
the following formula for the convolution mask:

∆Gσ(x, y) = 1/(2πσ⁴) · ((x² + y²)/(2σ²) − 1) · exp(−(x² + y²)/(2σ²))
Parameter
. Image (input_object) . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. ImageLaplace (output_object) . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int2
Laplace filtered image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Smoothing parameter of the Gaussian.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0}
Typical range of values : 0.7 ≤ Sigma ≤ 5.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (Sigma > 0.7) ∧ (Sigma ≤ 25.0)
Example

read_image(&Image,"mreut");
laplace_of_gauss(Image,&Laplace,2.0);
zero_crossing(Laplace,&ZeroCrossings);

Parallelization Information
laplace_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, diff_of_gauss, derivate_gauss
See also
derivate_gauss
Module
Foundation


prewitt_amp ( const Hobject Image, Hobject *ImageEdgeAmp )


T_prewitt_amp ( const Hobject Image, Hobject *ImageEdgeAmp )

Detect edges (amplitude) using the Prewitt operator.


prewitt_amp calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:

1 1 1
A= 0 0 0
−1 −1 −1

1 0 −1
B= 1 0 −1
1 0 −1

The result image contains the maximum response of the masks A and B.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
Example

read_image(&Image,"fabrik");
prewitt_amp(Image,&Prewitt);
threshold(Prewitt,&Edges,128,255);

Result
prewitt_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
threshold, gray_skeleton, nonmax_suppression_amp, close_edges,
close_edges_length
Alternatives
sobel_amp, kirsch_amp, frei_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation

prewitt_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
              Hobject *ImageEdgeDir )

T_prewitt_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
                Hobject *ImageEdgeDir )

Detect edges (amplitude and direction) using the Prewitt operator.


prewitt_dir calculates an approximation of the first derivative of the image data and is used as an edge detector.
The filter is based on the following filter masks:


1 1 1
A= 0 0 0
−1 −1 −1

1 0 −1
B= 1 0 −1
1 0 −1

The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:

intensity increase               Ex/Ey    edge direction r
from bottom to top                0/+            0
from lower right to upper left    +/−         ]0, 90[
from right to left                +/0           90
from upper right to lower left    +/+        ]90, 180[
from top to bottom                0/−          180
from upper left to lower right    −/+       ]180, 270[
from left to right                −/0          270
from lower left to upper right    −/−       ]270, 360[

Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
Example

read_image(&Image,"fabrik");
prewitt_dir(Image,&PrewittA,&PrewittD);
threshold(PrewittA,&Edges,128,255);

Result
prewitt_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, frei_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation


roberts ( const Hobject Image, Hobject *ImageRoberts,
          const char *FilterType )

T_roberts ( const Hobject Image, Hobject *ImageRoberts,
            const Htuple FilterType )

Detect edges using the Roberts filter.


roberts calculates the first derivative of an image and is used as an edge operator. If the following mask describes
a part of the image,

A B
C D

the different filter types are defined as follows:

’roberts_max’ max(|A − D|, |B − C|)


’gradient_max’ max(|A + B − (C + D)|, |A + C − (B + D)|)
’gradient_sum’ |A + B − (C + D)| + |A + C − (B + D)|

If an overflow occurs the result is clipped. The result of the operator is stored at the pixel with the coordinates of
“D”.
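For illustration, consider a 2 × 2 neighborhood with A = 10, B = 20, C = 30, D = 40 (values chosen arbitrarily):
’roberts_max’ yields max(|10 − 40|, |20 − 30|) = 30, ’gradient_max’ yields max(|30 − 70|, |40 − 60|) = 40, and
’gradient_sum’ yields 40 + 20 = 60; in each case the result is stored at the position of D.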
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageRoberts (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Roberts-filtered result images.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Filter type.
Default Value : "gradient_sum"
List of values : FilterType ∈ {"roberts_max", "gradient_max", "gradient_sum"}
Example

read_image(&Image,"fabrik");
roberts(Image,&Roberts,"roberts_max");
threshold(Roberts,&Margin,128.0,255.0);

Result
roberts returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
roberts is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image
Possible Successors
threshold, skeleton
Alternatives
edges_image, sobel_amp, frei_amp, kirsch_amp, prewitt_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation


robinson_amp ( const Hobject Image, Hobject *ImageEdgeAmp )


T_robinson_amp ( const Hobject Image, Hobject *ImageEdgeAmp )

Detect edges (amplitude) using the Robinson operator.


robinson_amp calculates an approximation of the first derivative of the image data and is used as an edge
detector. In robinson_amp the following four of the originally proposed eight 3 × 3 filter masks are convolved
with the image. The other four masks are obtained by a multiplication by -1. All masks contain only the values
0,1,-1,2,-2.

−1 0 1
−2 0 2
−1 0 1

2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1

The result image contains the maximum response of all masks.


Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
Example

read_image(&Image,"fabrik");
robinson_amp(Image,&Robinson_amp);
threshold(Robinson_amp,&Edges,128,255);

Result
robinson_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, kirsch_amp, prewitt_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation


robinson_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
               Hobject *ImageEdgeDir )

T_robinson_dir ( const Hobject Image, Hobject *ImageEdgeAmp,
                 Hobject *ImageEdgeDir )

Detect edges (amplitude and direction) using the Robinson operator.


robinson_dir calculates an approximation of the first derivative of the image data and is used as an edge
detector. In robinson_dir the following four of the originally proposed eight 3 × 3 filter masks are convolved
with the image. The other four masks are obtained by a multiplication by -1. All masks contain only the values
0,1,-1,2,-2.

−1 0 1
−2 0 2
−1 0 1

2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1

The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
Example

read_image(&Image,"fabrik");
robinson_dir(Image,&Robinson_dirA,&Robinson_dirD);
threshold(Robinson_dirA,&Res,128,255);

Result
robinson_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, kirsch_dir, prewitt_dir, frei_dir


See also
bandpass_image, laplace_of_gauss
Module
Foundation

sobel_amp ( const Hobject Image, Hobject *EdgeAmplitude,
            const char *FilterType, Hlong Size )

T_sobel_amp ( const Hobject Image, Hobject *EdgeAmplitude,
              const Htuple FilterType, const Htuple Size )

Detect edges (amplitude) using the Sobel operator.


sobel_amp calculates the first derivative of an image and is used as an edge detector. The filter is based on the
following filter masks:

1 2 1
A= 0 0 0
−1 −2 −1

1 0 −1
B= 2 0 −2
1 0 −1

These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)

’sum_sqrt’ √(a² + b²) / 4
’sum_abs’ (|a| + |b|)/4
’thin_sum_abs’ (thin(|a|) + thin(|b|))/4
’thin_max_abs’ max(thin(|a|), thin(|b|))/4
’x’ b/4
’y’ a/4

Here, thin(x) is equal to x for a vertical maximum (mask A) and a horizontal maximum (mask B), respectively,
and 0 otherwise. Thus, for ’thin_sum_abs’ and ’thin_max_abs’ the gradient image is thinned. For the filter types ’x’
and ’y’ if the input image is of type byte the output image is of type int1, of type int2 otherwise. For a Sobel operator
with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter sizes the input image
is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see binomial_filter) of
size Size-2. The Gaussian filter is selected for the above values of FilterType. Here, Size = 5, 7, 9, 11, or
13 must be used. The binomial filter is selected by appending ’_binomial’ to the above values of FilterType.
Here, Size can be selected between 5 and 39. Furthermore, it is possible to select different amounts of smoothing
in the column and row direction by passing two values in Size. Here, the first value of Size corresponds
to the mask width (smoothing in the column direction), while the second value corresponds to the mask height
(smoothing in the row direction) of the binomial filter. The binomial filter can only be used for images of type
byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge amplitudes are multiplied by a
factor of 2 to prevent information loss. Therefore,

sobel_amp(I,E,FilterType,S)

for Size > 3 is conceptually equivalent to

scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_amp(G,E,FilterType,3)

or to


scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_amp(G,E,FilterType,3).

For sobel_amp, special optimizations are implemented for FilterType = ’sum_abs’ that use SIMD technol-
ogy. The actual application of these special optimizations is controlled by the system parameter ’mmx_enable’
(see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal
calculations are performed using SIMD technology. Note that SIMD technology performs best on large, compact
input regions. Depending on the input region and the capabilities of the hardware the execution of sobel_amp
might even take significantly more time with SIMD technology than without.
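A minimal sketch of enabling the SIMD optimization before filtering (whether this pays off depends on the
hardware and on the input region, as noted above):

/* enable SIMD-accelerated filtering for ’sum_abs’, if supported by the hardware */
set_system("mmx_enable","true");
sobel_amp(Image,&Amp,"sum_abs",3);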
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int1 / int2 / uint2
Edge amplitude (gradient magnitude) image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Filter type.
Default Value : "sum_abs"
List of values : FilterType ∈ {"sum_abs", "thin_sum_abs", "thin_max_abs", "sum_sqrt", "x", "y",
"sum_abs_binomial", "thin_sum_abs_binomial", "thin_max_abs_binomial", "sum_sqrt_binomial",
"x_binomial", "y_binomial"}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example

read_image(&Image,"fabrik");
sobel_amp(Image,&Amp,"sum_abs",3);
threshold(Amp,&Edg,128.0,255.0);

Result
sobel_amp returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
threshold, nonmax_suppression_amp, gray_skeleton
Alternatives
frei_amp, roberts, kirsch_amp, prewitt_amp, robinson_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation

sobel_dir ( const Hobject Image, Hobject *EdgeAmplitude,
            Hobject *EdgeDirection, const char *FilterType, Hlong Size )

T_sobel_dir ( const Hobject Image, Hobject *EdgeAmplitude,
              Hobject *EdgeDirection, const Htuple FilterType, const Htuple Size )

Detect edges (amplitude and direction) using the Sobel operator.


sobel_dir calculates the first derivative of an image and is used as an edge detector. The filter is based on the
following filter masks:
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B for one particular pixel.)

’sum_sqrt’ √(a² + b²) / 4
’sum_abs’ (|a| + |b|)/4
For a Sobel operator with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see
binomial_filter) of size Size-2. The Gaussian filter is selected for the above values of FilterType.
Here, Size = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending ’_binomial’ to the
above values of FilterType. Here, Size can be selected between 5 and 39. Furthermore, it is possible to
select different amounts of smoothing in the column and row direction by passing two values in Size. Here, the
first value of Size corresponds to the mask width (smoothing in the column direction), while the second value
corresponds to the mask height (smoothing in the row direction) of the binomial filter. The binomial filter can only
be used for images of type byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge
amplitudes are multiplied by a factor of 2 to prevent information loss. Therefore,
sobel_dir(I:Amp,Dir:FilterType,S:)
for Size > 3 is conceptually equivalent to

scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_dir(G,Amp,Dir,FilterType,3:)
or to

scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_dir(G,Amp,Dir,FilterType,3:).
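In HALCON/C syntax, the Gaussian variant of this equivalence could be written roughly as follows (a sketch;
Size = 7 is only an example):

/* roughly equivalent to sobel_dir(Image,&Amp,&Dir,"sum_abs",7) */
scale_image(Image,&Scaled,2.0,0.0);
gauss_image(Scaled,&Smoothed,5); /* Size-2 = 5 */
sobel_dir(Smoothed,&Amp,&Dir,"sum_abs",3);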
The edge directions are returned in EdgeDirection, and are stored in 2-degree steps, i.e., an edge direction of x
degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore, the direction
of the change of intensity is taken into account. Let [Ex , Ey ] denote the image gradient. Then the following edge
directions are returned as r/2:

intensity increase               Ex/Ey    edge direction r
from bottom to top                0/+            0
from lower right to upper left    +/−         ]0, 90[
from right to left                +/0           90
from upper right to lower left    +/+        ]90, 180[
from top to bottom                0/−          180
from upper left to lower right    −/+       ]180, 270[
from left to right                −/0          270
from lower left to upper right    −/−       ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. EdgeDirection (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Filter type.
Default Value : "sum_abs"
List of values : FilterType ∈ {"sum_abs", "sum_sqrt", "sum_abs_binomial", "sum_sqrt_binomial"}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example

read_image(&Image,"fabrik");
sobel_dir(Image,&Amp,&Dir,"sum_abs",3);
threshold(Amp,&Edg,128.0,255.0);

Result
sobel_dir returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_dir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
nonmax_suppression_dir, hysteresis_threshold, threshold
Alternatives
edges_image, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
roberts, laplace, highpass_image, bandpass_image
Module
Foundation

3.5 Enhancement
T_adjust_mosaic_images ( const Hobject Images,
Hobject *CorrectedImages, const Htuple From, const Htuple To,
const Htuple ReferenceImage, const Htuple HomMatrices2D,
const Htuple EstimationMethod, const Htuple EstimateParameters,
const Htuple OECFModel )

Automatic color correction of panorama images.


adjust_mosaic_images performs the radiometric adjustment of images in panoramas. The images to be
corrected have to be passed in Images, the corrected images will be returned in CorrectedImages.
The parameters From and To must contain the source and destination indices of all image pairs in the panorama.
The projective 3x3-matrix of each image pair has to be passed in HomMatrices2D. The image, which will be
used as the reference for brightness and white balance, is selected with the parameter ReferenceImage.
EstimationMethod is used for choosing whether a fast but less accurate, or a slower but more accurate
determination method should be used. This is done by setting EstimationMethod either to ’standard’ or
’gold_standard’. The availability of the individual methods depends on the selected EstimateParameters,
which determines the model to be used for estimating the radiometric adjustment terms. It is always pos-
sible to determine the amount of vignetting in the images by selecting ’vignetting’. However, if selected,
EstimationMethod must be set to ’gold_standard’. For the remainder of the radiometric adjustment three
different options are available:
1. Image adjustment with the additive model. This should only be used to adjust images with very small differences
in exposure or white balance. To choose this method, EstimateParameters must be set to ’add_gray’. This
model can be selected either exclusively and only with EstimationMethod = ’standard’ or in combination
with EstimateParameters = ’vignetting’ and only with EstimationMethod = ’gold_standard’.
2. Image adjustment with the linear model. In this model, images are expected to be taken with a camera using
a linear transfer function. The adjustment terms are consequently represented as multiplication factors. To select
this model, EstimateParameters must be set to ’mult_gray’. It can be called with EstimationMethod
= ’standard’ or EstimationMethod = ’gold_standard’. A combined call with EstimateParameters =
’vignetting’ is also possible, EstimationMethod must be set to ’gold_standard’ in that case.
3. Image adjustment with the calibrated model. In this model, images are assumed to be taken with a camera using
a nonlinear transfer function. A function of the OECF class selected with OECFModel is used to approximate
the actually used OECF in the process of image acquisition. As with the linear model, the correction terms
are represented as multiplication factors. This model can be selected by choosing EstimateParameters =
[’mult_gray’,’response’] and must be called with EstimationMethod = ’gold_standard’. It is possible to
determine the amount of vignetting as well in this case by choosing EstimateParameters = ’vignetting’.
This model is similar to the linear model. However, in this case the camera may have a nonlinear response. This
means that before the gray values of the images can be multiplied by their respective correction factor, the gray
values must be backprojected to a linear response. To do so, the camera’s response must be determined. Since the
response usually does not change over an image sequence, this parameter is assumed to be constant throughout the
whole image sequence.
Any kind of function could be considered to be used as an OECF. As in the operator
radiometric_self_calibration, a polynomial fitting might be used, but for typical images in a
mosaicking application this would not work very well. The reason for this is that polynomial fitting has too
many parameters that need to be determined. Instead, only simpler types of response functions can be estimated.
Currently, only so-called Laguerre-functions are available.
The response of a Laguerre-type OECF is determined by only one parameter called Phi. In a first step, the whole
gray value spectrum (in case of 8bit images the values 0 to 255) is converted to floating point numbers in the
interval [0:1]. Then, the OECF backprojection is calculated based on this and the resulting gray values are once
again converted to the original interval.
The inverse transform of the gray values back to linear values based on a Laguerre-type OECF is described by the
following equation:

I_l = I_nl + (2/π) · arctan( Phi · sin(π · I_nl) / (1 − Phi · cos(π · I_nl)) )

with I_l the linear gray value and I_nl the (nonlinear) gray value.
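To make the backprojection concrete, the following small helper computes it for a single normalized gray value
(a sketch; the function name laguerre_backproject and the normalization to [0, 1] are assumptions made for
illustration only):

#include <math.h>

/* Backproject a normalized nonlinear gray value I_nl (in [0,1]) to a linear value
   using a Laguerre-type OECF with parameter Phi (hypothetical helper function). */
double laguerre_backproject (double i_nl, double phi)
{
  return i_nl + (2.0 / M_PI) *
         atan (phi * sin (M_PI * i_nl) / (1.0 - phi * cos (M_PI * i_nl)));
}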
The parameter OECFModel is only used if the calibrated model has been chosen. Otherwise, any input for
OECFModel will be ignored.
The parameter EstimateParameters can also be used to influence the performance and memory consumption
of the operator. With ’no_cache’ the internal caching mechanism can be disabled. This switch only has an influ-
ence if EstimationMethod is set to ’gold_standard’; otherwise it is ignored. When the internal caching is
disabled, the operator uses far less memory, but has to recompute the corresponding gray value pairs in each
iteration of the minimization algorithm. Therefore, disabling caching is only advisable if all physical
memory is used up at some point of the calculation and the operating system starts using swap space.
A second option to influence the performance is to use subsampling. When setting EstimateParameters to
’subsampling_2’, images are internally zoomed down by a factor of 2. Despite the suggested value list, not only
factors of 2 and 4 are available; any integer factor can be specified by appending it to ’subsampling_’ in
EstimateParameters. With this, the amount of image data is tremendously reduced, which leads to a much
faster computation of the internal minimization. In fact, using moderate subsampling might even lead to better
results since it also decreases the influence of slightly misaligned pixels. Although subsampling also influences
the minimization if EstimationMethod is set to ’standard’, it is mostly useful for ’gold_standard’.
Some more general remarks on using adjust_mosaic_images in applications:
• Estimation of vignetting will only work well if significant vignetting is visible in the images. Otherwise, the
operator may lead to erratic results.
• Estimation of the response is rather slow because the problem is quite complex. Therefore, it is advisable not
to determine the response in time critical applications. Apart from this, the response can only be determined
correctly if there are relatively large brightness differences between the images.
• It is not possible to correct saturation. If there are saturated areas in an image, they will remain saturated.
• adjust_mosaic_images can only be used to correct different brightness in images, which is caused by different
exposure (shutter time, aperture) or different light intensity. It cannot be used to correct brightness differences
based on inhomogeneous illumination within each image.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte
Input images.
. CorrectedImages (output_object) . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject * : byte
Output images.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
List of source images.
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
List of destination images.
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Reference image.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Projective matrices.
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm for the correction.
Default Value : "standard"
List of values : EstimationMethod ∈ {"standard", "gold_standard"}
. EstimateParameters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Parameters to be estimated.
Default Value : ["mult_gray"]
List of values : EstimateParameters ∈ {"add_gray", "mult_gray", "response", "vignetting",
"subsampling_2", "subsampling_4", "no_cache"}
. OECFModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Model of OECF to be used.
Default Value : ["laguerre"]
List of values : OECFModel ∈ {"laguerre"}
Example (Syntax: HDevelop)

* For the input data to stationary_camera_self_calibration, please


* refer to the example for stationary_camera_self_calibration.
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
adjust_mosaic_images(Images,CorrectedImages,From,To,1,HomMatrices2D,
’gold_standard’,[’mult_gray’,’response’])

Result
If the parameters are valid, the operator adjust_mosaic_images returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.


Parallelization Information
adjust_mosaic_images is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Possible Successors
gen_spherical_mosaic
References
David Hasler, Sabine Süsstrunk: Mapping colour in image stitching applications. Journal of Visual Communica-
tion and Image Representation, 15(1):65-90, 2004.
Module
Foundation

coherence_enhancing_diff ( const Hobject Image, Hobject *ImageCED,


double Sigma, double Rho, double Theta, Hlong Iterations )

T_coherence_enhancing_diff ( const Hobject Image, Hobject *ImageCED,


const Htuple Sigma, const Htuple Rho, const Htuple Theta,
const Htuple Iterations )

Perform a coherence enhancing diffusion of an image.


The operator coherence_enhancing_diff performs an anisotropic diffusion process on the input image
Image to increase the coherence of the image structures contained in Image. In particular, noncontinuous image
edges are connected by diffusion, without being smoothed perpendicular to their dominating direction. For this,
coherence_enhancing_diff uses the anisotropic diffusion equation

u_t = div( G(u) ∇u )

formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation

u_t = div( ∇u / |∇u| ) · |∇u| = curv(u) · |∇u|

on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing operator
mean_curvature_flow is a direct application of the mean curvature flow equation. The discrete diffusion
equation is solved in Iterations time steps of length Theta, so that the output image ImageCED contains
the gray value function at the time Iterations · Theta.
To detect the edge direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
While the matrix G is given by

G_MCF(u) = I − (1 / |∇u|²) · ∇u (∇u)^T ,

in the case of the operator mean_curvature_flow, where I denotes the unit matrix, G_MCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix

G_CED = g1( (λ1 − λ2)² ) · w1 (w1)^T + g2( (λ1 − λ2)² ) · w2 (w2)^T


 

is constructed from the eigenvalues λ1 , λ2 and eigenvectors w1 , w2 of the resulting intermediate matrix, where the
functions


g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)

were determined empirically and taken from the publication of Weickert.


Hence, the diffusion direction in mean_curvature_flow is only determined by the local direction of the gray
value gradient, while G_CED considers the macroscopic structure of the image objects on the scale Rho and the
magnitude of the diffusion in coherence_enhancing_diff depends on how well this structure is defined.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. ImageCED (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing for diffusion coefficients.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 1.0, 3.0, 5.0, 10.0, 30.0}
Restriction : Rho ≥ 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
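Example

The following minimal C sketch is not part of the original manual; the image file name and the display call are
assumed for illustration. It applies the operator with its default parameter values:

read_image(&Image,"fabrik");
coherence_enhancing_diff(Image,&ImageCED,0.5,3.0,0.5,10);
disp_image(ImageCED,WindowHandle);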
Parallelization Information
coherence_enhancing_diff is reentrant and automatically parallelized (on tuple level).
References
J. Weickert, V. Hlavac, R. Sara; “Multiscale texture enhancement”; Computer analysis of images and patterns,
Lecture Notes in Computer Science, Vol. 970, pp. 230-237; Springer, Berlin; 1995.
J. Weickert, B. ter Haar Romeny, L. Florack, J. Koenderink, M. Viergever; “A review of nonlinear diffusion
filtering”; Scale-Space Theory in Computer Vision, Lecture Notes in Comp. Science, Vol. 1252, pp. 3-28;
Springer, Berlin; 1997.
Module
Foundation

emphasize ( const Hobject Image, Hobject *ImageEmphasize,


Hlong MaskWidth, Hlong MaskHeight, double Factor )

T_emphasize ( const Hobject Image, Hobject *ImageEmphasize,


const Htuple MaskWidth, const Htuple MaskHeight, const Htuple Factor )

Enhance contrast of the image.


The operator emphasize emphasizes high frequency areas of the image (edges and corners). The resulting
image appears sharper.


First the procedure carries out a filtering with the low pass filter (mean_image). The resulting gray values (res) are
calculated from the obtained gray values (mean) and the original gray values (orig) as follows:

res := round((orig − mean) ∗ Factor) + orig

Factor serves as a measure of the increase in contrast. The cutoff frequency is determined by the size of the
filter matrix: the larger the matrix, the lower the cutoff frequency.
As an edge treatment the gray values are mirrored at the edges of the image. Overflow and/or underflow of gray
values is clipped.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Image to be enhanced.
. ImageEmphasize (output_object) . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Contrast-enhanced image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of low pass mask.
Default Value : 7
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 15, 21, 25, 31, 39}
Typical range of values : 3 ≤ MaskWidth ≤ 201
Minimum Increment : 2
Recommended Increment : 2
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the low pass mask.
Default Value : 7
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 15, 21, 25, 31, 39}
Typical range of values : 3 ≤ MaskHeight ≤ 201
Minimum Increment : 2
Recommended Increment : 2
. Factor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Intensity of contrast emphasis.
Default Value : 1.0
Suggested values : Factor ∈ {0.3, 0.5, 0.7, 1.0, 1.4, 1.8, 2.0}
Typical range of values : 0.0 ≤ Factor ≤ 20.0 (sqrt)
Minimum Increment : 0.01
Recommended Increment : 0.2
Restriction : (0 < Factor) ∧ (Factor < 20)
Example

read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
draw_region(&Region,WindowHandle);
reduce_domain(Image,Region,&Mask);
emphasize(Mask,&Sharp,7,7,2.0);
disp_image(Sharp,WindowHandle);

Result
If the parameter values are correct, the operator emphasize returns the value H_MSG_TRUE. The behavior
in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
emphasize is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, sub_image, laplace, add_image


See also
mean_image, highpass_image
Module
Foundation

equ_histo_image ( const Hobject Image, Hobject *ImageEquHisto )


T_equ_histo_image ( const Hobject Image, Hobject *ImageEquHisto )

Histogram linearisation of images.


The operator equ_histo_image enhances the contrast. The starting point is the histogram of the input images.
The following simple gray value transformation f(g) is carried out for byte images:

f(g) = 255 · Σ_{x=0}^{g} h(x)

h(x) describes the relative frequency of the occurrence of the gray value x. For uint2 images, the only difference
is that the value 255 is replaced with a different maximum value. The maximum value is computed from the
number of significant bits stored with the input image, provided that this value is set. If not, the value of the system
parameter ’int2_bits’ is used (see set_system), if this value is set (i.e., different from -1). If none of the two
values is set, the number of significant bits is set to 16.
This transformation linearises the cumulative histogram. Maxima in the original histogram are "spread" and
thus the contrast in image regions with these frequently occurring gray values is increased. Supposedly
homogeneous regions receive more easily visible structures. On the other hand, of course, the noise in the image
increases correspondingly. Minima in the original histogram are, dually, "compressed". The transformed
histogram contains gaps, but the remaining gray values occur at approximately the same frequency ("histogram
equalization").
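For byte images, this transformation amounts to a simple look-up table. A minimal C sketch (not the operator’s
actual implementation; the function name and the histogram layout are assumed) could look as follows:

/* Build the byte LUT f(g) = 255 * sum_{x=0..g} h(x) from an absolute
   histogram hist[256] of an image with num_pixels pixels.            */
void equ_histo_lut (const long hist[256], long num_pixels,
                    unsigned char lut[256])
{
  double cum = 0.0;
  int    g;
  for (g = 0; g < 256; g++)
  {
    cum   += (double) hist[g] / (double) num_pixels; /* relative frequency h(x) */
    lut[g] = (unsigned char) (255.0 * cum + 0.5);
  }
}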
Attention
The operator equ_histo_image primarily serves for optical processing of images for a human viewer. For
example, the (local) contrast spreading can lead to a detection of fictitious edges.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Image to be enhanced.
. ImageEquHisto (output_object) . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Image with linearized gray values.
Parallelization Information
equ_histo_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
disp_image
Alternatives
scale_image, scale_image_max, illuminate
See also
scale_image
References
R.C. Gonzales, P. Wintz: "Digital Image Processing"; Second edition; Addison Wesley; 1987.
Module
Foundation


illuminate ( const Hobject Image, Hobject *ImageIlluminate,


Hlong MaskWidth, Hlong MaskHeight, double Factor )

T_illuminate ( const Hobject Image, Hobject *ImageIlluminate,


const Htuple MaskWidth, const Htuple MaskHeight, const Htuple Factor )

Illuminate image.
The operator illuminate enhances contrast. Very dark parts of the image are "illuminated" more strongly,
very light ones are "darkened". Let orig be the original gray value and mean the corresponding gray value of the
low pass filtered image, computed via the operator mean_image with filter size MaskWidth x MaskHeight.
For byte images val equals 127, for int2 and uint2 images val equals the median value. The resulting gray
value new is:

new := round((val − mean) ∗ Factor + orig)

The low pass should have rather large dimensions (30 x 30 to 200 x 200). Reasonable parameter combinations
might be:

MaskHeight   MaskWidth   Factor
 40           40          0.55
 100          100         0.7
 150          150         0.8

i.e. the larger the low pass mask is chosen, the larger Factor should be as well.
The following "spotlight effect" should be noted: if, for example, a dark object is in front of a light wall, the
object as well as the wall, which is already light in the immediate proximity of the object contours, are lightened
by the operator illuminate. This corresponds roughly to the effect that is produced when the object is
illuminated by a strong spotlight. The same applies to light objects in front of a darker background. In this case,
however, the fictitious "spotlight" darkens the objects.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Image to be enhanced.
. ImageIlluminate (output_object) . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
"‘Illuminated"’ image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of low pass mask.
Default Value : 101
Suggested values : MaskWidth ∈ {31, 41, 51, 71, 101, 121, 151, 201}
Typical range of values : 3 ≤ MaskWidth ≤ 299
Minimum Increment : 2
Recommended Increment : 10
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of low pass mask.
Default Value : 101
Suggested values : MaskHeight ∈ {31, 41, 51, 71, 101, 121, 151, 201}
Typical range of values : 3 ≤ MaskHeight ≤ 299
Minimum Increment : 2
Recommended Increment : 10
. Factor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Scales the "correction gray value" added to the original gray values.
Default Value : 0.7
Suggested values : Factor ∈ {0.3, 0.5, 0.7, 1.0, 1.5, 2.0, 3.0, 5.0}
Typical range of values : 0.0 ≤ Factor ≤ 5.0
Minimum Increment : 0.01
Recommended Increment : 0.2
Restriction : (0 < Factor) ∧ (Factor < 5)


Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
illuminate(Image,&Better,40,40,0.55);
disp_image(Better,WindowHandle);

Result
If the parameter values are correct, the operator illuminate returns the value H_MSG_TRUE. The behavior
in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
illuminate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
scale_image_max, equ_histo_image, mean_image, sub_image
See also
emphasize, gray_histo
Module
Foundation

mean_curvature_flow ( const Hobject Image, Hobject *ImageMCF,


double Sigma, double Theta, Hlong Iterations )

T_mean_curvature_flow ( const Hobject Image, Hobject *ImageMCF,


const Htuple Sigma, const Htuple Theta, const Htuple Iterations )

Apply the mean curvature flow to an image.


The operator mean_curvature_flow applies the mean curvature flow or intrinsic heat equation

u_t = div( ∇u / |∇u| ) · |∇u| = curv(u) · |∇u|

to the gray value function u defined by the input image Image at a time t0 = 0. The discretized equation is solved
in Iterations time steps of length Theta, so that the output image contains the gray value function at the time
Iterations · Theta.
The mean curvature flow causes a smoothing of Image in the direction of the edges in the image, i.e. along the
contour lines of u, while perpendicular to the edge direction no smoothing is performed and hence the boundaries
of image objects are not smoothed. To detect the image direction more robustly, in particular on noisy input data,
an additional isotropic smoothing step can precede the computation of the gray value gradients. The parameter
Sigma determines the magnitude of the smoothing by means of the standard deviation of a corresponding Gaussian
convolution kernel, as used in the operator isotropic_diffusion for isotropic image smoothing.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. ImageMCF (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing parameter for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0


. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
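Example

A minimal C usage sketch, not from the original manual; the image name and the display call are assumed. The
operator is applied with its default parameter values:

read_image(&Image,"fabrik");
mean_curvature_flow(Image,&ImageMCF,0.5,0.5,10);
disp_image(ImageMCF,WindowHandle);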
Parallelization Information
mean_curvature_flow is reentrant and automatically parallelized (on tuple level).
References
M. G. Crandall, P. Lions; “Convergent Difference Schemes for Nonlinear Parabolic Equations and Mean Curvature
Motion”; Numer. Math. 75 pp. 17-41; 1996.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation

scale_image_max ( const Hobject Image, Hobject *ImageScaleMax )


T_scale_image_max ( const Hobject Image, Hobject *ImageScaleMax )

Maximum gray value spreading in the value range 0 to 255.


The operator scale_image_max calculates the minimum and maximum gray value and scales the image to the
maximum value range of a byte image. This way the dynamic range (value range) is fully exploited. The number
of different gray values does not change, but in general the visual impression is enhanced. The gray values of
images of type real, int2, uint2, and int4 are scaled to the range 0 to 255 and returned as byte images.
Attention
The output always is an image of the type byte.
Parameter

. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real


Image to be scaled.
. ImageScaleMax (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Contrast-enhanced image.
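Example

A minimal C usage sketch, not from the original manual; the image name and the display call are assumed:

read_image(&Image,"fabrik");
scale_image_max(Image,&ImageScaleMax);
disp_image(ImageScaleMax,WindowHandle);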
Parallelization Information
scale_image_max is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
disp_image
Alternatives
equ_histo_image, scale_image, illuminate, convert_image_type
See also
min_max_gray, gray_histo
Module
Foundation


shock_filter ( const Hobject Image, Hobject *SharpenedImage,


double Theta, Hlong Iterations, const char *Mode, double Sigma )

T_shock_filter ( const Hobject Image, Hobject *SharpenedImage,


const Htuple Theta, const Htuple Iterations, const Htuple Mode,
const Htuple Sigma )

Apply a shock filter to an image.


The operator shock_filter applies a shock filter to the input image Image to sharpen the edges contained in
it. The principle of the shock filter is based on the transport of the gray values of the image towards an edge from
both sides through dilation and erosion and satisfies the differential equation

u_t = s · |∇u|

on the function u defined by the gray values in Image at a time t0 = 0. The discretized equation is solved in
Iterations time steps of length Theta, so that the output image SharpenedImage contains the gray value
function at the time Iterations · Theta.
The decision between dilation and erosion is made using the sign function s ∈ {−1, 0, +1} on a conventional edge
detector. The detector of Canny
 
s = −sgn( D²u( ∇u/|∇u| , ∇u/|∇u| ) )

is available with Mode = ’canny’, and the detector of Marr/Hildreth (the Laplace operator)

s = −sgn(∆u)

can be selected by Mode = ’laplace’.


To make the edge detection more robust, in particular on noisy images, it can be performed on a smoothed image
matrix. This is done by giving the standard deviation of a Gaussian kernel for convolution with the image matrix
in the parameter Sigma.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. SharpenedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7}
Restriction : (0 < Theta) ≤ 0.7
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100}
Restriction : Iterations ≥ 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of edge detector.
Default Value : "canny"
List of values : Mode ∈ {"laplace", "canny"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing of edge detector.
Default Value : 1.0
Suggested values : Sigma ∈ {0.0, 0.5, 1.0, 2.0, 5.0}
Restriction : Sigma ≥ 0
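Example

A minimal C usage sketch, not from the original manual; the image name and the display call are assumed. The
Canny-type edge detector is used:

read_image(&Image,"fabrik");
shock_filter(Image,&SharpenedImage,0.5,10,"canny",1.0);
disp_image(SharpenedImage,WindowHandle);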


Parallelization Information
shock_filter is reentrant and automatically parallelized (on tuple level).
References
F. Guichard, J. Morel; “A Note on Two Classical Shock Filters and Their Asymptotics”; Michael Kerckhove (Ed.):
Scale-Space and Morphology in Computer Vision, LNCS 2106, pp. 75-84; Springer, New York; 2001.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation

3.6 FFT
convol_fft ( const Hobject ImageFFT, const Hobject ImageFilter,
Hobject *ImageConvol )

T_convol_fft ( const Hobject ImageFFT, const Hobject ImageFilter,


Hobject *ImageConvol )

Convolve an image with a filter in the frequency domain.


convol_fft convolves two (Fourier-transformed) images in the frequency domain, i.e., the pixels of the complex
image ImageFFT are multiplied by the corresponding pixels of the filter ImageFilter.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
. ImageFFT (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Complex input image.
. ImageFilter (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real / complex
Filter in frequency domain.
. ImageConvol (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Result of applying the filter.
Example (Syntax: HDevelop)

gen_highpass(Highpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Highpass:ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)

Result
convol_fft returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
convol_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, rft_generic, gen_highpass, gen_lowpass, gen_bandpass,
gen_bandfilter
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic, rft_generic
Alternatives
convol_gabor
See also
gen_gabor, gen_highpass, gen_lowpass, gen_bandpass, convol_gabor, fft_image_inv
Module
Foundation


convol_gabor ( const Hobject ImageFFT, const Hobject GaborFilter,


Hobject *ImageResultGabor, Hobject *ImageResultHilbert )

T_convol_gabor ( const Hobject ImageFFT, const Hobject GaborFilter,


Hobject *ImageResultGabor, Hobject *ImageResultHilbert )

Convolve an image with a Gabor filter in the frequency domain.


convol_gabor convolves a Fourier-transformed image with a Gabor filter GaborFilter (see gen_gabor)
and its Hilbert transform in the frequency domain. The result image is of type ’complex’.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
. ImageFFT (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image.
. GaborFilter (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject : real
Gabor/Hilbert-Filter.
. ImageResultGabor (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Result of the Gabor filter.
. ImageResultHilbert (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Result of the Hilbert filter.
Example (Syntax: HDevelop)

gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)

Result
convol_gabor returns H_MSG_TRUE if all images are of correct type. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
convol_gabor is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, gen_gabor
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic
Alternatives
convol_fft
See also
convol_image
Module
Foundation

correlation_fft ( const Hobject ImageFFT1, const Hobject ImageFFT2,


Hobject *ImageCorrelation )

T_correlation_fft ( const Hobject ImageFFT1, const Hobject ImageFFT2,


Hobject *ImageCorrelation )

Compute the correlation of two images in the frequency domain.


correlation_fft calculates the correlation of the Fourier-transformed input images in the frequency do-
main. The correlation is calculated by multiplying ImageFFT1 with the complex conjugate of ImageFFT2.
It should be noted that in order to achieve a correct scaling of the correlation in the spatial domain, the oper-
ators fft_generic or rft_generic with Norm = ’none’ must be used for the forward transform and
fft_generic or rft_generic with Norm = ’n’ for the reverse transform. If ImageFFT1 and ImageFFT2
contain the same number of images, the corresponding images are correlated pairwise. Otherwise, ImageFFT2
must contain only one single image. In this case, the correlation is performed for each image of ImageFFT1 with
ImageFFT2 .
Attention
The filtering is always performed on the entire image, i.e., the domain of the image is ignored.
Parameter

. ImageFFT1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex


Fourier-transformed input image 1.
. ImageFFT2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Fourier-transformed input image 2.
Number of elements : (ImageFFT2 = ImageFFT1) ∨ (ImageFFT2 = 1)
. ImageCorrelation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Correlation of the input images in the frequency domain.
Example (Syntax: HDevelop)

/* Compute the auto-correlation of an image. */


get_image_pointer1(Image,Pointer,Type,Width,Height)
rft_generic(Image,ImageFFT,’to_freq’,’none’,’complex’,Width)
correlation_fft(ImageFFT,ImageFFT:Correlation)
rft_generic(Correlation,AutoCorrelation,’from_freq’,’n’,’real’,Width)

Result
correlation_fft returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
correlation_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_generic, fft_image, rft_generic
Possible Successors
fft_generic, fft_image_inv, rft_generic
Module
Foundation

energy_gabor ( const Hobject ImageGabor, const Hobject ImageHilbert,


Hobject *Energy )

T_energy_gabor ( const Hobject ImageGabor, const Hobject ImageHilbert,


Hobject *Energy )

Calculate the energy of a two-channel image.


energy_gabor calculates the local contrast (Energy) of the two input images. The energy of the resulting
image is then defined as

Energy = channel1² + channel2².

Often the calculation of the energy is preceded by the convolution of an image with a Gabor filter and the Hilbert
transform of the Gabor filter (see convol_gabor). In this case, the first channel of the image passed to
energy_gabor is the Gabor-filtered image, transformed back into the spatial domain (see fft_image_inv),


and the second channel the result of the convolution with the Hilbert transform, also transformed back into the
spatial domain. The local energy is a measure for the local contrast of structures (e.g., edges and lines) in the
image.
Parameter
. ImageGabor (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
1st channel of input image (usually: Gabor image).
. ImageHilbert (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
2nd channel of input image (usually: Hilbert image).
. Energy (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Image containing the local energy.
Example

fft_image(Image,&FFT);
gen_gabor(&Filter,1.4,0.4,1.0,1.5,"none","dc_center",512,512);
convol_gabor(FFT,Filter,&Gabor,&Hilbert);
fft_image_inv(Gabor,&GaborInv);
fft_image_inv(Hilbert,&HilbertInv);
energy_gabor(GaborInv,HilbertInv,&Energy);

Result
energy_gabor returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
energy_gabor is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_gabor, convol_gabor, fft_image_inv
Module
Foundation

fft_generic ( const Hobject Image, Hobject *ImageFFT,


const char *Direction, Hlong Exponent, const char *Norm,
const char *Mode, const char *ResultType )

T_fft_generic ( const Hobject Image, Hobject *ImageFFT,


const Htuple Direction, const Htuple Exponent, const Htuple Norm,
const Htuple Mode, const Htuple ResultType )

Compute the fast Fourier transform of an image.


fft_generic computes the fast Fourier transform of the input image Image. Because several definitions of the
forward and reverse transforms exist in the literature, this operator allows the user to select the most convenient
definition.
The general definition of a Fourier transform is as follows:

F(m, n) = (1/c) · Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} e^{s · 2πi (km/M + ln/N)} · f(k, l)

Opinions vary on whether the sign s in the exponent should be set to 1 or -1 for the forward transform, i.e., the
transform for going to the frequency domain. There is also disagreement on the magnitude of the normalizing
factor c. This is sometimes set to 1 for the forward transform, sometimes to M·N, and sometimes (in the case of
the unitary FFT) to √(M·N). Especially in image processing applications the DC term is shifted to the center of the
image.
fft_generic allows these choices to be selected individually. The parameter Direction selects the
logical direction of the FFT. (This parameter is not redundant; it is needed to determine how to shift the image if


the DC term should rest in the center of the image.) Possible values are ’to_freq’ and ’from_freq’. The parameter
Exponent is used to determine the sign of the exponent. It can be set to 1 or -1. The normalizing factor can be
set with Norm, and can take on the values ’none’, ’sqrt’ and ’n’. The parameter Mode determines the location of
the DC term of the FFT. It can be set to ’dc_center’ or ’dc_edge’.
In any case, the user must ensure the consistent use of the parameters. This means that the normalizing factors
used for the forward and backward transform must yield M N when multiplied, the exponents must be of opposite
sign, and Mode must be equal for both transforms.
A consistent combination is, for example (’to_freq’,-1,’n’,’dc_edge’) for the forward transform and
(’from_freq’,1,’none’,’dc_edge’) for the reverse transform. In this case, the FFT can be interpreted as interpo-
lation with trigonometric basis functions. Another possible combination is (’to_freq’,-1,’sqrt’,’dc_center’) and
(’from_freq’,1,’sqrt’,’dc_center’).
The parameter ResultType can be used to specify the result image type of the reverse transform (Direction
= ’from_freq’). In the forward transform (Direction = ’to_freq’), ResultType must be set to ’complex’.
Parameter

. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Input image.
. ImageFFT (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex
Fourier-transformed image.
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Calculate forward or reverse transform.
Default Value : "to_freq"
List of values : Direction ∈ {"to_freq", "from_freq"}
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Sign of the exponent.
Default Value : -1
List of values : Exponent ∈ {-1, 1}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the transform.
Default Value : "sqrt"
List of values : Norm ∈ {"none", "sqrt", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge"}
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Image type of the output image.
Default Value : "complex"
List of values : ResultType ∈ {"complex", "byte", "int1", "int2", "uint2", "int4", "real", "direction",
"cyclic"}
Example

/* simulation of fft_image */
void my_fft(Hobject In, Hobject *Out)
{
  fft_generic(In,Out,"to_freq",-1,"sqrt","dc_center","complex");
}

/* simulation of fft_image_inv */
void my_fft_image_inv(Hobject In, Hobject *Out)
{
  fft_generic(In,Out,"from_freq",1,"sqrt","dc_center","byte");
}


Result
fft_generic returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
fft_generic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optimize_fft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convol_gabor, convert_image_type, power_byte, power_real, power_ln,
phase_deg, phase_rad, energy_gabor
Alternatives
fft_image, fft_image_inv, rft_generic
Module
Foundation

fft_image ( const Hobject Image, Hobject *ImageFFT )


T_fft_image ( const Hobject Image, Hobject *ImageFFT )

Compute the fast Fourier transform of an image.


fft_image calculates the Fourier transform of the input image (Image), i.e., it transforms the image into the
frequency domain. The algorithm used is the fast Fourier transform. This corresponds to the call

fft_generic(Image,ImageFFT,’to_freq’,-1,’sqrt’,’dc_center’,’complex’)

.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real


Input image.
. ImageFFT (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Fourier-transformed image.
Result
fft_image returns H_MSG_TRUE if the input image is of correct type. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
fft_image is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optimize_fft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convol_gabor, convert_image_type, power_byte, power_real, power_ln,
phase_deg, phase_rad
Alternatives
fft_generic, rft_generic
See also
fft_image_inv
Module
Foundation


fft_image_inv ( const Hobject Image, Hobject *ImageFFTInv )


T_fft_image_inv ( const Hobject Image, Hobject *ImageFFTInv )

Compute the inverse fast Fourier transform of an image.


fft_image_inv calculates the inverse Fourier transform of the input image (Image), i.e., it transforms the
image back into the spatial domain. This corresponds to the call

fft_generic(Image,ImageFFTInv,’from_freq’,1,’sqrt’,’dc_center’,’byte’)

.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex


Input image.
. ImageFFTInv (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Inverse-Fourier-transformed image.
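Example

A minimal C round-trip sketch, not from the original manual; the image name is assumed. The image is
transformed to the frequency domain with fft_image and back into the spatial domain with fft_image_inv:

read_image(&Image,"fabrik");
fft_image(Image,&ImageFFT);
fft_image_inv(ImageFFT,&ImageFFTInv);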
Result
fft_image_inv returns H_MSG_TRUE if the input image is of correct type. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
fft_image_inv is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
convol_fft, convol_gabor, fft_image, optimize_fft_speed,
read_fft_optimization_data
Possible Successors
convert_image_type, energy_gabor
Alternatives
fft_generic, rft_generic
See also
fft_image, fft_generic, energy_gabor
Module
Foundation

gen_bandfilter ( Hobject *ImageFilter, double MinFrequency,


double MaxFrequency, const char *Norm, const char *Mode, Hlong Width,
Hlong Height )

T_gen_bandfilter ( Hobject *ImageFilter, const Htuple MinFrequency,


const Htuple MaxFrequency, const Htuple Norm, const Htuple Mode,
const Htuple Width, const Htuple Height )

Generate an ideal band filter.


gen_bandfilter generates an ideal band filter in the frequency domain. The parameters MinFrequency
and MaxFrequency determine the cutoff frequencies of the filter as a fraction of the maximum (horizontal and
vertical) frequency that can be represented in an image of size Width × Height, i.e., MinFrequency and
MaxFrequency should lie between 0 and 1. To achieve a maximum efficiency of the filtering operation, the
parameter Norm can be used to specify the normalization factor of the filter. If fft_generic and Norm = ’n’
is used the normalization in the FFT can be avoided. Mode can be used to determine where the DC term of the
filter lies or whether the filter should be used in the real-valued FFT. If fft_generic is used, ’dc_edge’ can be
used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode


= ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used. The resulting image contains
an annulus with the value 0, and a value determined by the normalization outside of this annulus.
Parameter

. ImageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real


Band filter in the frequency domain.
. MinFrequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Minimum frequency.
Default Value : 0.1
Suggested values : MinFrequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : MinFrequency ≥ 0
. MaxFrequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum frequency.
Default Value : 0.2
Suggested values : MaxFrequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (MaxFrequency ≥ 0) ∧ (MaxFrequency ≥ MinFrequency)
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

/* Filtering with maximum efficiency with fft_generic. */


gen_bandfilter(Bandfilter,0.2,0.4,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Bandfilter:ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)

Result
gen_bandfilter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_bandfilter is reentrant and processed without parallelization.
Possible Successors
convol_fft
Alternatives
gen_circle, paint_region
See also
gen_highpass, gen_lowpass, gen_bandpass, gen_gauss_filter,
gen_derivative_filter
Module
Foundation


gen_bandpass ( Hobject *ImageBandpass, double MinFrequency,


double MaxFrequency, const char *Norm, const char *Mode, Hlong Width,
Hlong Height )

T_gen_bandpass ( Hobject *ImageBandpass, const Htuple MinFrequency,


const Htuple MaxFrequency, const Htuple Norm, const Htuple Mode,
const Htuple Width, const Htuple Height )

Generate an ideal bandpass filter.


gen_bandpass generates an ideal bandpass filter in the frequency domain. The parameters MinFrequency
and MaxFrequency determine the cutoff frequencies of the filter as a fraction of the maximum (horizontal and
vertical) frequency that can be represented in an image of size Width × Height, i.e., MinFrequency and
MaxFrequency should lie between 0 and 1. To achieve a maximum efficiency of the filtering operation, the
parameter Norm can be used to specify the normalization factor of the filter. If fft_generic and Norm = ’n’
is used the normalization in the FFT can be avoided. Mode can be used to determine where the DC term of the
filter lies or whether the filter should be used in the real-valued FFT. If fft_generic is used, ’dc_edge’ can be
used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode
= ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used. The resulting image contains
an annulus with the value 255, and the value 0 outside of this annulus.
Parameter

. ImageBandpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real


Bandpass filter in the frequency domain.
. MinFrequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Minimum frequency.
Default Value : 0.1
Suggested values : MinFrequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : MinFrequency ≥ 0
. MaxFrequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum frequency.
Default Value : 0.2
Suggested values : MaxFrequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (MaxFrequency ≥ 0) ∧ (MaxFrequency ≥ MinFrequency)
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

/* Filtering with maximum efficiency with fft_generic. */


gen_bandpass(Bandpass,0.2,0.4,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Bandpass:ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)


Result
gen_bandpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_bandpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_lowpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation

gen_derivative_filter ( Hobject *ImageDerivative,


const char *Derivative, Hlong Exponent, const char *Norm,
const char *Mode, Hlong Width, Hlong Height )

T_gen_derivative_filter ( Hobject *ImageDerivative,


const Htuple Derivative, const Htuple Exponent, const Htuple Norm,
const Htuple Mode, const Htuple Width, const Htuple Height )

Generate a derivative filter in the frequency domain.


gen_derivative_filter generates a derivative filter in the frequency domain. The derivative to be com-
puted is determined by Derivative. Exponent specifies the exponent used in the reverse transform. It must
be set to the same value that is used in fft_generic. If fft_image_inv is used in the reverse trans-
form, Exponent = 1 must be used. However, since the derivative image typically contains negative values,
fft_generic should always be used for the reverse transform. To achieve a maximum efficiency of the filtering
operation, the parameter Norm can be used to specify the normalization factor of the filter. If fft_generic
and Norm = ’n’ is used the normalization in the FFT can be avoided. Mode can be used to determine where the
DC term of the filter lies or whether the filter should be used in the real-valued FFT. If fft_generic is used,
’dc_edge’ can be used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm =
’none’ and Mode = ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used.
Parameter
. ImageDerivative (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : complex
Derivative filter as image in the frequency domain.
. Derivative (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Derivative to be computed.
Default Value : "x"
Suggested values : Derivative ∈ {"x", "y", "xx", "xy", "yy", "xxx", "xxy", "xyy", "yyy"}
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Exponent used in the reverse transform.
Default Value : 1
Suggested values : Exponent ∈ {-1, 1}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

/* Generate a smoothed derivative filter. */


gen_gauss_filter (ImageGauss, Sigma, Sigma, 0, ’n’, ’dc_edge’, 512, 512)
convert_image_type (ImageGauss, ImageGaussComplex, ’complex’)
gen_derivative_filter (ImageDerivX, ’x’, 1, ’none’, ’dc_edge’, 512, 512)
mult_image (ImageGaussComplex, ImageDerivX, ImageDerivXGauss, 1, 0)
/* Filter an image with the smoothed derivative filter. */
fft_generic (Image, ImageFFT, ’to_freq’, -1, ’none’, ’dc_edge’, ’complex’)
convol_fft (ImageFFT, ImageDerivXGauss, Filtered)
fft_generic (Filtered, ImageX, ’from_freq’, 1, ’none’, ’dc_edge’, ’real’)

Result
gen_derivative_filter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception
handling is raised.
Parallelization Information
gen_derivative_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_gauss_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation

gen_filter_mask ( Hobject *ImageFilter, const char *FilterMask,


double Scale, Hlong Width, Hlong Height )

T_gen_filter_mask ( Hobject *ImageFilter, const Htuple FilterMask,


const Htuple Scale, const Htuple Width, const Htuple Height )

Store a filter mask in the spatial domain as a real-image.


gen_filter_mask stores a filter mask in the spatial domain as a real-image. The center of the filter mask lies in
the center of the resulting image. The parameter Scale determines by which amount the values of the filter mask
are multiplied (this results in larger values of the Fourier transform of the filter). The corresponding filter matrix,
which is given in FilterMask, can be specified either as a file name or as a tuple. The format of the filter matrix
is described with the operator convol_image. Example filter masks can be found in the directory “filter” in
the HALCON home directory. This operator is useful for visualizing the frequency response of filter masks (by
applying a Fourier transform to the result image of this operator).
Parameter

. ImageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real


Filter in the spatial domain.
. FilterMask (input_control) . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char * / Hlong
Filter mask as file name or tuple.
Default Value : "gauss"
Suggested values : FilterMask ∈ {"gauss", "laplace4", "laplace8", "lowpas_3_3", "lowpas_5_5",
"lowpas_7_7", "lowpas_9_9", "sobel_c", "sobel_l"}


. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double


Scaling factor.
Default Value : 1.0
Suggested values : Scale ∈ {0.3, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0}
Typical range of values : 0.001 ≤ Scale ≤ 10.0
Minimum Increment : 0.001
Recommended Increment : 0.1
Restriction : Scale > 0.0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

* If the filter should be read from a file:


gen_filter_mask (Filter, ’lowpas_3_3’, 1.0, 512, 512)
* If the filter should be directly passed as a tuple:
gen_filter_mask (Filter, [3,3,9,1,1,1,1,1,1,1,1,1], 1.0, 512, 512)
fft_image (Filter, FilterFFT)
set_paint (WindowHandle, ’3D-plot_hidden’)
disp_image (FilterFFT, WindowHandle)

Parallelization Information
gen_filter_mask is reentrant and processed without parallelization.
Possible Successors
fft_image, fft_generic
See also
convol_image
Module
Foundation

gen_gabor ( Hobject *ImageFilter, double Angle, double Frequency,
double Bandwidth, double Orientation, const char *Norm,
const char *Mode, Hlong Width, Hlong Height )

T_gen_gabor ( Hobject *ImageFilter, const Htuple Angle,
const Htuple Frequency, const Htuple Bandwidth,
const Htuple Orientation, const Htuple Norm, const Htuple Mode,
const Htuple Width, const Htuple Height )

Generate a Gabor filter.


gen_gabor generates a Gabor filter with a user-definable bandpass frequency range and sign for the Hilbert
transform. This is done by calculating a symmetrical filter in the frequency domain, which can be adapted by the
parameters Angle, Frequency, Bandwidth, and Orientation such that a certain frequency band and a
certain direction range in the spatial domain are filtered out in the frequency domain.
The parameters Frequency (central frequency = distance from the DC term) and Orientation (direction)
determine the center of the filter. Larger values of Frequency result in higher frequencies being passed. A value
of 0 for Orientation generates a horizontally oriented “crescent” (the bulge of the crescent points upward).
Higher values of Orientation result in the counterclockwise rotation of the crescent.
The parameters Angle and Bandwidth are used to determine the range of frequencies and angles being passed
by the filter. The larger Angle is, the smaller the range of angles passed by the filter gets (because the “crescent”
gets narrower). The larger Bandwidth is, the smaller the frequency band being passed gets (because the “crescent”
gets thinner).
To achieve a maximum efficiency of the filtering operation, the parameter Norm can be used to specify the normalization factor of the filter. If fft_generic is used with Norm = ’n’, the normalization in the FFT can be avoided.
Mode can be used to determine where the DC term of the filter lies. If fft_generic is used, ’dc_edge’ can be
used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode
= ’dc_center’ must be used. Note that gen_gabor cannot create a filter that can be used with rft_generic.
The resulting image is a two-channel real-image, containing the Gabor filter in the first channel and the corre-
sponding Hilbert filter in the second channel.
Parameter

. ImageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject * : real


Gabor and Hilbert filter.
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Angle range, inversely proportional to the range of orientations.
Default Value : 1.4
Suggested values : Angle ∈ {1.0, 1.2, 1.4, 1.6, 2.0, 2.5, 3.0, 5.0, 6.0, 10.0, 20.0, 30.0, 50.0, 70.0, 100.0}
Typical range of values : 1.0 ≤ Angle ≤ 500.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Distance of the center of the filter to the DC term.
Default Value : 0.4
Suggested values : Frequency ∈ {0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.50, 0.55, 0.60, 0.65,
0.699}
Typical range of values : 0.0 ≤ Frequency ≤ 0.7
Minimum Increment : 0.00001
Recommended Increment : 0.005
. Bandwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Bandwidth range, inversely proportional to the range of frequencies being passed.
Default Value : 1.0
Suggested values : Bandwidth ∈ {0.1, 0.3, 0.7, 1.0, 1.5, 2.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0, 50.0}
Typical range of values : 0.05 ≤ Bandwidth ≤ 100.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Orientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Angle of the principal orientation.
Default Value : 1.5
Suggested values : Orientation ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0,
3.14}
Typical range of values : 0.0 ≤ Orientation ≤ 3.1416
Minimum Increment : 0.0001
Recommended Increment : 0.05
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)

Result
gen_gabor returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_gabor is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic
Possible Successors
convol_gabor
Alternatives
gen_bandpass, gen_bandfilter, gen_highpass, gen_lowpass
See also
fft_image_inv, energy_gabor
Module
Foundation

gen_gauss_filter ( Hobject *ImageGauss, double Sigma1, double Sigma2,
double Phi, const char *Norm, const char *Mode, Hlong Width,
Hlong Height )

T_gen_gauss_filter ( Hobject *ImageGauss, const Htuple Sigma1,
const Htuple Sigma2, const Htuple Phi, const Htuple Norm,
const Htuple Mode, const Htuple Width, const Htuple Height )

Generate a Gaussian filter in the frequency domain.


gen_gauss_filter generates a (possibly anisotropic) Gaussian filter in the frequency domain. The standard
deviations (i.e., the amount of smoothing) of the Gaussian in the spatial domain are determined by Sigma1 and
Sigma2. Sigma1 is the standard deviation in the principal direction of the filter in the spatial domain determined
by the angle Phi. To achieve a maximum efficiency of the filtering operation, the parameter Norm can be used
to specify the normalization factor of the filter. If fft_generic is used with Norm = ’n’, the normalization in
the FFT can be avoided. Mode can be used to determine where the DC term of the filter lies or whether the filter
should be used in the real-valued FFT. If fft_generic is used, ’dc_edge’ can be used to gain efficiency. If
fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode = ’dc_center’ must be
used. If rft_generic is used, Mode = ’rft’ must be used.
Parameter
. ImageGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Gaussian filter as image in the frequency domain.
. Sigma1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Standard deviation of the Gaussian in the principal direction of the filter in the spatial domain.
Default Value : 1.0
Suggested values : Sigma1 ∈ {0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0}
Restriction : Sigma1 ≥ 0


. Sigma2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Standard deviation of the Gaussian perpendicular to the principal direction of the filter in the spatial domain.
Default Value : 1.0
Suggested values : Sigma2 ∈ {0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0}
Restriction : Sigma2 ≥ 0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Principal direction of the filter in the spatial domain.
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.523599, 0.785398, 1.047198, 1.570796, 2.094395, 2.356194, 2.617994,
3.141593}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

/* Generate a smoothed derivative filter. */


gen_gauss_filter (ImageGauss, Sigma, Sigma, 0, ’n’, ’dc_edge’, 512, 512)
convert_image_type (ImageGauss, ImageGaussComplex, ’complex’)
gen_derivative_filter (ImageDerivX, ’x’, 1, ’none’, ’dc_edge’, 512, 512)
mult_image (ImageGaussComplex, ImageDerivX, ImageDerivXGauss, 1, 0)
/* Filter an image with the smoothed derivative filter. */
fft_generic (Image, ImageFFT, ’to_freq’, -1, ’none’, ’dc_edge’, ’complex’)
convol_fft (ImageFFT, ImageDerivXGauss, Filtered)
fft_generic (Filtered, ImageX, ’from_freq’, 1, ’none’, ’dc_edge’, ’real’)

Result
gen_gauss_filter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_gauss_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_derivative_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation


gen_highpass ( Hobject *ImageHighpass, double Frequency,
const char *Norm, const char *Mode, Hlong Width, Hlong Height )

T_gen_highpass ( Hobject *ImageHighpass, const Htuple Frequency,
const Htuple Norm, const Htuple Mode, const Htuple Width,
const Htuple Height )

Generate an ideal highpass filter.


gen_highpass generates an ideal highpass filter in the frequency domain. The parameter Frequency deter-
mines the cutoff frequency of the filter as a fraction of the maximum (horizontal and vertical) frequency that can
be represented in an image of size Width × Height, i.e., Frequency should lie between 0 and 1. To achieve a
maximum efficiency of the filtering operation, the parameter Norm can be used to specify the normalization factor
of the filter. If fft_generic is used with Norm = ’n’, the normalization in the FFT can be avoided. Mode can
be used to determine where the DC term of the filter lies or whether the filter should be used in the real-valued FFT.
If fft_generic is used, ’dc_edge’ can be used to gain efficiency. If fft_image and fft_image_inv are
used for filtering, Norm = ’none’ and Mode = ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’
must be used. The resulting image has an inner part with the value 0, and an outer part with the value determined
by the normalization factor.
Parameter

. ImageHighpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real


Highpass filter in the frequency domain.
. Frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Cutoff frequency.
Default Value : 0.1
Suggested values : Frequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : Frequency ≥ 0
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example (Syntax: HDevelop)

/* Filtering with maximum efficiency with fft_generic. */


gen_highpass(Highpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Highpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)

Result
gen_highpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_highpass is reentrant and processed without parallelization.


Possible Successors
convol_fft
See also
convol_fft, gen_lowpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation

gen_lowpass ( Hobject *ImageLowpass, double Frequency, const char *Norm,
const char *Mode, Hlong Width, Hlong Height )

T_gen_lowpass ( Hobject *ImageLowpass, const Htuple Frequency,
const Htuple Norm, const Htuple Mode, const Htuple Width,
const Htuple Height )

Generate an ideal lowpass filter.


gen_lowpass generates an ideal lowpass filter in the frequency domain. The parameter Frequency determines
the cutoff frequency of the filter as a fraction of the maximum (horizontal and vertical) frequency that can be
represented in an image of size Width × Height, i.e., Frequency should lie between 0 and 1. To achieve a
maximum efficiency of the filtering operation, the parameter Norm can be used to specify the normalization factor
of the filter. If fft_generic is used with Norm = ’n’, the normalization in the FFT can be avoided. Mode can
be used to determine where the DC term of the filter lies or whether the filter should be used in the real-valued FFT.
If fft_generic is used, ’dc_edge’ can be used to gain efficiency. If fft_image and fft_image_inv
are used for filtering, Norm = ’none’ and Mode = ’dc_center’ must be used. If rft_generic is used, Mode
= ’rft’ must be used. The resulting image has an inner part with the value set to the normalization factor, and an
outer part with the value 0.
Parameter
. ImageLowpass (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Lowpass filter in the frequency domain.
. Frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Cutoff frequency.
Default Value : 0.1
Suggested values : Frequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : Frequency ≥ 0
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
Example

/* Filtering with maximum efficiency with fft_generic. */


gen_lowpass(Lowpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)

convol_fft(ImageFFT,Lowpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)

Result
gen_lowpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_lowpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation

gen_sin_bandpass ( Hobject *ImageFilter, double Frequency,
const char *Norm, const char *Mode, Hlong Width, Hlong Height )

T_gen_sin_bandpass ( Hobject *ImageFilter, const Htuple Frequency,
const Htuple Norm, const Htuple Mode, const Htuple Width,
const Htuple Height )

Generate a bandpass filter with sinusoidal shape.


gen_sin_bandpass generates a rotationally invariant bandpass filter with the response being a sinusoidal func-
tion in the frequency domain. The maximum of the sine is determined by Frequency, which is given as a fraction
of the maximum (horizontal and vertical) frequency that can be represented in an image of size Width × Height,
i.e., Frequency should lie between 0 and 1. To achieve a maximum efficiency of the filtering operation, the pa-
rameter Norm can be used to specify the normalization factor of the filter. If fft_generic is used with Norm = ’n’,
the normalization in the FFT can be avoided. Mode can be used to determine where the DC term of the filter
lies or whether the filter should be used in the real-valued FFT. If fft_generic is used, ’dc_edge’ can be used
to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode =
’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used. The filter is always zero for the
DC term, rises with the sine function up to Frequency, and drops for higher frequencies accordingly. The range
of the sine used is from 0 to π. All other points are set to zero.
Parameter
. ImageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Bandpass filter as image in the frequency domain.
. Frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Distance of the filter’s maximum from the DC term.
Default Value : 0.1
Suggested values : Frequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : Frequency ≥ 0
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
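Example (Syntax: HDevelop)

The following lines are a minimal sketch, analogous to the gen_highpass example; the variables Width and Height are assumed to hold the size of the image to be filtered.

/* Filtering with maximum efficiency with fft_generic. */
gen_sin_bandpass(Bandpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Bandpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
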
Result
gen_sin_bandpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_sin_bandpass is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
Alternatives
gen_std_bandpass
See also
fft_image_inv, gen_gauss_filter, gen_derivative_filter, gen_bandpass,
gen_bandfilter, gen_highpass, gen_lowpass
Module
Foundation

gen_std_bandpass ( Hobject *ImageFilter, double Frequency,
double Sigma, const char *Type, const char *Norm, const char *Mode,
Hlong Width, Hlong Height )

T_gen_std_bandpass ( Hobject *ImageFilter, const Htuple Frequency,
const Htuple Sigma, const Htuple Type, const Htuple Norm,
const Htuple Mode, const Htuple Width, const Htuple Height )

Generate a bandpass filter with Gaussian or sinusoidal shape.


gen_std_bandpass generates a rotationally invariant bandpass filter with the response being determined by the
parameters Frequency and Sigma: Frequency determines the location of the maximum response with respect
to the DC term, while Sigma determines the width of the frequency band that passes the filter. Frequency and
Sigma are specified as a fraction of the maximum (horizontal and vertical) frequency that can be represented in
an image of size Width × Height. Frequency should lie between 0 and 1. For Type = ’gauss’, a Gaussian
response is generated with Sigma being the standard deviation. For Type = ’sin’, a sine function is generated with
the maximum at Frequency and the extent Sigma. To achieve a maximum efficiency of the filtering operation,
the parameter Norm can be used to specify the normalization factor of the filter. If fft_generic is used with
Norm = ’n’, the normalization in the FFT can be avoided. Mode can be used to determine where the DC term of the
filter lies or whether the filter should be used in the real-valued FFT. If fft_generic is used, ’dc_edge’ can be
used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode
= ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used.
Parameter
. ImageFilter (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Bandpass filter as image in the frequency domain.
. Frequency (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Distance of the filter’s maximum from the DC term.
Default Value : 0.1
Suggested values : Frequency ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : Frequency ≥ 0
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Bandwidth of the filter (standard deviation).
Default Value : 0.01
Suggested values : Sigma ∈ {0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1.0}
Restriction : Sigma ≥ 0


. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Filter type.
Default Value : "sin"
List of values : Type ∈ {"sin", "gauss"}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the filter.
Default Value : "none"
List of values : Norm ∈ {"none", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge", "rft"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image (filter).
Default Value : 512
List of values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048, 4096, 8192}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image (filter).
Default Value : 512
List of values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048, 4096, 8192}
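Example (Syntax: HDevelop)

The following lines are a minimal sketch, analogous to the gen_highpass example, using a Gaussian-shaped response; the variables Width and Height are assumed to hold the size of the image to be filtered.

/* Filtering with maximum efficiency with fft_generic. */
gen_std_bandpass(Bandpass,0.25,0.05,’gauss’,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Bandpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
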
Result
gen_std_bandpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_std_bandpass is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
Alternatives
gen_sin_bandpass
See also
fft_image_inv, gen_gauss_filter, gen_derivative_filter, gen_bandpass,
gen_bandfilter, gen_highpass, gen_lowpass
Module
Foundation

optimize_fft_speed ( Hlong Width, Hlong Height, const char *Mode )


T_optimize_fft_speed ( const Htuple Width, const Htuple Height,
const Htuple Mode )

Optimize the runtime of the FFT.


optimize_fft_speed determines a method that achieves an optimum runtime of the FFT for an image of
size Width × Height. The data that are determined for one image size do not influence the methods used for
other image sizes. Consequently, optimize_fft_speed can be called multiple times with different values
for Width and Height to achieve an optimum runtime for all image sizes that are used in an application. The
parameter Mode determines the thoroughness of the search for the fastest method. For Mode = ’standard’ a fast
search is used, which typically takes a few seconds. The method thus determined results in very good runtimes,
which are not always optimal. For Mode = ’patient’ a more thorough search is performed, which typically takes
several seconds and in most cases leads to optimum runtimes. For Mode = ’exhaustive’ an exhaustive search is
performed, which typically takes several minutes and always results in the optimum runtime. In most applications,
Mode = ’standard’ results in the best compromise between the runtime of the FFT and the time required for
the search of the optimum runtime. The data determined with optimize_fft_speed can be saved with
write_fft_optimization_data and can be loaded with read_fft_optimization_data.


optimize_fft_speed influences the runtime of the following operators, which use the FFT: fft_generic,
fft_image, fft_image_inv, wiener_filter, wiener_filter_ni, phot_stereo,
sfs_pentland, sfs_mod_lr, sfs_orig_lr.
Parameter

. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Width of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Thoroughness of the search for the optimum runtime.
Default Value : "standard"
List of values : Mode ∈ {"standard", "patient", "exhaustive"}
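Example (Syntax: HDevelop)

A possible call sequence is sketched below; the image sizes 512 × 512 and 640 × 480 are merely examples of sizes that might occur in an application.

/* Optimize the FFT once for all image sizes used in the application. */
optimize_fft_speed(512,512,’standard’)
optimize_fft_speed(640,480,’standard’)
/* Subsequent FFTs of these sizes use the optimized method. */
fft_image(Image,ImageFFT)
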
Result
optimize_fft_speed returns H_MSG_TRUE if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
optimize_fft_speed is reentrant and processed without parallelization.
Possible Successors
fft_generic, fft_image, fft_image_inv, wiener_filter, wiener_filter_ni,
phot_stereo, sfs_pentland, sfs_mod_lr, sfs_orig_lr, write_fft_optimization_data
Alternatives
read_fft_optimization_data
See also
optimize_rft_speed
Module
Foundation

optimize_rft_speed ( Hlong Width, Hlong Height, const char *Mode )


T_optimize_rft_speed ( const Htuple Width, const Htuple Height,
const Htuple Mode )

Optimize the runtime of the real-valued FFT.


optimize_rft_speed determines a method that achieves an optimum runtime of the real-valued FFT for an
image of size Width × Height. The data that are determined for one image size do not influence the methods
used for other image sizes. Consequently, optimize_rft_speed can be called multiple times with different
values for Width and Height to achieve an optimum runtime for all image sizes that are used in an application.
The parameter Mode determines the thoroughness of the search for the fastest method. For Mode = ’standard’ a
fast search is used, which typically takes a few seconds. The method thus determined results in very good runtimes,
which are not always optimal. For Mode = ’patient’ a more thorough search is performed, which typically takes
several seconds and in most cases leads to optimum runtimes. For Mode = ’exhaustive’ an exhaustive search is
performed, which typically takes several minutes and always results in the optimum runtime. In most applications,
Mode = ’standard’ results in the best compromise between the runtime of the real-valued FFT and the time
required for the search of the optimum runtime. The data determined with optimize_rft_speed can be saved
with write_fft_optimization_data and can be loaded with read_fft_optimization_data.
optimize_rft_speed influences the runtime of rft_generic.


Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Thoroughness of the search for the optimum runtime.
Default Value : "standard"
List of values : Mode ∈ {"standard", "patient", "exhaustive"}
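Example (Syntax: HDevelop)

A possible call sequence is sketched below; the image size 512 × 512 is merely an example.

/* Optimize the real-valued FFT for 512 x 512 images. */
optimize_rft_speed(512,512,’standard’)
/* Subsequent calls of rft_generic for this size use the optimized method. */
rft_generic(Image,ImageFFT,’to_freq’,’none’,’complex’,512)
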
Result
optimize_rft_speed returns H_MSG_TRUE if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
optimize_rft_speed is reentrant and processed without parallelization.
Possible Successors
rft_generic, write_fft_optimization_data
Alternatives
read_fft_optimization_data
See also
optimize_fft_speed
Module
Foundation

phase_deg ( const Hobject ImageComplex, Hobject *ImagePhase )


T_phase_deg ( const Hobject ImageComplex, Hobject *ImagePhase )

Return the phase of a complex image in degrees.


phase_deg computes the phase of a complex image in degrees. The following formula is used:

phase = (90 / π) · atan2(imaginary part, real part).
Hence, ImagePhase contains half the phase angle. For negative phase angles, 180 is added.
Parameter
. ImageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImagePhase (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : direction
Phase of the image in degrees.
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_deg(FFT,&Phase);
disp_image(Phase,WindowHandle);

Result
phase_deg returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.


Parallelization Information
phase_deg is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_rad
See also
fft_image_inv
Module
Foundation

phase_rad ( const Hobject ImageComplex, Hobject *ImagePhase )


T_phase_rad ( const Hobject ImageComplex, Hobject *ImagePhase )

Return the phase of a complex image in radians.


phase_rad computes the phase of a complex image in radians. The following formula is used:

phase = atan2(imaginary part, real part) .

Parameter

. ImageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex


Input image in frequency domain.
. ImagePhase (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Phase of the image in radians.
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_rad(FFT,&Phase);
disp_image(Phase,WindowHandle);

Result
phase_rad returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
phase_rad is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_deg
See also
fft_image_inv, fft_generic, rft_generic
Module
Foundation


power_byte ( const Hobject Image, Hobject *PowerByte )


T_power_byte ( const Hobject Image, Hobject *PowerByte )

Return the power spectrum of a complex image.


power_byte computes the power spectrum from the real and imaginary parts of a Fourier-transformed image
(see fft_image), i.e., the modulus of the frequencies. The result image is of type ’byte’. The following formula
is used:
√(realpart² + imaginarypart²).

Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. PowerByte (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Power spectrum of the input image.
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_byte(FFT,&Power);
disp_image(Power,WindowHandle);

Result
power_byte returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_byte is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image
Alternatives
abs_image, convert_image_type, power_real, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation

power_ln ( const Hobject Image, Hobject *ImageResult )


T_power_ln ( const Hobject Image, Hobject *ImageResult )

Return the power spectrum of a complex image.


power_ln computes the power spectrum from the real and imaginary parts of a Fourier-transformed image (see
fft_image), i.e., the modulus of the frequencies. Additionally, the natural logarithm is applied to the result. The
result image is of type ’real’. The following formula is used:
ln(√(realpart² + imaginarypart²)).


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Power spectrum of the input image.
Example

read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_ln(FFT,&Power);
disp_image(Power,WindowHandle);

Result
power_ln returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_ln is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_real, power_byte
See also
fft_image, fft_generic, rft_generic
Module
Foundation

power_real ( const Hobject Image, Hobject *ImageResult )


T_power_real ( const Hobject Image, Hobject *ImageResult )

Return the power spectrum of a complex image.


power_real computes the power spectrum from the real and imaginary parts of a Fourier-transformed image
(see fft_image), i.e., the modulus of the frequencies. The result image is of type ’real’. The following formula
is used:
√(realpart² + imaginarypart²).

Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Power spectrum of the input image.
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_real(FFT,&Power);
disp_image(Power,WindowHandle);


Result
power_real returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_real is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_byte, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation

read_fft_optimization_data ( const char *FileName )


T_read_fft_optimization_data ( const Htuple FileName )

Load FFT speed optimization data from a file.


read_fft_optimization_data loads data for optimizing the runtime of the FFT from the file given by
FileName. The optimization data must have been determined previously with optimize_fft_speed
and must have been stored with write_fft_optimization_data. If the stored data have been deter-
mined for the image sizes to be used in the application, a call to optimize_fft_speed is unnecessary. It
should be noted that the data should only be used on the same machine on which they were determined with
optimize_fft_speed. If this is not observed the runtimes will not be optimal. Furthermore, it should be
noted that optimization data that were created with Standard HALCON cannot be used with Parallel HALCON
and vice versa.
read_fft_optimization_data influences the runtime of the following operators, which use the
FFT: fft_generic, fft_image, fft_image_inv, wiener_filter, wiener_filter_ni,
phot_stereo, sfs_pentland, sfs_mod_lr, sfs_orig_lr.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name of the optimization data.
Default Value : "fft_opt.dat"
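Example (Syntax: HDevelop)

A possible call sequence is sketched below; the file name ’fft_opt.dat’ is merely an example.

/* Load previously stored optimization data instead of optimizing again. */
read_fft_optimization_data(’fft_opt.dat’)
fft_image(Image,ImageFFT)
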
Result
read_fft_optimization_data returns H_MSG_TRUE if all parameters are correct. If necessary, an ex-
ception handling is raised.
Parallelization Information
read_fft_optimization_data is reentrant and processed without parallelization.
Possible Successors
fft_generic, fft_image, fft_image_inv, rft_generic, wiener_filter,
wiener_filter_ni, phot_stereo, sfs_pentland, sfs_mod_lr, sfs_orig_lr
Alternatives
optimize_fft_speed, optimize_rft_speed
See also
write_fft_optimization_data
Module
Foundation


rft_generic ( const Hobject Image, Hobject *ImageFFT,
const char *Direction, const char *Norm, const char *ResultType,
Hlong Width )

T_rft_generic ( const Hobject Image, Hobject *ImageFFT,
const Htuple Direction, const Htuple Norm, const Htuple ResultType,
const Htuple Width )

Compute the real-valued fast Fourier transform of an image.


rft_generic computes the fast Fourier transform of the input image Image. In contrast to fft_generic,
fft_image, and fft_image_inv, the fact that the input image in the forward transform is a real-valued
image (i.e., not a complex image) is used. In this case, the complex output image has a redundancy. The values
in the right half of the image are the complex conjugates of the corresponding values in the left half of the image.
Consequently, runtime and memory can be saved by only computing and storing the left half of the complex image.
The parameter ResultType can be used to specify the result image type of the reverse transform (Direction
= ’from_freq’). In the forward transform (Direction = ’to_freq’), ResultType must be set to ’complex’.
The parameter Direction determines whether the transform should be performed to the frequency domain or back
into the spatial domain. For Direction = ’to_freq’ the input image must have a real-valued type, i.e., a complex
image may not be used as input. All image types that can be converted into an image of type real are supported. In
this case, the output is a complex image of dimension (w/2 + 1) × h, where w and h are the width and height of
the input image. In this mode, the exponent -1 is used in the transform (see fft_generic). For Direction =
’from_freq’, the input image must be complex. In this case, the size of the input image is insufficient to determine
the size of the output image. This must be done by setting Width to a valid value, i.e., to 2w − 2 or 2w − 1, where
w is the width of the complex image. In this mode, the exponent 1 is used in the transform.
The normalizing factor can be set with Norm, and can take on the values ’none’, ’sqrt’ and ’n’. The user must
ensure the consistent use of the parameters. This means that the normalizing factors used for the forward and
backward transform must yield wh when multiplied.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Input image.
. ImageFFT (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex
Fourier-transformed image.
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Calculate forward or reverse transform.
Default Value : "to_freq"
List of values : Direction ∈ {"to_freq", "from_freq"}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the transform.
Default Value : "sqrt"
List of values : Norm ∈ {"none", "sqrt", "n"}
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Image type of the output image.
Default Value : "complex"
List of values : ResultType ∈ {"complex", "byte", "int1", "int2", "uint2", "int4", "real", "direction",
"cyclic"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the corresponding real-valued image in the spatial domain (required to determine the output size for Direction = ’from_freq’).
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
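Example (Syntax: HDevelop)

The following lines are a minimal sketch of filtering with the real-valued FFT; the variables Width and Height are assumed to hold the size of the input image, and the standard deviation 3.0 of the Gaussian is merely an example.

/* Smooth an image using the real-valued FFT. */
gen_gauss_filter(ImageGauss,3.0,3.0,0,’n’,’rft’,Width,Height)
rft_generic(Image,ImageFFT,’to_freq’,’none’,’complex’,Width)
convol_fft(ImageFFT,ImageGauss,ImageConvol)
rft_generic(ImageConvol,ImageSmooth,’from_freq’,’none’,’real’,Width)
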
Result
rft_generic returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
rft_generic is reentrant and automatically parallelized (on tuple level).


Possible Predecessors
optimize_rft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convert_image_type, power_byte, power_real, power_ln, phase_deg,
phase_rad
Alternatives
fft_generic, fft_image, fft_image_inv
Module
Foundation

write_fft_optimization_data ( const char *FileName )


T_write_fft_optimization_data ( const Htuple FileName )

Store FFT speed optimization data in a file.


write_fft_optimization_data stores the data for the optimization of the runtime of the FFT that were
determined with optimize_fft_speed in the file given by FileName. The data can be loaded with
read_fft_optimization_data.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name of the optimization data.
Default Value : "fft_opt.dat"
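Example (Syntax: HDevelop)

A possible call sequence is sketched below; the file name ’fft_opt.dat’ is merely an example.

/* Determine and store the optimization data once, e.g., during installation. */
optimize_fft_speed(512,512,’patient’)
write_fft_optimization_data(’fft_opt.dat’)
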
Result
write_fft_optimization_data returns H_MSG_TRUE if all parameters are correct. If necessary, an
exception handling is raised.
Parallelization Information
write_fft_optimization_data is reentrant and processed without parallelization.
Possible Predecessors
optimize_fft_speed, optimize_rft_speed
See also
fft_generic, fft_image, fft_image_inv, wiener_filter, wiener_filter_ni,
phot_stereo, sfs_pentland, sfs_mod_lr, sfs_orig_lr, read_fft_optimization_data
Module
Foundation

3.7 Geometric-Transformations
T_affine_trans_image ( const Hobject Image, Hobject *ImageAffinTrans,
const Htuple HomMat2D, const Htuple Interpolation,
const Htuple AdaptImageSize )

Apply an arbitrary affine 2D transformation to images.


affine_trans_image applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation, and
slant (skewing), to the images given in Image and returns the transformed images in ImageAffinTrans.
The affine transformation is described by the homogeneous transformation matrix given in HomMat2D, which
can be created using the operators hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_translate, etc., or be the result of operators like vector_angle_to_rigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the col coordinate corresponds to y of the coordinate system in which the transforma-
tion matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.


The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter Interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see set_system) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the difference to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image can be controlled by the parameter AdaptImageSize: With value ’true’ the size
will be adapted so that no clipping occurs at the right or lower edge. With value ’false’ the target image has the
same size as the input image. Note that, independent of AdaptImageSize, the image is always clipped at the
left and upper edge, i.e., all image parts that have negative coordinates after the transformation are clipped.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output pixels as
homogeneous vectors):
       
[ RowTrans_i ]   [ 1 0 −0.5 ]              [ 1 0 +0.5 ]   [ Row_i ]
[ ColTrans_i ] = [ 0 1 −0.5 ] · HomMat2D · [ 0 1 +0.5 ] · [ Col_i ]
[     1      ]   [ 0 0   1  ]              [ 0 0   1  ]   [   1   ]

As an effect, you might get unexpected results when creating affine transformations based on coordinates that are
derived from the image, e.g., by operators like area_center_gray. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric image and then rotate the image around this point using
hom_mat2d_rotate, the resulting image will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_image:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image(Image, ImageAffinTrans, HomMat2DAdapted, ’constant’,
’false’)

Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / real


Input image.

HALCON/C Reference Manual, 2008-5-13


3.7. GEOMETRIC-TRANSFORMATIONS 191

. ImageAffinTrans (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / real
Transformed image.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. AdaptImageSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Adaption of size of result image.
Default Value : "false"
List of values : AdaptImageSize ∈ {"true", "false"}
Example (Syntax: HDevelop)

/* Reduction of an image (512 x 512 Pixels) by 50%, rotation */


/* by 180 degrees and translation to the upper-left corner: */

hom_mat2d_identity(Matrix1)
hom_mat2d_scale(Matrix1,0.5,0.5,256.0,256.0,Matrix2)
hom_mat2d_rotate(Matrix2,3.14,256.0,256.0,Matrix3)
hom_mat2d_translate(Matrix3,-128.0,-128.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,’constant’,’false’)

/* Enlarging the part of an image in the interactively */
/* chosen rectangular window sector: */

draw_rectangle2(WindowHandle,L,C,Phi,L1,L2)
hom_mat2d_identity(Matrix1)
get_system(’width’,Width)
get_system(’height’,Height)
hom_mat2d_translate(Matrix1,Height/2.0-L,Width/2.0-C,Matrix2)
hom_mat2d_rotate(Matrix2,3.14-Phi,Height/2.0,Width/2.0,Matrix3)
hom_mat2d_scale(Matrix3,Height/(2.0*L2),Width/(2.0*L1),
                Height/2.0,Width/2.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,’constant’,’false’)

Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_image returns H_MSG_TRUE. If the input is empty the behavior can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
affine_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_rotate, hom_mat2d_scale
Alternatives
affine_trans_image_size, zoom_image_size, zoom_image_factor, mirror_image,
rotate_image, affine_trans_region
See also
set_part_style
Module
Foundation


T_affine_trans_image_size ( const Hobject Image,
Hobject *ImageAffinTrans, const Htuple HomMat2D,
const Htuple Interpolation, const Htuple Width, const Htuple Height )

Apply an arbitrary affine 2D transformation to an image and specify the output image size.
affine_trans_image_size applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation,
and slant (skewing), to the images given in Image and returns the transformed images in ImageAffinTrans.
The affine transformation is described by the homogeneous transformation matrix given in HomMat2D, which
can be created using the operators hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_translate, etc., or be the result of operators like vector_angle_to_rigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the col coordinate corresponds to y of the coordinate system in which the transforma-
tion matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.
The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter Interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see set_system) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the differences to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image is specified by the parameters Width and Height. Note that the image is always
clipped at the left and upper edge, i.e., all image parts that have negative coordinates after the transformation are
clipped. If the affine transformation (in particular, the translation) is chosen appropriately, a part of the image
can be transformed as well as cropped in one call. This is useful, for example, when using the variation model
(see compare_variation_model), because with this mechanism only the parts of the image that should be
examined are transformed.
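As a sketch of this combined transformation and cropping (the region coordinates RoiRow1, RoiCol1 and the sizes RoiWidth, RoiHeight are only assumed placeholders), a translation moves the part of interest to the origin while Width and Height restrict the output to it:
* Shift the upper left corner of the region of interest to the origin ...
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_translate (HomMat2DIdentity, -RoiRow1, -RoiCol1, HomMat2DCrop)
* ... and restrict the output image to the size of that region.
affine_trans_image_size (Image, ImagePart, HomMat2DCrop, ’constant’, RoiWidth, RoiHeight)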
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image_size corresponds to the
following chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output
pixels as homogeneous vectors):


       
\[
\begin{pmatrix} RowTrans_i \\ ColTrans_i \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & -0.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathrm{HomMat2D} \cdot
\begin{pmatrix} 1 & 0 & +0.5 \\ 0 & 1 & +0.5 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} Row_i \\ Col_i \\ 1 \end{pmatrix}
\]
As a consequence, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the image, e.g., by operators like area_center_gray. For example, if you use this op-
erator to calculate the center of gravity of a rotationally symmetric image and then rotate the image around
this point using hom_mat2d_rotate, the resulting image will not lie on the original one. In such a
case, you can compensate this effect by applying the following translations to HomMat2D before using it in
affine_trans_image_size:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image_size(Image, ImageAffinTrans, HomMat2DAdapted,
’constant’, Width, Height)

Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / real


Input image.
. ImageAffinTrans (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
/ real
Transformed image.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the output image.
Default Value : 640
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the output image.
Default Value : 480
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_image_size returns H_MSG_TRUE. If the input is empty the behavior can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
affine_trans_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_rotate, hom_mat2d_scale
Alternatives
affine_trans_image, zoom_image_size, zoom_image_factor, mirror_image,
rotate_image, affine_trans_region
See also
set_part_style
Module
Foundation


T_gen_bundle_adjusted_mosaic ( const Hobject Images,
Hobject *MosaicImage, const Htuple HomMatrices2D,
const Htuple StackingOrder, const Htuple TransformRegion,
Htuple *TransMat2D )

Combine multiple images into a mosaic image.


gen_bundle_adjusted_mosaic combines the input images contained in the object Images into a mosaic
image MosaicImage. The relative positions of the images are defined by 3×3 projective transformation matrices.
The array HomMatrices2D contains a sequence of these linearized matrices. The transformation matrices can
be computed with bundle_adjust_mosaic.
The origin of MosaicImage and its size are automatically chosen so that all of the input images are completely
visible.
The order in which the images are added to the mosaic is given by the array StackingOrder. The first index in
this array will end up at the bottom of the image stack while the last one will be on top. If ’default’ is given instead
of an array of integers, the canonical order (images in the order used in Images) will be used.
The parameter TransformRegion can be used to determine whether the domains of Images are also trans-
formed. Since the transformation of the domains costs runtime, this parameter should be used to specify whether
this is desired or not. If TransformRegion is set to ’false’ the domain of the input images is ignored and the
complete images are transformed.
On output, the parameter TransMat2D contains a 3 × 3 projective transformation matrix that describes the
translation that was necessary to transform all images completely into the output image.
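A minimal usage sketch (assuming HomMatrices2D has already been computed, e.g., with bundle_adjust_mosaic) looks as follows:
* Combine the images with the canonical stacking order and without
* transforming the image domains.
gen_bundle_adjusted_mosaic (Images, MosaicImage, HomMatrices2D,
                            ’default’, ’false’, TransMat2D)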
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte / uint2 / real
Input images.
. MosaicImage (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Output image.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 projective transformation matrices.
. StackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong
Stacking order of the images in the mosaic.
Default Value : "default"
Suggested values : StackingOrder ∈ {"default"}
. TransformRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the domains of the input images also be transformed?
Default Value : "false"
List of values : TransformRegion ∈ {"true", "false"}
. TransMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
3 × 3 projective transformation matrix that describes the translation that was necessary to transform all images
completely into the output image.
Parallelization Information
gen_bundle_adjusted_mosaic is reentrant and processed without parallelization.
Possible Predecessors
bundle_adjust_mosaic
Alternatives
gen_projective_mosaic
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.


Module
Matching

T_gen_cube_map_mosaic ( const Hobject Images, Hobject *Front,
Hobject *Rear, Hobject *Left, Hobject *Right, Hobject *Top,
Hobject *Bottom, const Htuple CameraMatrices,
const Htuple RotationMatrices, const Htuple CubeMapDimension,
const Htuple StackingOrder, const Htuple Interpolation )

Create 6 cube map images of a spherical mosaic.


gen_cube_map_mosaic creates 6 cube map images of a spherical mosaic Front, Left, Rear,
Right, Top and Bottom from the input images passed in Images. The pose of the images in space,
which is used to compute the position of the images with respect to the surface of the sphere, can
be determined with stationary_camera_self_calibration. The camera and rotation matrices
computed with stationary_camera_self_calibration can be used in CameraMatrices and
RotationMatrices. A spherical mosaic can only be created from images that were taken with a stationary
camera (see stationary_camera_self_calibration).
The width and height of the output cube map images can be selected by setting the parameter
CubeMapDimension. The value represents the width and height in pixels.
The mode in which the images are added to the mosaic is given by StackingOrder. For StackingOrder =
’voronoi’, the points in the mosaic image are determined from the Voronoi cell of the respective input image. This
means that the gray values are taken from the points of the input image to whose center the pixel in the mosaic
image has the smallest distance on the sphere. This mode has the advantage that vignetting and uncorrected
radial distortions are less noticeable in the mosaic image because they typically are symmetric with respect to the
image center. Alternatively, with the choice of parameters described in the following, a mode can be selected that
has the same effect as if the images were painted successively into the mosaic image. Here, the order in which
the images are added to the mosaic image is important. Therefore, an array of integer values can be passed in
StackingOrder. The first index in this array will end up at the bottom of the image stack while the last one
will be on top. If ’default’ is given instead of an array of integers, the canonical order (images in the order used
in Images) will be used. Hence, if neither ’voronoi’ nor ’default’ are used, StackingOrder must contain a
permutation of the numbers 1,...,n, where n is the number of images passed in Images. It should be noted that
the mode ’voronoi’ cannot always be used. For example, at least two images must be passed to use this mode.
Furthermore, for very special configurations of the positions of the image centers on the sphere, the Voronoi cells
cannot be determined uniquely. With StackingOrder = ’blend’, an additional mode is available, which blends
the images of the mosaic smoothly. This way seams between the images become less apparent. The seam lines
between the images are the same as in ’voronoi’. This mode leads to visually more appealing images, but requires
significantly more resources. If the mode ’voronoi’ or ’blend’ cannot be used for whatever reason the mode is
switched internally to ’default’ automatically.
The parameter Interpolation can be used to select the desired interpolation mode for creating the cube maps.
Bilinear and bicubic interpolation is available.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte / uint2 / real
Input images.
. Front (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Front cube map.
. Rear (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Rear cube map.
. Left (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Left cube map.
. Right (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Right cube map.
. Top (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Top cube map.
. Bottom (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Bottom cube map.


. CameraMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double


(Array of) 3 × 3 projective camera matrices that determine the interior camera parameters.
. RotationMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. CubeMapDimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Width and height of the resulting cube maps.
Default Value : 1000
Restriction : CubeMapDimension ≥ 0
. StackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong
Mode of adding the images to the mosaic image.
Default Value : "voronoi"
Suggested values : StackingOrder ∈ {"blend", "voronoi", "default"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Mode of image interpolation.
Default Value : "bilinear"
Suggested values : Interpolation ∈ {"bilinear", "bicubic"}
Example (Syntax: HDevelop)

* For the input data to stationary_camera_self_calibration, please
* refer to the example for stationary_camera_self_calibration.
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
gen_cube_map_mosaic (Images, Front, Left, Rear, Right, Top, Bottom,
CameraMatrix, RotationMatrices, 1000, ’default’,
’bicubic’)

* Alternatively, if kappa should be determined, the following calls
* can be made:
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’,’kappa’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
cam_mat_to_cam_par (CameraMatrix, Kappa, 640, 480, CamParam)
change_radial_distortion_cam_par (’fixed’, CamParam, 0, CamParOut)
gen_radial_distortion_map (Map, CamParam, CamParOut, ’bilinear’)
map_image (Images, Map, ImagesRect)
gen_cube_map_mosaic (ImagesRect, Front, Left, Rear, Right, Top, Bottom,
CameraMatrix, RotationMatrices, 1000, ’default’,
’bicubic’)

Result
If the parameters are valid, the operator gen_cube_map_mosaic returns the value H_MSG_TRUE. If neces-
sary an exception handling is raised.
Parallelization Information
gen_cube_map_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_spherical_mosaic, gen_projective_mosaic


References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching

T_gen_projective_mosaic ( const Hobject Images, Hobject *MosaicImage,
const Htuple StartImage, const Htuple MappingSource,
const Htuple MappingDest, const Htuple HomMatrices2D,
const Htuple StackingOrder, const Htuple TransformRegion,
Htuple *MosaicMatrices2D )

Combine multiple images into a mosaic image.


gen_projective_mosaic combines the input images contained in the object Images into a mosaic image
MosaicImage. The relative positions of the images are defined by 3 × 3 projective transformation matrices. The
array HomMatrices2D contains a sequence of these linearized matrices. The values in MappingSource and
MappingDest are the indices of the images that the corresponding matrix applies to. MappingSource=4 and
MappingDest=7 means that the matrix describes the transformation of the image number 4 into the projective
plane of image 7. The transformation matrices between the respective image pairs given by MappingSource
and MappingDest are typically determined with proj_match_points_ransac.
As usual for operators that access image objects (e.g., select_obj), the images are numbered starting from 1,
i.e., MappingSource, MappingDest, StartImage, and StackingOrder) must contain values between
1 and the number of images passed in Images.
The parameter StartImage states which image defines the image plane of the final image, that is, which input
image remains unchanged in the output image. This is usually an image that is located near the center of the image
mosaic.
The origin of MosaicImage and its size are automatically chosen so that all of the input images are completely
visible.
The order in which the images are added to the mosaic is given by the array StackingOrder. The first index in
this array will end up at the bottom of the image stack while the last one will be on top. If ’default’ is given instead
of an array of integers, the canonical order (images in the order used in Images) will be used.
The parameter TransformRegion can be used to determine whether the domains of Images are also trans-
formed. Since the transformation of the domains costs runtime, this parameter should be used to specify whether
this is desired or not. If TransformRegion is set to ’false’ the domain of the input images is ignored and the
complete images are transformed.
On output, the parameter MosaicMatrices2D contains a set of 3 × 3 projective transformation matrices that
describe for each image in Images the mapping of the image to its position in the mosaic.
Parameter

. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte / uint2 / real


Input images.
. MosaicImage (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Output image.
. StartImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Index of the central input image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 projective transformation matrices.


. StackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong


Stacking order of the images in the mosaic.
Default Value : "default"
Suggested values : StackingOrder ∈ {"default"}
. TransformRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the domains of the input images also be transformed?
Default Value : "false"
List of values : TransformRegion ∈ {"true", "false"}
. MosaicMatrices2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Array of 3 × 3 projective transformation matrices that determine the position of the images in the mosaic.
Example (Syntax: HDevelop)

gen_empty_obj (Images)
for J := 1 to 6 by 1
read_image (Image, ’mosaic/pcb_’+J$’02’)
concat_obj (Images, Image, Images)
endfor
From := [1,2,3,4,5]
To := [2,3,4,5,6]
Num := |From|
ProjMatrices := []
for J := 0 to Num-1 by 1
F := From[J]
T := To[J]
select_obj (Images, F, ImageF)
select_obj (Images, T, ImageT)
points_foerstner (ImageF, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsF, ColJunctionsF, CoRRJunctionsF,
CoRCJunctionsF, CoCCJunctionsF, RowAreaF,
ColAreaF, CoRRAreaF, CoRCAreaF, CoCCAreaF)
points_foerstner (ImageT, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsT, ColJunctionsT, CoRRJunctionsT,
CoRCJunctionsT, CoCCJunctionsT, RowAreaT,
ColAreaT, CoRRAreaT, CoRCAreaT, CoCCAreaT)
proj_match_points_ransac (ImageF, ImageT, RowJunctionsF,
ColJunctionsF, RowJunctionsT,
ColJunctionsT, ’ncc’, 21, 0, 0, 480, 640,
0, 0.5, ’gold_standard’, 1, 4364537,
ProjMatrix, Points1, Points2)
ProjMatrices := [ProjMatrices,ProjMatrix]
endfor
gen_projective_mosaic (Images, MosaicImage, 2, From, To, ProjMatrices,
’default’, ’false’, MosaicMatrices2D)

Parallelization Information
gen_projective_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, vector_to_proj_hom_mat2d,
hom_vector_to_proj_hom_mat2d
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.


Module
Matching

T_gen_spherical_mosaic ( const Hobject Images, Hobject *MosaicImage,
const Htuple CameraMatrices, const Htuple RotationMatrices,
const Htuple LatMin, const Htuple LatMax, const Htuple LongMin,
const Htuple LongMax, const Htuple LatLongStep,
const Htuple StackingOrder, const Htuple Interpolation )

Create a spherical mosaic image.


gen_spherical_mosaic creates a spherical mosaic image MosaicImage from the input images passed in
Images. The pose of the images in space, which is used to compute the position of the images with respect
to the surface of the sphere, can be determined with stationary_camera_self_calibration. The
camera and rotation matrices computed with stationary_camera_self_calibration can be used in
CameraMatrices and RotationMatrices. A spherical mosaic can only be created from images that were
taken with a stationary camera (see stationary_camera_self_calibration).
The mosaic is computed in spherical coordinates (longitude and latitude). The row axis of MosaicImage corre-
sponds to the latitude, while the column axis corresponds to the longitude. The part of the sphere that is computed
by gen_spherical_mosaic is determined by LatMin, LatMax, LongMin, and LongMax. These parame-
ters are specified in degrees and determine a rectangular part of the latitude and longitude coordinates. The latitude
-90 corresponds to the north pole (i.e., the straight up viewing direction), while 90 corresponds to the south pole
(i.e., the straight down viewing direction). The longitude 0 corresponds to the straight ahead viewing direction.
Negative longitudes correspond to viewing directions to the left, while positive longitudes correspond to viewing
directions to the right. Hence, to obtain a complete image of the sphere, LatMin = -90, LatMax = 90, LongMin
= -180, and LongMax = 180 must be used. In many cases, the mosaic will not cover the entire sphere. In these
cases, it is useful to select the desired part of the sphere with the above parameters. This can be done by explicitly
specifying the desired rectangle. However, often it is desirable to determine the smallest rectangle that encloses
all images automatically. This can be done by using LatMin < -90, LatMax > 90, LongMin < -180, and
LongMax > 180. Only the parameters that lie outside the normal range of values are determined automatically.
The angle step per pixel in MosaicImage can be selected with LatLongStep, which also is an angle specified
in degrees. With this, the resolution of the mosaic image can be controlled. If LatLongStep is set to 0 the angle
step is calculated automatically by trying to preserve the pixel size of the original images as well as possible.
The mode in which the images are added to the mosaic is given by StackingOrder. For StackingOrder =
’voronoi’, the points in the mosaic image are determined from the Voronoi cell of the respective input image. This
means that the gray values are taken from the points of the input image to whose center the pixel in the mosaic
image has the smallest distance on the sphere. This mode has the advantage that vignetting and uncorrected radial
distortions are less noticeable in the mosaic image because they typically are symmetric with respect to the image
center. Alternatively, with the choice of parameters described in the following, a mode can be selected
that has the same effect as if the images were painted successively into the mosaic image. Here, the order in which
the images are added to the mosaic image is important. Therefore, an array of integer values can be passed in
StackingOrder. The first index in this array will end up at the bottom of the image stack while the last one
will be on top. If ’default’ is given instead of an array of integers, the canonical order (images in the order used
in Images) will be used. Hence, if neither ’voronoi’ nor ’default’ are used, StackingOrder must contain a
permutation of the numbers 1,...,n, where n is the number of images passed in Images. It should be noted that
the mode ’voronoi’ cannot always be used. For example, at least two images must be passed to use this mode.
Furthermore, for very special configurations of the positions of the image centers on the sphere, the Voronoi cells
cannot be determined uniquely. With StackingOrder = ’blend’, an additional mode is available, which blends
the images of the mosaic smoothly. This way seams between the images become less apparent. The seam lines
between the images are the same as in ’voronoi’. This mode leads to visually more appealing images, but requires
significantly more resources. If the mode ’voronoi’ or ’blend’ cannot be used for whatever reason the mode is
switched internally to ’default’ automatically.
The parameter Interpolation can be used to select the desired interpolation mode for creating the mosaic.
Bilinear and bicubic interpolation is available.


Parameter

. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte / uint2 / real


Input images.
. MosaicImage (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2 / real
Output image.
. CameraMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
(Array of) 3 × 3 projective camera matrices that determine the interior camera parameters.
. RotationMatrices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. LatMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . double / Hlong
Minimum latitude of points in the spherical mosaic image.
Default Value : -90
Suggested values : LatMin ∈ {-100, -90, -80, -70, -60, -50, -40, -30, -20, -10}
Restriction : LatMin ≤ 90
. LatMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . double / Hlong
Maximum latitude of points in the spherical mosaic image.
Default Value : 90
Suggested values : LatMax ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : (LatMax ≥ -90) ∧ (LatMax > LatMin)
. LongMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . double / Hlong
Minimum longitude of points in the spherical mosaic image.
Default Value : -180
Suggested values : LongMin ∈ {-200, -180, -160, -140, -120, -100, -90, -80, -70, -60, -50, -40, -30, -20, -10}
Restriction : LongMin ≤ 180
. LongMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . double / Hlong
Maximum longitude of points in the spherical mosaic image.
Default Value : 180
Suggested values : LongMax ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 140, 160, 180, 200}
Restriction : (LongMax ≥ -90) ∧ (LongMax > LongMin)
. LatLongStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . double / Hlong
Latitude and longitude angle step width.
Default Value : 0.1
Suggested values : LatLongStep ∈ {0, 0.02, 0.05, 0.1, 0.2, 0.5, 1}
Restriction : LatLongStep ≥ 0
. StackingOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong
Mode of adding the images to the mosaic image.
Default Value : "voronoi"
Suggested values : StackingOrder ∈ {"blend", "voronoi", "default"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char * / Hlong
Mode of interpolation when creating the mosaic image.
Default Value : "bilinear"
Suggested values : Interpolation ∈ {"bilinear", "bicubic"}
Example (Syntax: HDevelop)

* For the input data to stationary_camera_self_calibration, please
* refer to the example for stationary_camera_self_calibration.
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
gen_spherical_mosaic (Images, MosaicImage, CameraMatrix,
RotationMatrices, -100, 100, -200, 200, 0,
’default’)


* Alternatively, if kappa should be determined, the following calls
* can be made:
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’,’kappa’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
cam_mat_to_cam_par (CameraMatrix, Kappa, 640, 480, CamParam)
change_radial_distortion_cam_par (’fixed’, CamParam, 0, CamParOut)
gen_radial_distortion_map (Map, CamParam, CamParOut, ’bilinear’)
map_image (Images, Map, ImagesRect)
gen_spherical_mosaic (ImagesRect, MosaicImage, CameraMatrix,
RotationMatrices, -100, 100, -200, 200, 0,
’default’)

Result
If the parameters are valid, the operator gen_spherical_mosaic returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
gen_spherical_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_cube_map_mosaic, gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching

map_image ( const Hobject Image, const Hobject Map,
Hobject *ImageMapped )

T_map_image ( const Hobject Image, const Hobject Map,
Hobject *ImageMapped )

Apply a general transformation to an image.


map_image transforms an image Image using an arbitrary transformation Map which, for example, was pre-
viously generated using gen_image_to_world_plane_map or gen_radial_distortion_map. The
multi-channel image Map must be organized as follows:
The height and the width of Map define the size of the output image ImageMapped. The number of channels
in Map defines whether no interpolation or bilinear interpolation should be used. If Map only consists of one
channel, no interpolation is applied during the transformation. This channel contains ’int4’ values that describe
the geometric transformation: For each pixel in the output image ImageMapped the linearized coordinate of the
pixel in the input image Image from which the gray value should be taken is stored.
If bilinear interpolation between the pixels in the input image should be applied, Map must consist of 5 channels.
The first channel again consists of an ’int4’ image and describes the geometric transformation. The channels 2-5
consist of an ’uint2’ image each and contain the weights [0...1] of the four neighboring pixels that are used during
bilinear interpolation. If the overall brightness of the output image ImageMapped should not differ from the
overall brighntess of the input image Image, the sum of the four unscaled weights must be 1 for each pixel. The

HALCON 8.0.2
202 CHAPTER 3. FILTER

weights [0...1] are scaled to the range of values of the ’uint2’ image and therefore hold integer values from 0 bis
65535.
Furthermore, the weights must be chosen in a way that the range of values of the output image ImageMapped is
not exceeded. The geometric relation between the four channels 2-5 is illustrated in the following sketch:
2 3
4 5
The reference point of the four pixels is the upper left pixel. The linearized coordinate of the reference point is
stored in the first channel.
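A typical usage sketch (the calibration data CamParam and CamParamOut are assumed to be known) computes the mapping once and then applies it to each image:
* Compute the map once, e.g., for a radial distortion correction ...
gen_radial_distortion_map (Map, CamParam, CamParamOut, ’bilinear’)
* ... and reuse it for every image that has to be rectified.
map_image (Image, Map, ImageMapped)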
Attention
The weights must be chosen in a way that the range of values of the output image ImageMapped is not exceeded.
For runtime reasons during the mapping process, it is not checked whether the linearized coordinates which are
stored in the first channel of Map, lie inside the input image. Thus, it must be ensured by the user that this constraint
is fulfilled. Otherwise, the program may crash!
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Image to be mapped.
. Map (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : int4 / uint2
Image containing the mapping data.
. ImageMapped (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Mapped image.
Result
map_image returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
map_image is reentrant and processed without parallelization.
Possible Predecessors
gen_image_to_world_plane_map, gen_radial_distortion_map
See also
affine_trans_image, rotate_image
Module
Foundation

mirror_image ( const Hobject Image, Hobject *ImageMirror,
const char *Mode )

T_mirror_image ( const Hobject Image, Hobject *ImageMirror,
const Htuple Mode )

Mirror an image.
mirror_image reflects an image Image about one of three possible axes. If Mode is set to ’row’, it is reflected
about the horizontal axis, if Mode is set to ’column’, about the vertical axis, and if Mode is set to ’main’, about
the main diagonal x = y.
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Input image.
. ImageMirror (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4
/ real
Reflected image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Axis of reflection.
Default Value : "row"
List of values : Mode ∈ {"row", "column", "main"}


Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
mirror_image(Image,&MirImage,"row");
disp_image(MirImage,WindowHandle);

Parallelization Information
mirror_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image, rotate_image
See also
rotate_image, hom_mat2d_rotate
Module
Foundation

polar_trans_image ( const Hobject ImageXY, Hobject *ImagePolar,
Hlong Row, Hlong Column, Hlong Width, Hlong Height )

T_polar_trans_image ( const Hobject ImageXY, Hobject *ImagePolar,
const Htuple Row, const Htuple Column, const Htuple Width,
const Htuple Height )

Transform an image to polar coordinates


polar_trans_image transforms an image in cartesian coordinates to an image in polar coordinates. The size
of the resulting image is selected with Width and Height. Width determines the angular resolution, while
Height determines the resolution of the radius. Row and Column determine the center of the polar coordinate
system in the original image ImageXY. This point is mapped to the upper row of ImagePolar.
A point (x’,y’) in the result image corresponds to the point (x,y) in the original image in the following manner:

\[
x = y' \cdot \cos\bigl(2\pi \, x' / Width\bigr) + Column \qquad
y = y' \cdot \sin\bigl(2\pi \, x' / Width\bigr) + Row
\]

Parameter

. ImageXY (input_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image in cartesian coordinates.
. ImagePolar (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Result image in polar coordinates.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the center of the coordinate system.
Default Value : 100
Suggested values : Row ∈ {0, 10, 100, 200}
Typical range of values : 0 ≤ Row ≤ 512
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the center of the coordinate system.
Default Value : 100
Suggested values : Column ∈ {0, 10, 100, 200}
Typical range of values : 0 ≤ Column ≤ 512
Minimum Increment : 1
Recommended Increment : 1


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong


Width of the result image.
Default Value : 314
Suggested values : Width ∈ {100, 200, 157, 314, 512}
Typical range of values : 2 ≤ Width ≤ 512
Minimum Increment : 1
Recommended Increment : 10
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the result image.
Default Value : 200
Suggested values : Height ∈ {100, 128, 256, 512}
Typical range of values : 2 ≤ Height ≤ 512
Minimum Increment : 1
Recommended Increment : 10
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
polar_trans_image(Image,&PolarImage,100,100,314,200);
disp_image(PolarImage,WindowHandle);

Parallelization Information
polar_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
polar_trans_image_ext
See also
polar_trans_image_inv, polar_trans_region, polar_trans_region_inv,
polar_trans_contour_xld, polar_trans_contour_xld_inv, affine_trans_image
Module
Foundation

polar_trans_image_ext ( const Hobject Image, Hobject *PolarTransImage,
double Row, double Column, double AngleStart, double AngleEnd,
double RadiusStart, double RadiusEnd, Hlong Width, Hlong Height,
const char *Interpolation )

T_polar_trans_image_ext ( const Hobject Image,
Hobject *PolarTransImage, const Htuple Row, const Htuple Column,
const Htuple AngleStart, const Htuple AngleEnd,
const Htuple RadiusStart, const Htuple RadiusEnd, const Htuple Width,
const Htuple Height, const Htuple Interpolation )

Transform an annular arc in an image to polar coordinates.


polar_trans_image_ext transforms the annular arc specified by the center point (Row, Column), the radii
RadiusStart and RadiusEnd and the angles AngleStart and AngleEnd in the image Image to its polar
coordinate version in the image PolarTransImage of the dimensions Width × Height.
The upper left pixel in the output image always corresponds to the point in the input image that is specified by
RadiusStart and AngleStart. Analogously, the lower right pixel in the output image always corresponds to
the point in the input image that is specified by RadiusEnd and AngleEnd. In the usual mode (AngleStart
< AngleEnd and RadiusStart < RadiusEnd), the polar transformation is performed in the mathemati-
cally positive orientation (counterclockwise). Furthermore, points with smaller radii lie in the upper part of the
output image. By suitably exchanging the values of these parameters (e.g., AngleStart > AngleEnd or
RadiusStart > RadiusEnd), any desired orientation of the output image can be achieved.
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’. With
’nearest_neighbor’, the gray value of a pixel in the output image is determined by the gray value of the closest
pixel in the input image. With ’bilinear’, the gray value of a pixel in the output image is determined by bilinear
interpolation of the gray values of the four closest pixels in the input image. The mode ’bilinear’ results in images
of better quality, but is slower than the mode ’nearest_neighbor’.
The angles can be chosen from all real numbers. Center point and radii can be real as well. However, if they are
both integers and the difference of RadiusEnd and RadiusStart equals the height Height of the destination
image, calculation will be sped up through an optimized routine.
The radii and angles are inclusive, which means that the first row of the target image contains the circle with radius
RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles, where the
difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the first column
of the target image will be the same as the last.
To avoid this, do not make this difference 2π, but 2π(1 − 1/Width) instead.
The call:
polar_trans_image(Image, PolarTransImage, Row, Column, Width, Height)
produces the same result as the call:
polar_trans_image_ext(Image, PolarTransImage, Row-0.5, Column-0.5,
6.2831853, 6.2831853/Width, 0, Height-1, Width, Height, ’nearest_neighbor’)
The offset of 0.5 is necessary since polar_trans_image does not do exact nearest neighbor interpola-
tion and the radii and angles can be calculated using the information in the above paragraph and knowing that
polar_trans_image does not handle its arguments inclusively. The start angle is bigger than the end angle to
make polar_trans_image_ext go clockwise, just like polar_trans_image does.
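As a small illustration (center, radii, and output size are only assumed example values), the following call unrolls an annular ring counterclockwise into a rectangular strip:
* Unroll the ring around (240, 320) with radii 80..120 into a 628 x 40
* strip; the smaller radii end up in the upper rows of the output image.
polar_trans_image_ext (Image, PolarTransImage, 240, 320, 0, 6.2831853,
                       80, 120, 628, 40, ’bilinear’)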
Attention
For speed reasons, the domain of the input image is ignored. The output image always has a complete rectangle as
its domain.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. PolarTransImage (output_object) . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Output image.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Row coordinate of the center of the arc.
Default Value : 256
Suggested values : Row ∈ {0, 16, 32, 64, 128, 240, 256, 480, 512}
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Column coordinate of the center of the arc.
Default Value : 256
Suggested values : Column ∈ {0, 16, 32, 64, 128, 256, 320, 512, 640}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to the first column of the output image.
Default Value : 0.0
Suggested values : AngleStart ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853,
12.566370616}
. AngleEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to the last column of the output image.
Default Value : 6.2831853
Suggested values : AngleEnd ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853, 12.566370616}
. RadiusStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to the first row of the output image.
Default Value : 0
Suggested values : RadiusStart ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusStart
. RadiusEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to the last row of the output image.
Default Value : 100
Suggested values : RadiusEnd ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusEnd


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong


Width of the output image.
Default Value : 512
Suggested values : Width ∈ {256, 320, 512, 640, 800, 1024}
Typical range of values : 0 ≤ Width ≤ 32767
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Height of the output image.
Default Value : 512
Suggested values : Height ∈ {240, 256, 480, 512, 600, 1024}
Typical range of values : 0 ≤ Height ≤ 32767
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation method for the transformation.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
Parallelization Information
polar_trans_image_ext is reentrant and automatically parallelized (on tuple level, channel level).
See also
polar_trans_image, polar_trans_image_inv, polar_trans_region,
polar_trans_region_inv, polar_trans_contour_xld, polar_trans_contour_xld_inv
Module
Foundation

polar_trans_image_inv ( const Hobject PolarImage,
Hobject *XYTransImage, double Row, double Column, double AngleStart,
double AngleEnd, double RadiusStart, double RadiusEnd, Hlong Width,
Hlong Height, const char *Interpolation )

T_polar_trans_image_inv ( const Hobject PolarImage,
Hobject *XYTransImage, const Htuple Row, const Htuple Column,
const Htuple AngleStart, const Htuple AngleEnd,
const Htuple RadiusStart, const Htuple RadiusEnd, const Htuple Width,
const Htuple Height, const Htuple Interpolation )

Transform an image in polar coordinates back to cartesian coordinates


polar_trans_image_inv transforms the polar coordinate representation of an image, stored in
PolarImage, back onto an annular arc in cartesian coordinates, described by the radii RadiusStart and
RadiusEnd and the angles AngleStart and AngleEnd with the center point located at (Row, Column). All
of these values can be chosen as real numbers. The overall size of the target image will be Width × Height
pixels.
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’. With
’nearest_neighbor’, the gray value of a pixel in the output image is determined by the gray value of the closest
pixel in the input image. With ’bilinear’, the gray value of a pixel in the output image is determined by bilinear
interpolation of the gray values of the four closest pixels in the input image. The mode ’bilinear’ results in images
of better quality, but is slower than the mode ’nearest_neighbor’.
The angles and radii are inclusive, which means that the first row of the input image will be mapped onto a circle
with a distance of RadiusStart pixels from the specified center and the last row will be mapped onto a circle
of radius RadiusEnd.
polar_trans_image_inv is the inverse function of polar_trans_image_ext.
The call sequence:
polar_trans_image_ext(Image, PolarImage, Row, Column, rad(360), 0, 0,
Radius, Width, Height, Interpolation)
polar_trans_image_inv(PolarImage, XYTransImage, Row, Column, rad(360), 0,
0, Radius, Width, Height, Interpolation)
returns the image Image, restricted to the circle around (Row, Column) with radius Radius, as its output image
XYTransImage.


Parameter

. PolarImage (input_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2


Input image.
. XYTransImage (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Output image.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Row coordinate of the center of the arc.
Default Value : 256
Suggested values : Row ∈ {0, 16, 32, 64, 128, 240, 256, 480, 512}
Typical range of values : 0 ≤ Row ≤ 32767
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Column coordinate of the center of the arc.
Default Value : 256
Suggested values : Column ∈ {0, 16, 32, 64, 128, 256, 320, 512, 640}
Typical range of values : 0 ≤ Column ≤ 32767
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to map the first column of the input image to.
Default Value : 0.0
Suggested values : AngleStart ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853}
. AngleEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to map the last column of the input image to.
Default Value : 6.2831853
Suggested values : AngleEnd ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853}
. RadiusStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to map the first row of the input image to.
Default Value : 0
Suggested values : RadiusStart ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusStart
. RadiusEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to map the last row of the input image to.
Default Value : 100
Suggested values : RadiusEnd ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusEnd
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Width of the output image.
Default Value : 512
Suggested values : Width ∈ {256, 320, 512, 640, 800, 1024}
Typical range of values : 0 ≤ Width ≤ 32767
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Height of the output image.
Default Value : 512
Suggested values : Height ∈ {240, 256, 480, 512, 600, 1024}
Typical range of values : 0 ≤ Height ≤ 32767
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation method for the transformation.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
Parallelization Information
polar_trans_image_inv is reentrant and automatically parallelized (on tuple level, channel level).
See also
polar_trans_image, polar_trans_image_ext, polar_trans_region,
polar_trans_region_inv, polar_trans_contour_xld, polar_trans_contour_xld_inv
Module
Foundation


T_projective_trans_image ( const Hobject Image, Hobject *TransImage,
                           const Htuple HomMat2D, const Htuple Interpolation,
                           const Htuple AdaptImageSize, const Htuple TransformRegion )

Apply a projective transformation to an image.


projective_trans_image applies the projective transformation (homography) determined by the homoge-
neous transformation matrix HomMat2D on the input image Image and stores the result into the output image
TransImage.
If the parameter AdaptImageSize is set to ’false’, TransImage will have the same size as Image; if
AdaptImageSize is ’true’, the output image size will be automatically adapted so that all non-negative points
of the transformed image are visible.
The parameter Interpolation determines which interpolation method is used to determine the gray values
of the output image. For Interpolation = ’nearest_neighbor’, the gray value is determined from the nearest
pixel in the input image. This mode is very fast, but also leads to the typical “jagged” appearance for large
enlargements of the image. For Interpolation = ’bilinear’, the gray values are interpolated bilinearly, leading
to longer runtimes, but also to significantly improved results.
The parameter TransformRegion can be used to determine whether the domain of Image is also transformed.
Since the transformation of the domain costs runtime, this parameter should be used to specify whether this is
desired or not. If TransformRegion is set to ’false’ the domain of the input image is ignored and the complete
image is transformed.
The projective transformation matrix could for example be created using the operator
vector_to_proj_hom_mat2d.
In a homography, the points to be projected are represented by homogeneous vectors of the form (x, y, w). A
Euclidean point can be derived from these as (x’, y’) = (x/w, y/w).
Just like in affine_trans_image, x represents the row coordinate while y represents the column coordinate
in projective_trans_image. With this convention, affine transformations are a special case of projective
transformations in which the last row of HomMat2D is of the form (0, 0, c).
For images of type byte or uint2, the system parameter ’int_zooming’ selects between fast calculation in fixed point
arithmetic (’int_zooming’ = ’true’) and highly accurate calculation in floating point arithmetic (’int_zooming’ =
’false’). Especially for Interpolation = ’bilinear’, however, fixed point calculation can lead to minor gray
value deviations, since the faster algorithm achieves an accuracy of no more than 1/16 pixel. Therefore, when
applying large scale factors, ’int_zooming’ = ’false’ is recommended.
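The following is a minimal C sketch of a typical call (not an original example from this manual): it stores the 3×3 homography row by row in a tuple of 9 values and passes the control parameters as tuples. The tuple helpers create_tuple, set_d, and set_s of the HALCON/C interface are assumed; the matrix values are placeholders, and tuple cleanup is omitted since the ownership conventions of the C interface apply.

/* Sketch: apply a placeholder homography to an image with bilinear interpolation. */
Hobject Image, TransImage;
Htuple  HomMat2D, Interpolation, AdaptImageSize, TransformRegion;
double  mat[9] = { 1.0,    0.1, 0.0,      /* placeholder homography, row by row */
                   0.0,    1.0, 0.0,
                   0.0005, 0.0, 1.0 };
int     i;

read_image(&Image, "affe");

create_tuple(&HomMat2D, 9);               /* 3x3 matrix stored as 9 values */
for (i = 0; i < 9; i++)
  set_d(HomMat2D, mat[i], i);
create_tuple(&Interpolation, 1);   set_s(Interpolation, "bilinear", 0);
create_tuple(&AdaptImageSize, 1);  set_s(AdaptImageSize, "false", 0);
create_tuple(&TransformRegion, 1); set_s(TransformRegion, "false", 0);

T_projective_trans_image(Image, &TransImage, HomMat2D, Interpolation,
                         AdaptImageSize, TransformRegion);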
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / real
Input image.
. TransImage (output_object) . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2 / real
Output image.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation method for the transformation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
. AdaptImageSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Adapt the size of the output image automatically?
Default Value : "false"
List of values : AdaptImageSize ∈ {"true", "false"}
. TransformRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the domain of the input image also be transformed?
Default Value : "false"
List of values : TransformRegion ∈ {"true", "false"}
Parallelization Information
projective_trans_image is reentrant and automatically parallelized (on tuple level, channel level).


Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image_size, projective_trans_contour_xld,
projective_trans_region, projective_trans_point_2d, projective_trans_pixel
Module
Foundation

T_projective_trans_image_size ( const Hobject Image,
                                Hobject *TransImage, const Htuple HomMat2D,
                                const Htuple Interpolation, const Htuple Width, const Htuple Height,
                                const Htuple TransformRegion )

Apply a projective transformation to an image and specify the output image size.
projective_trans_image_size applies the projective transformation (homography) determined by the
homogeneous transformation matrix HomMat2D on the input image Image and stores the result into the output
image TransImage.
TransImage will be clipped at the output dimensions Height×Width. Apart from this,
projective_trans_image_size is identical to its alternative version projective_trans_image.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / real


Input image.
. TransImage (output_object) . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2 / real
Output image.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation method for the transformation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Output image width.
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Output image height.
. TransformRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the domain of the input image also be transformed?
Default Value : "false"
List of values : TransformRegion ∈ {"true", "false"}
Parallelization Information
projective_trans_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_contour_xld, projective_trans_region,
projective_trans_point_2d, projective_trans_pixel
Module
Foundation


rotate_image ( const Hobject Image, Hobject *ImageRotate, double Phi,
               const char *Interpolation )

T_rotate_image ( const Hobject Image, Hobject *ImageRotate,
                 const Htuple Phi, const Htuple Interpolation )

Rotate an image about its center.


rotate_image rotates the image Image counterclockwise by Phi degrees about its center. This operator is
much faster if Phi is a multiple of 90 degrees than the general operator affine_trans_image. For rotations
by 90, 180, and 270 degrees, the region is rotated accordingly. For all other rotations the region is set to the
maximum region, i.e., to the extent of the resulting image. The effect of the parameter Interpolation is the
same as in affine_trans_image. It is ignored for rotations by 90, 180, and 270 degrees. The size of the
resulting image is the same as that of the input image, with the exception of rotations by 90 and 270 degrees, where
the width and height will be exchanged.
Attention
The angle Phi is given in degrees, not in radians.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageRotate (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Rotated image.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Rotation angle.
Default Value : 90
Suggested values : Phi ∈ {90, 180, 270}
Typical range of values : 0 ≤ Phi ≤ 360
Minimum Increment : 0.001
Recommended Increment : 0.2
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
rotate_image(Image,&RotImage,270,"constant");
disp_image(RotImage,WindowHandle);

Parallelization Information
rotate_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image
See also
mirror_image
Module
Foundation

zoom_image_factor ( const Hobject Image, Hobject *ImageZoomed,
                    double ScaleWidth, double ScaleHeight, const char *Interpolation )

T_zoom_image_factor ( const Hobject Image, Hobject *ImageZoomed,
                      const Htuple ScaleWidth, const Htuple ScaleHeight,
                      const Htuple Interpolation )

Zoom an image by a given factor.


zoom_image_factor scales the image Image by a factor of ScaleWidth in width and a factor
ScaleHeight in height. The parameter Interpolation determines the type of interpolation used (see
affine_trans_image).
Attention
If the system parameter ’int_zooming’ is set to ’true’, the internally used integer arithmetic may lead to errors in
the following two cases: First, if zoom_image_factor is used on an uint2 or int2 image with high dynamics
(i.e. images containing values close to the respective limits) in combination with scale factors smaller than 0.5,
then the gray values of the output image may be erroneous. Second, if Interpolation is set to a value other
than ’none’, a large scale factor is applied, and a large output image is obtained, then undefined gray values at the
lower and at the right image border may result. The maximum width Bmax of this border of undefined gray values
can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale factor in one dimension and I is the size of the
output image in the corresponding dimension. In both cases, it is recommended to set ’int_zooming’ to ’false’ via
the operator set_system.
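For instance, a minimal C sketch of this recommendation; it assumes that the simple C binding of set_system accepts the value as a string, and the scale factors are placeholder values:

/* Switch to floating point arithmetic before zooming with large factors. */
Hobject Image, ImageZoomed;

read_image(&Image, "affe");
set_system("int_zooming", "false");
zoom_image_factor(Image, &ImageZoomed, 4.0, 4.0, "weighted");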
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / real


Input image.
. ImageZoomed (output_object) . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / real
Scaled image.
. ScaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double
Scale factor for the width of the image.
Default Value : 0.5
Suggested values : ScaleWidth ∈ {0.25, 0.5, 1.5, 2.0}
Typical range of values : 0.001 ≤ ScaleWidth ≤ 10.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. ScaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double
Scale factor for the height of the image.
Default Value : 0.5
Suggested values : ScaleHeight ∈ {0.25, 0.5, 1.5, 2.0}
Typical range of values : 0.001 ≤ ScaleHeight ≤ 10.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
zoom_image_factor(Image,&ZooImage,0.5,0.5,"constant");
disp_image(ZooImage,WindowHandle);

Parallelization Information
zoom_image_factor is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_size, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation


zoom_image_size ( const Hobject Image, Hobject *ImageZoom, Hlong Width,
                  Hlong Height, const char *Interpolation )

T_zoom_image_size ( const Hobject Image, Hobject *ImageZoom,
                    const Htuple Width, const Htuple Height, const Htuple Interpolation )

Zoom an image to a given size.


zoom_image_size scales the image Image to the size given by Width and Height. The parameter
Interpolation determines the type of interpolation used (see affine_trans_image).
Attention
If the system parameter ’int_zooming’ is set to ’true’, the internally used integer arithmetic may lead to errors in
the following two cases: First, if zoom_image_size is used on an uint2 or int2 image with high dynamics
(i.e. images containing values close to the respective limits) in combination with scale factors (ratio of output
to input image size) smaller than 0.5, then the gray values of the output image may be erroneous. Second, if
Interpolation is set to a value other than ’none’, a large scale factor is applied, and a large output image is
obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. In both cases, it is
recommended to set ’int_zooming’ to ’false’ via the operator set_system.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / real
Input image.
. ImageZoom (output_object) . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / real
Scaled image.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the resulting image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512}
Typical range of values : 2 ≤ Width ≤ 512
Minimum Increment : 1
Recommended Increment : 10
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the resulting image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512}
Typical range of values : 2 ≤ Height ≤ 512
Minimum Increment : 1
Recommended Increment : 10
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
Example

read_image(&Image,"affe");
disp_image(Image,WindowHandle);
zoom_image_size(Image,&ZooImage,200,200,"constant");
disp_image(ZooImage,WindowHandle);

Parallelization Information
zoom_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_factor, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation


3.8 Inpainting

harmonic_interpolation ( const Hobject Image, const Hobject Region,
                         Hobject *InpaintedImage, double Precision )

T_harmonic_interpolation ( const Hobject Image, const Hobject Region,
                           Hobject *InpaintedImage, const Htuple Precision )

Perform a harmonic interpolation on an image region.


The operator harmonic_interpolation reconstructs the destroyed image data of Image inside the region
Region by solving the discrete Laplace equation uxx + uyy = 0 for the corresponding gray value function u.
The unique solution, which exists under Dirichlet boundary conditions given by Image outside of Region, is
returned in InpaintedImage.
This technique is called harmonic interpolation since in function theory the solutions of the Laplace equation are
referred to as harmonic functions.
If Region touches the border of the gray value matrix of Image and thus some Dirichlet boundary values do
not exist, von Neumann boundary conditions are used instead. This means that the gray values are mirrored at the
border of Image. If no Dirichlet boundary values exist at all, a constant image with gray value 0 is returned.
The spatial derivatives are discretized as uxx (x, y) = u(x − 1, y) − 2u(x, y) + u(x + 1, y) and
uyy (x, y) = u(x, y − 1) − 2u(x, y) + u(x, y + 1). The equation is solved by an iterative conjugate gradi-
ent solver, which iteratively reduces the computational error until the maximum norm of its update step becomes
a smaller fraction than Precision of the norm of the input data or a maximum of 1000 iterations is reached.
Precision = 0.01 thus means a relative computational accuracy of 1%.
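A minimal C usage sketch based on the prototype above; the image name and region coordinates are placeholder values taken over from the inpainting examples later in this chapter, and the default Precision of 0.001 is used:

/* Reconstruct the gray values inside a rectangular region by harmonic
   interpolation with a relative computational accuracy of 0.1%. */
Hobject Image, Rectangle, InpaintedImage;

read_image(&Image, "fabrik");
gen_rectangle1(&Rectangle, 270.0, 180.0, 320.0, 230.0);
harmonic_interpolation(Image, Rectangle, &InpaintedImage, 0.001);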
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Precision (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Computational accuracy.
Default Value : 0.001
Suggested values : Precision ∈ {0.0, 0.0001, 0.001, 0.01}
Restriction : Precision ≥ 0.0
Parallelization Information
harmonic_interpolation is reentrant and automatically parallelized (on tuple level).
Alternatives
inpainting_ct, inpainting_aniso, inpainting_mcf, inpainting_texture,
inpainting_ced
References
L.C. Evans; “Partial Differential Equations”; AMS, Providence; 1998.
W. Hackbusch; “Iterative Lösung großer schwachbesetzter Gleichungssysteme”; Teubner, Stuttgart;1991.
Module
Foundation

inpainting_aniso ( const Hobject Image, const Hobject Region,
                   Hobject *InpaintedImage, const char *Mode, double Contrast,
                   double Theta, Hlong Iterations, double Rho )

T_inpainting_aniso ( const Hobject Image, const Hobject Region,
                     Hobject *InpaintedImage, const Htuple Mode, const Htuple Contrast,
                     const Htuple Theta, const Htuple Iterations, const Htuple Rho )

Perform an inpainting by anisotropic diffusion.


The operator inpainting_aniso uses the anisotropic diffusion according to the model of Perona and Malik,
to continue image edges that cross the border of the region Region and to connect them inside of Region.
With this, the structure of the edges in Region will be made consistent with the surrounding image matrix, so that
an occlusion of errors or unwanted objects in the input image, a so called inpainting, is less visible to the human
beholder, since there remain no obvious artefacts or smudges.
Considering the image as a gray value function u, the algorithm is a discretization of the partial differential equation

ut = div(g(|∇u|², c) ∇u)

with the initial value u = u0 defined by Image at a time t0 = 0. The equation is iterated Iterations times in
time steps of length Theta, so that the output image InpaintedImage contains the gray value function at the
time Iterations · Theta.
The primary goal of the anisotropic diffusion, which is also referred to as nonlinear isotropic diffusion, is the
elimination of image noise in constant image patches while preserving the edges in the image. The distinction
between edges and constant patches is achieved using the threshold Contrast on the magnitude of the gray
value differences between adjacent pixels. Contrast is referred to as the contrast parameter and is abbreviated
with the letter c. If the edge information is distributed in an environment of the already existing edges by smoothing
the edge amplitude matrix, it is furthermore possible to continue edges into the computation area Region. The
standard deviation of this smoothing process is determined by the parameter Rho.
The algorithm used is basically the same as in the anisotropic diffusion filter anisotropic_diffusion,
except that here, border treatment is not done by mirroring the gray values at the border of Region. Instead, this
procedure is only applicable on regions that keep a distance of at least 3 pixels to the border of the image matrix
of Image, since the gray values on this band around Region are used to define the boundary conditions for the
respective differential equation and thus assure consistency with the neighborhood of Region. Please note that
the inpainting progress is restricted to those pixels that are included in the ROI of the input image Image. If the
ROI does not include the entire region Region, a band around the intersection of Region and the ROI is used to
define the boundary values.
The result of the diffusion process depends on the gray values in the computation area of the input image Image.
It must be pointed out that already existing image edges are preserved within Region. In particular, this holds
for gray value jumps at the border of Region, which can result for example from a previous inpainting with
constant gray value. If the procedure is to be used for inpainting, it is recommended to apply the operator
harmonic_interpolation first to remove all unwanted edges inside the computation area and to minimize
the gray value difference between adjacent pixels, unless the input image already contains information inside
Region that should be preserved.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:

g1(x, c) = 1 / √(1 + 2x/c²)

Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case however, there remains a slight diffusion even across edges of an amplitude larger than c.

g2(x, c) = 1 / (1 + x/c²)

The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.

g3(x, c) = 1 − exp(−C · c⁸/x⁴)

The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting Mode to
’weickert’, is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c².


Furthermore, the choice of the value ’shock’ is possible for Mode to select a contrast invariant modification of the
anisotropic diffusion. In this variant, the generation of edges is not achieved by variation of the diffusion coefficient
g, but the constant coefficient g = 1 and thus isotropic diffusion is used. Additionally, a shock filter of type

ut = −sgn(∇|∇u|)|∇u|

is applied, which, just like a negative diffusion coefficient, causes a sharpening of the edges, but works independent
of the absolute value of |∇u|. In this mode, Contrast does not have the meaning of a contrast parameter,
but specifies the ratio between the diffusion and the shock filter part applied at each iteration step. Hence, the
value 0 would correspond to pure isotropic diffusion, as used in the operator isotropic_diffusion. The
parameter is scaled in such a way that diffusion and sharpening cancel each other out for Contrast = 1 . A
value Contrast > 1 should not be used, since it would make the algorithm unstable.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of edge sharpening algorithm.
Default Value : "weickert"
List of values : Mode ∈ {"weickert", "perona-malik", "parabolic", "shock"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Contrast parameter.
Default Value : 5.0
Suggested values : Contrast ∈ {0.5, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction : Contrast > 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Step size.
Default Value : 0.5
Suggested values : Theta ∈ {0.5, 1.0, 5.0, 10.0, 30.0, 100.0}
Restriction : Theta > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100, 500}
Restriction : Iterations ≥ 1
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing coefficient for edge information.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 0.1, 0.5, 1.0, 3.0, 10.0}
Restriction : Rho ≥ 0
Example (Syntax: HDevelop)

read_image (Image, ’fabrik’)


gen_rectangle1 (Rectangle, 270, 180, 320, 230)
harmonic_interpolation (Image, Rectangle, InpaintedImage, 0.01)
inpainting_aniso (InpaintedImage, Rectangle, InpaintedImage2,
’perona-malik’, 5.0, 100, 50, 0.5)
dev_display(InpaintedImage2)

Parallelization Information
inpainting_aniso is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_mcf, inpainting_texture,
inpainting_ced


References
J. Weickert; “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation

inpainting_ced ( const Hobject Image, const Hobject Region,
                 Hobject *InpaintedImage, double Sigma, double Rho, double Theta,
                 Hlong Iterations )

T_inpainting_ced ( const Hobject Image, const Hobject Region,
                   Hobject *InpaintedImage, const Htuple Sigma, const Htuple Rho,
                   const Htuple Theta, const Htuple Iterations )

Perform an inpainting by coherence enhancing diffusion.


The operator inpainting_ced performs an anisotropic diffusion process on the region Region of the input
image Image with the objective of completing discontinuous image edges diffusively by increasing the coherence
of the image structures contained in Image and without smoothing these edges perpendicular to their dominating
direction. The mechanism is the same as in the operator coherence_enhancing_diff, which is based on a
discretization of the anisotropic diffusion equation

ut = div(G(u)∇u)

formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation

ut = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|

on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing opera-
tor mean_curvature_flow is a direct application of the mean curvature flow equation. With the opera-
tor inpainting_mcf, it can also be used for image inpainting. The discrete diffusion equation is solved in
Iterations time steps of length Theta, so that the output image InpaintedImage contains the gray value
function at the time Iterations · Theta.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
Similar to the operator inpainting_mcf, the structure of the image data in Region is simplified by smoothing
the level lines of Image. By this, image errors and unwanted objects can be removed from the image, while the
edges in the neighborhood are extended continuously. This procedure is called image inpainting. The objective is
to introduce a minimum amount of artefacts or smoothing effects, so that the image manipulation is least visible to
a human beholder.
While the matrix G is given by

GMCF(u) = I − (1 / |∇u|²) · ∇u (∇u)^T,

in the case of the operator inpainting_mcf, where I denotes the unit matrix, GMCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix


GCED = g1((λ1 − λ2)²) · w1 (w1)^T + g2((λ1 − λ2)²) · w2 (w2)^T

is constructed from the eigenvalues λ1, λ2 and eigenvectors w1, w2 of the resulting intermediate matrix, where the
functions

g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)

were determined empirically and taken from the publication of Weickert.


Hence, the diffusion direction in mean_curvature_flow is only determined by the local direction of the gray
value gradient, while GCED considers the macroscopic structure of the image objects on the scale Rho and the
magnitude of the diffusion in coherence_enhancing_diff depends on how well this structure is defined.
To achieve the highest possible consistency of the newly created edges with the image data from the neighbour-
hood, the gray values are not mirrored at the border of Region to compute the convolution with the smoothing
filter mask of scale Rho on the pixels close to the border, although this would be the common approach for filter
operators. Instead, the existence of gray values on a band of width ceil(3.1 ∗ Rho) + 2 pixels around Region
is presumed and these values are used in the convolution. This means that Region must keep this much dis-
tance to the border of the image matrix Image. By involving the gray values and directional information from
this extended area, it can be achieved that the continuation of the edges is not only continuous, but also smooth,
which means without kinks. Please note that the inpainting progress is restricted to those pixels that are included
in the ROI of the input image Image. If the ROI does not include the entire region Region, a band around the
intersection of Region and the ROI is used to define the boundary values.
To decrease the number of iterations required for attaining a satisfactory result, it may be useful to initialize the gray
value matrix in Region with the harmonic interpolant, a continuous function of minimal curvature, by applying
the operator harmonic_interpolation to Image before calling inpainting_ced.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing for diffusion coefficients.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 1.0, 3.0, 5.0, 10.0, 30.0}
Restriction : Rho ≥ 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1


Example (Syntax: HDevelop)

read_image (Image, ’fabrik’)


gen_rectangle1 (Rectangle, 270, 180, 320, 230)
harmonic_interpolation (Image, Rectangle, InpaintedImage, 0.01)
inpainting_ced (InpaintedImage, Rectangle, InpaintedImage2,
0.5, 3.0, 0.5, 1000)
dev_display(InpaintedImage2)

Parallelization Information
inpainting_ced is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_aniso, inpainting_mcf,
inpainting_texture
References
J. Weickert, V. Hlavac, R. Sara; “Multiscale texture enhancement”; Computer analysis of images and patterns,
Lecture Notes in Computer Science, Vol. 970, pp. 230-237; Springer, Berlin; 1995.
J. Weickert, B. ter Haar Romeny, L. Florack, J. Koenderink, M. Viergever; “A review of nonlinear diffusion
filtering”; Scale-Space Theory in Computer Vision, Lecture Notes in Comp. Science, Vol. 1252, pp. 3-28;
Springer, Berlin; 1997.
Module
Foundation

inpainting_ct ( const Hobject Image, const Hobject Region,
                Hobject *InpaintedImage, double Epsilon, double Kappa, double Sigma,
                double Rho, double ChannelCoefficients )

T_inpainting_ct ( const Hobject Image, const Hobject Region,
                  Hobject *InpaintedImage, const Htuple Epsilon, const Htuple Kappa,
                  const Htuple Sigma, const Htuple Rho,
                  const Htuple ChannelCoefficients )

Perform an inpainting by coherence transport.


The operator inpainting_ct inpaints a missing region Region of an image Image by transporting image
information from the region’s boundary along the coherence direction into this region.
Since this operator’s basic concept is inpainting by continuing broken contour lines, the image content and in-
painting region must be such that this idea makes sense. That is, if a contour line hits the region to inpaint at a
pixel p, there should be some opposite point q where this contour line continues so that the continuation of contour
lines from two opposite sides can succeed. In cases where there is less geometry in the image, a diffusion-based
inpainter, e.g., harmonic_interpolation may yield better results. Alternatively, Kappa can be set to 0.
Pure textures are an extreme situation with little global geometry. In that case, the idea behind this operator will fail to
produce good results (think of a checkerboard with a big region to inpaint relative to the checker fields). For these
kinds of images, a texture-based inpainting, e.g., inpainting_texture, can be used instead.
The operator uses a so-called upwind scheme to assign gray values to the missing pixels, i.e.:

• The order of the pixels to process is given by their Euclidean distance to the boundary of the region to inpaint.
• A new value ui is computed as a weighted average of already known values uj within a disc of radius
Epsilon around the current pixel. The disc is restricted to already known pixels.
• The size of this scheme’s mask depends on Epsilon.

The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus,
Epsilon must be at least 1 for the scheme to work, but should be greater. The maximum value for Epsilon
depends on the gray values that should be transported into the region. Choosing Epsilon = 5 can be used in
many cases.


Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the
weight. This estimated direction is called the coherence direction, and is computed by means of the structure tensor
S.

S = Gρ ∗ (Dv)(Dv)^T

and

v = Gσ ∗ u

where ∗ denotes the convolution, u denotes the gray value image, D the derivative and G Gaussian kernels with
standard deviation σ and ρ. These standard deviations are defined by the operator’s parameters Sigma and Rho.
Sigma should have the size of the noise or unimportant little objects, which are then not considered in the estima-
tion step by the pre-smoothing. Rho gives the size of the window around a pixel that will be used for direction
estimation. The coherence direction c then is given by the eigendirection of S with respect to the minimal eigen-
value λ, i.e.

Sc = λc, |c| = 1

For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be
the same for all channels to propagate information in the same direction. Since the weight depends on the coherence
direction, the common direction is given by the eigendirection of a composite structure tensor. If u1 , ..., un denote
the n channels of the image, the channel structure tensors S1 , ..., Sn are computed and then combined to the
composite structure tensor S.
S = a1·S1 + a2·S2 + . . . + an·Sn

The coefficients ai are passed in ChannelCoefficients, which is a tuple of length n or length 1. If the tuple’s
length is 1, the arithmetic mean is used, i.e., ai = 1/n. If the length of ChannelCoefficients matches the
number of channels, the ai are set to
ai = ChannelCoefficients_i / (ChannelCoefficients_1 + . . . + ChannelCoefficients_n)

in order to get a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or
equal to zero and their sum must be greater than zero. If the tuple’s length is neither 1 nor the number of channels
or the requirement above is not satisfied, the operator returns an error message.
The purpose of using other ChannelCoefficients than the arithmetic mean is to adapt to different color
codes. The coherence direction is geometrical information about the composite image, which is given by high
contrasts such as edges. Thus the more contrast a channel has, the more geometrical information it contains, and
consequently the greater its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587,
0.114] is a good choice.
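As an illustration of passing per-channel weights, the following C sketch (not an original example from this manual) calls the tuple version with the RGB coefficients mentioned above; the tuple helpers create_tuple and set_d of the HALCON/C interface are assumed, the region and scalar parameters reuse the values of the example at the end of this section, and tuple cleanup is omitted.

/* Sketch: inpaint a circular region of an RGB image with luminance-like
   channel weights [0.299, 0.587, 0.114]. */
Hobject Image, Circle, InpaintedImage;
Htuple  Epsilon, Kappa, Sigma, Rho, ChannelCoefficients;

read_image(&Image, "claudia");
gen_circle(&Circle, 333.0, 164.0, 35.0);

create_tuple(&Epsilon, 1);  set_d(Epsilon, 15.0, 0);
create_tuple(&Kappa, 1);    set_d(Kappa, 25.0, 0);
create_tuple(&Sigma, 1);    set_d(Sigma, 1.5, 0);
create_tuple(&Rho, 1);      set_d(Rho, 3.0, 0);
/* One weight per channel; the operator normalizes them to a convex combination. */
create_tuple(&ChannelCoefficients, 3);
set_d(ChannelCoefficients, 0.299, 0);
set_d(ChannelCoefficients, 0.587, 1);
set_d(ChannelCoefficients, 0.114, 2);

T_inpainting_ct(Image, Circle, &InpaintedImage, Epsilon, Kappa, Sigma, Rho,
                ChannelCoefficients);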
The weight in the scheme is the product of a directional component and a distance component. If p is the 2D
coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in the neighborhood (the
disc restricted to already known pixels), the directional component measures the deviation of the vector p − q
from the coherence direction. If the deviation exponentially scaled by β is large, a low directional component is
assigned, whereas if it is small, a large directional component is assigned. β is controlled by Kappa (in percent):

β = 20 ∗ Epsilon ∗ Kappa/100

Kappa defines how important it is to propagate information along the coherence direction, so a large Kappa
yields sharp edges, while a low Kappa allows for more diffusion.
A special case is when Kappa is zero: In this case the directional component of the weight is constant (one).
The direction estimation step is then skipped to save computational costs and the parameters Sigma, Rho,
ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the struc-
tures visible in the image.
The distance component is 1/|p − q|. Consequently, if q is far away from p, a low distance component is assigned,
whereas if it is near to p, a high distance component is assigned.


Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2 / real


Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2 / real
Output image.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Radius of the pixel neighborhood.
Default Value : 5.0
Typical range of values : 1.0 ≤ Epsilon ≤ 20.0
Minimum Increment : 1.0
Recommended Increment : 1.0
. Kappa (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Sharpness parameter in percent.
Default Value : 25.0
Typical range of values : 0.0 ≤ Kappa ≤ 100.0
Minimum Increment : 1.0
Recommended Increment : 1.0
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Pre-smoothing parameter.
Default Value : 1.41
Typical range of values : 0.0 ≤ Sigma ≤ 20.0
Minimum Increment : 0.001
Recommended Increment : 0.01
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Smoothing parameter for the direction estimation.
Default Value : 4.0
Typical range of values : 0.001 ≤ Rho ≤ 20.0
Minimum Increment : 0.001
Recommended Increment : 0.01
. ChannelCoefficients (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double
Channel weights.
Default Value : 1
Example (Syntax: HDevelop)

read_image (Image, ’claudia’)


gen_circle (Circle, 333, 164, 35)
inpainting_ct (Image, Circle, InpaintedImage, 15, 25, 1.5, 3, 1.0)

Parallelization Information
inpainting_ct is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced,
inpainting_texture
References
Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathemati-
cal Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.
Module
Foundation


inpainting_mcf ( const Hobject Image, const Hobject Region,
                 Hobject *InpaintedImage, double Sigma, double Theta,
                 Hlong Iterations )

T_inpainting_mcf ( const Hobject Image, const Hobject Region,
                   Hobject *InpaintedImage, const Htuple Sigma, const Htuple Theta,
                   const Htuple Iterations )

Perform an inpainting by smoothing of level lines.


The operator inpainting_mcf extends the image edges that adjoin the region Region of the input image
Image into the interior of Region and connects their ends by smoothing the level lines of the gray value function
of Image.
This happens through the application of the mean curvature flow or intrinsic heat equation

ut = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|

on the gray value function u defined in the region Region by the input image Image at a time t0 = 0.
The discretized equation is solved in Iterations time steps of length Theta, so that the output image
InpaintedImage contains the gray value function at the time Iterations · Theta.
A stationary state of the mean curvature flow equation, which is also the basis of the operator
mean_curvature_flow, has the special property that the level lines of u all have the curvature 0. This means
that after sufficiently many iterations there are only straight edges left inside the computation area of the output
image InpaintedImage. By this, the structure of objects inside of Region can be simplified, while the re-
maining edges are continuously connected to those of the surrounding image matrix. This allows for a removal of
image errors and unwanted objects in the input image, a so called image inpainting, which is only weakly visible
to a human beholder since there remain no obvious artefacts or smudges.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
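A minimal C usage sketch based on the prototype above; the image name, region, and parameter values are placeholders chosen in the style of the other inpainting examples in this chapter:

/* Smooth the level lines inside a rectangular region (100 iterations with
   time step Theta = 0.5 and derivative smoothing Sigma = 0.5). */
Hobject Image, Rectangle, InpaintedImage;

read_image(&Image, "fabrik");
gen_rectangle1(&Rectangle, 270.0, 180.0, 320.0, 230.0);
inpainting_mcf(Image, Rectangle, &InpaintedImage, 0.5, 0.5, 100);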
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing for derivative operator.
Default Value : 0.5
Suggested values : Sigma ∈ {0.0, 0.1, 0.5, 1.0}
Restriction : Sigma ≥ 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Time step.
Default Value : 0.5
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.5}
Restriction : (0 < Theta) ≤ 0.5
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 5, 10, 20, 50, 100, 500}
Restriction : Iterations ≥ 1
Parallelization Information
inpainting_mcf is reentrant and automatically parallelized (on tuple level).


Alternatives
harmonic_interpolation, inpainting_ct, inpainting_aniso, inpainting_ced,
inpainting_texture
References
M. G. Crandall, P. Lions; “Convergent Difference Schemes for Nonlinear Parabolic Equations and Mean Curvature
Motion”; Numer. Math. 75 pp. 17-41; 1996.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation

inpainting_texture ( const Hobject Image, const Hobject Region,
                     Hobject *InpaintedImage, Hlong MaskSize, Hlong SearchSize,
                     double Anisotropy, const char *PostIteration, double Smoothness )

T_inpainting_texture ( const Hobject Image, const Hobject Region,
                       Hobject *InpaintedImage, const Htuple MaskSize,
                       const Htuple SearchSize, const Htuple Anisotropy,
                       const Htuple PostIteration, const Htuple Smoothness )

Perform an inpainting by texture propagation.


The operator inpainting_texture is used for removing large objects and image errors from the region
Region of the input image Image. Image blocks of side length MaskSize are copied from the intact part of the
image to the border of the computation area, until that area has been filled up with new gray values. This process is
called image inpainting. Hence, the computation area is also referred to as the inpainting area and is reduced with
every inserted rectangle, starting with Region. Let the center of the current block be at the point x. Since x is
always chosen from the border of the inpainting area the current block overlaps with the known or already filled-in
gray values. The gray value correlation with the overlapping part of this block is used to determine which other
image block fits at the position x. As the correlation function, the sum of the squared gray value differences is
used. The image blocks that are taken into account for the correlation, and hence as candidates for the data source
of the next inpainting step, are called comparison blocks. The search area for suitable gray value patterns in which
the centers of the comparison blocks is searched is limited to a square of side length 2 · SearchSize around the
point x.
On the one hand, the order in which the pixels of Region are filled in depends on the size of the overlapping area
and thus the number of pixels available for the correlation. On the other hand, the absolute value of the derivative
of the gray value function tangential to the border of the computation area is also considered. The larger the value
of the parameter Anisotropy is, the more the points in which the derivative is large are preferred. This way it
can be achieved that, e.g., straight lines which are represented by large gradients, are continued through the entire
computation area without being interrupted by the inpainting of image structures from other parts of the border
when the size of the inpainting area becomes small. On the other hand, a large value of Anisotropy also means
that possible phantom edges, i.e., unwanted random structures that have developed during the inpainting process,
are also propagated and the magnitude of those image disturbances is increased.
To confine the formation of such artifacts, the original algorithm can be extended by a post-iteration step that selects
smooth and inconspicuous image patches as data sources for the inpainting. If the parameter PostIteration
is set to ’min_grad’ the sum of the squares of the gray value gradients is minimized on the comparison blocks.
With the value ’min_range_extension’, the growth of the gray value interval of the comparison blocks with respect
to the reference block around the point x is minimized. If PostIteration has the value ’none’ no post-
iteration is performed. The choice of feasible blocks for this minimization process is determined by the parameter
Smoothness, which is an upper limit to the permitted increase of the mean absolute gray value difference
between the comparison blocks and the reference block with respect to the block that was selected by the original
algorithm. With increasing value of Smoothness, the inpainting result becomes smoother and loses structure.
The matching accuracy of the selected comparison blocks decreases. If Smoothness is set to 0, the post-iteration
only considers comparison blocks with an equally high correlation to the reference block.
If the inpainting process cannot be completed because there are points x, for which no complete block of intact gray
value information is contained in the search area of size SearchSize, the remaining pixels keep their initial gray
value and the ROI of the output image InpaintedImage is reduced by the region that could not be processed.


If the structure size of the ROI of Image or of the computation area Region is smaller than MaskSize, the
execution time of the algorithm can increase extremely. Hence, it is recommended to only use clearly structured
input regions.
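A minimal C usage sketch based on the prototype above; the image name and region are placeholders, and the remaining arguments simply use the default values listed below:

/* Fill a rectangular region by copying 9x9 blocks found within a search
   window of side length 2*30 around each border point of the region. */
Hobject Image, Rectangle, InpaintedImage;

read_image(&Image, "fabrik");
gen_rectangle1(&Rectangle, 270.0, 180.0, 320.0, 230.0);
inpainting_texture(Image, Rectangle, &InpaintedImage, 9, 30, 1.0, "none", 1.0);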
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the inpainting blocks.
Default Value : 9
Suggested values : MaskSize ∈ {7, 9, 11, 15, 21}
Restriction : (MaskSize ≥ 3) ∧ odd(MaskSize)
. SearchSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the search window.
Default Value : 30
Suggested values : SearchSize ∈ {15, 30, 50, 100, 1000}
Restriction : (2 · SearchSize) > MaskSize
. Anisotropy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Influence of the edge amplitude on the inpainting order.
Default Value : 1.0
Suggested values : Anisotropy ∈ {0.0, 0.01, 0.1, 0.5, 1.0, 10.0}
Restriction : Anisotropy ≥ 0
. PostIteration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Post-iteration for artifact reduction.
Default Value : "none"
List of values : PostIteration ∈ {"none", "min_grad", "min_range_extension"}
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Gray value tolerance for post-iteration.
Default Value : 1.0
Suggested values : Smoothness ∈ {0.0, 0.1, 0.2, 0.5, 1.0}
Restriction : Smoothness ≥ 0
Parallelization Information
inpainting_texture is reentrant and processed without parallelization.
Module
Foundation

3.9 Lines
bandpass_image ( const Hobject Image, Hobject *ImageBandpass,
const char *FilterType )

T_bandpass_image ( const Hobject Image, Hobject *ImageBandpass,
                   const Htuple FilterType )

Edge extraction using bandpass filters.


bandpass_image serves as an edge filter. It applies a linear filter with the following convolution mask to
Image:

FilterType: ’lines’
In contrast to the edge operator sobel_amp this filter detects lines instead of edges, i.e., two closely adjacent
edges.


0 −2 −2 −2 0
−2 0 3 0 −2
−2 3 12 3 −2
−2 0 3 0 −2
0 −2 −2 −2 0

At the border of the image the gray values are mirrored. Over- and underflows of gray values are clipped. The
resulting images are returned in ImageBandpass.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Input images.
. ImageBandpass (output_object) . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Bandpass-filtered images.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Filter type: currently only ’lines’ is supported.
Default Value : "lines"
List of values : FilterType ∈ {"lines"}
Example

bandpass_image(Image,&LineImage,"lines");
threshold(LineImage,&Lines,60.0,255.0);
skeleton(Lines,&ThinLines);

Result
bandpass_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
bandpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
convol_image, topographic_sketch, texture_laws
See also
highpass_image, gray_skeleton
Module
Foundation

lines_color ( const Hobject Image, Hobject *Lines, double Sigma,
              double Low, double High, const char *ExtractWidth,
              const char *CompleteJunctions )

T_lines_color ( const Hobject Image, Hobject *Lines,
                const Htuple Sigma, const Htuple Low, const Htuple High,
                const Htuple ExtractWidth, const Htuple CompleteJunctions )

Detect color lines and their width.


lines_color extracts color lines from the input image Image and returns the extracted lines as subpixel precise
XLD-contours in Lines. Color lines are defined as dark lines in the amplitude image of the color edge filter (see
edges_color). lines_color always uses the Canny color edge filter. Hence, the required partial derivatives
of the image are always computed by convolution with the respective partial derivatives of the Gaussian smoothing
masks (see derivate_gauss). The corresponding smoothing is determined by the parameter Sigma.


By defining color lines as dark lines in the amplitude image, in contrast to lines_gauss, for single-channel
images no distinction is made whether the lines are darker or brighter than their surroundings. Furthermore,
lines_color also returns staircase lines, i.e., lines for which the gray value of the lines lies between the gray
values in the surrounding area to the left and right sides of the line. In multi-channel images, the above definition
allows each channel to have a different line type. For example, in a three-channel image the first channel may have
a dark line, the second channel a bright line, and the third channel a staircase line at the same position.
If ExtractWidth is set to ’true’, the line width is extracted for each line point. Because the line extractor is
unable to extract certain junctions for differential geometric reasons, it tries to extract these junctions by different
means if CompleteJunctions is set to ’true’.
lines_color links the line points into lines by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss and edges_color_sub_pix. Points with an amplitude
larger than High are immediately accepted as belonging to a line, while points with an amplitude smaller
than Low are rejected. All other points are accepted as lines if they are connected to accepted line points (see
also lines_gauss). Here, amplitude means the line amplitude of the dark line (see lines_gauss and
lines_facet). This value corresponds to the third directional derivative of the smoothed input image in the
direction perpendicular to the line.
For the choice of the thresholds High and Low one has to keep in mind that the third directional derivative depends
on the amplitude and width of the line as well as the choice of Sigma. The value of the third derivative depends
linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the line there
is an inverse dependence: The wider the line is, the smaller the response gets. This holds analogously for the
dependence on Sigma: The larger Sigma is chosen, the smaller the third derivative will be. This means that
for larger smoothing correspondingly smaller values for High and Low should be chosen.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_color defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line (oriented such that the normal vectors point to
the right side of the line as the line is traversed from start to end point; the angles are given with
respect to the row axis of the image.)
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’, additionally the following attributes are defined:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
All these attributes can be queried via the operator get_contour_attrib_xld.
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value Sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected. If it is expected that staircase lines are present in at least one channel, and if such lines should
be extracted, in addition to the above restriction, Sigma ≤ w should be selected. This is necessary because
staircase lines turn into normal step edges for large amounts of smoothing, and therefore no longer appear as dark
lines in the amplitude image of the color edge filter.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1


. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. ExtractWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should the line width be extracted?
Default Value : "true"
List of values : ExtractWidth ∈ {"true", "false"}
. CompleteJunctions (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should junctions be added where they cannot be extracted?
Default Value : "true"
List of values : CompleteJunctions ∈ {"true", "false"}
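Example

The following sketch is not part of the original manual; it only illustrates a typical call sequence with the
default thresholds. The image file name is a placeholder, and gen_polygons_xld is the successor suggested
below.

/* Extract color lines from a multi-channel image and approximate them by polygons. */
read_image(&Image,"color_image");   /* placeholder file name */
lines_color(Image,&Lines,1.5,3.0,8.0,"true","true");
gen_polygons_xld(Lines,&Polygons,"ramer",2.0);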
Result
lines_color returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If the
input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
lines_color is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_gauss, lines_facet
See also
edges_color, edges_color_sub_pix
References
C. Steger: “Subpixel-Precise Extraction of Lines and Edges”; International Archives of Photogrammetry and
Remote Sensing, vol. XXXIII, part B3; pp. 141-156; 2000.
C. Steger: “An Unbiased Detector of Curvilinear Structures”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; vol. 20, no. 2; pp. 113-125; 1998.
C. Steger: “Unbiased Extraction of Curvilinear Structures from 2D and 3D Images”; Herbert Utz Verlag, München;
1998.
Module
2D Metrology

lines_facet ( const Hobject Image, Hobject *Lines, Hlong MaskSize,
              double Low, double High, const char *LightDark )

T_lines_facet ( const Hobject Image, Hobject *Lines,
                const Htuple MaskSize, const Htuple Low, const Htuple High,
                const Htuple LightDark )

Detection of lines using the facet model.
The operator lines_facet can be used to extract lines (curvilinear structures) from the image Image. The
extracted lines are returned in Lines as sub-pixel precise XLD-contours. The parameter LightDark determines,
whether bright or dark lines are extracted.


The extraction is done by using the facet model, i.e., a least squares fit, to determine the parameters of a quadratic
polynomial in x and y for each point of the image. The parameter MaskSize determines the size of the window
used for the least squares fit. Larger values of MaskSize lead to a larger smoothing of the image, but can
lead to worse localization of the line. The parameters of the polynomial are used to calculate the line direction
for each pixel. Pixels which exhibit a local maximum in the second directional derivative perpendicular to the
line direction are marked as line points. The line points found in this manner are then linked to contours. This
is done by immediately accepting line points that have a second derivative larger than High. Points that have
a second derivative smaller than Low are rejected. All other line points are accepted if they are connected to
accepted points by a connected path. This is similar to a hysteresis threshold operation with infinite path length
(see hysteresis_threshold). However, this function is not used internally since it does not allow the
extraction of sub-pixel precise contours.
The guidelines on how to select the thresholds given in the description of lines_gauss also hold for this operator. A value
of Sigma = 1.5 there roughly corresponds to a MaskSize of 5 here.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_facet defines the following attributes for each line point:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
These attributes can be queried via the operator get_contour_attrib_xld.
Attention
The smaller the filter size MaskSize is chosen, the more short, fragmented lines will be extracted. This can lead
to considerably longer execution times.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the facet model mask.
Default Value : 5
List of values : MaskSize ∈ {3, 5, 7, 9, 11}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Extract bright or dark lines.
Default Value : "light"
List of values : LightDark ∈ {"dark", "light"}
Example

/* Detection of lines in an aerial image */


read_image(&Image,"mreut4_3");
lines_facet(Image:&Lines:5,3,8,"light");
disp_xld(Lines,WindowHandle);


Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ MaskSize).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_facet requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_facet returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
lines_facet is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_gauss
See also
bandpass_image, dyn_threshold, topographic_sketch
References
A. Busch: “Fast Recognition of Lines in Digital Images Without User-Supplied Parameters”. In H. Ebner, C.
Heipke, K. Eder, eds., “Spatial Information from Digital Photogrammetry and Computer Vision”, International
Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3/1, pp. 91-97, 1994.
Module
2D Metrology

lines_gauss ( const Hobject Image, Hobject *Lines, double Sigma,
              double Low, double High, const char *LightDark,
              const char *ExtractWidth, const char *CorrectPositions,
              const char *CompleteJunctions )

T_lines_gauss ( const Hobject Image, Hobject *Lines,
                const Htuple Sigma, const Htuple Low, const Htuple High,
                const Htuple LightDark, const Htuple ExtractWidth,
                const Htuple CorrectPositions, const Htuple CompleteJunctions )

Detect lines and their width.
The operator lines_gauss can be used to extract lines (curvilinear structures) from the image Image. The
extracted lines are returned in Lines as sub-pixel precise XLD-contours. The parameter LightDark deter-
mines, whether bright or dark lines are extracted. If ExtractWidth is set to ’true’ the line width is extracted
for each line point. If CorrectPositions is set to ’true’, lines_gauss compensates the effect of asym-
metrical lines (lines having different contrast on each side of the line), and corrects the position and width of
the line. This parameter is only meaningful if ExtractWidth=’true’. Because the line extractor is unable to
extract certain junctions for differential geometric reasons, it tries to extract these junctions by different means if
CompleteJunctions is set to ’true’.
The extraction is done by using partial derivatives of a Gaussian smoothing kernel to determine the parameters
of a quadratic polynomial in x and y for each point of the image. The parameter Sigma determines the amount
of smoothing to be performed. Larger values of Sigma lead to a larger smoothing of the image, but can lead
to worse localization of the line. Generally, the localization will be much better than that of lines returned by
lines_facet with comparable parameters. The parameters of the polynomial are used to calculate the line
direction for each pixel. Pixels which exhibit a local maximum in the second directional derivative perpendicular
to the line direction are marked as line points. The line points found in this manner are then linked to contours.
This is done by immediately accepting line points that have a second derivative larger than High. Points that
have a second derivative smaller than Low are rejected. All other line points are accepted if they are connected to
accepted points by a connected path. This is similar to a hysteresis threshold operation with infinite path length (see
hysteresis_threshold). However, this function is not used internally since it does not allow the extraction
of sub-pixel precise contours.


For the choice of the thresholds High and Low one has to keep in mind that the second directional derivative
depends on the amplitude and width of the line as well as the choice of Sigma. The value of the second derivative
depends linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the
line there is an approximately inverse exponential dependence: The wider the line is, the smaller the response
gets. This holds analogously for the dependence on Sigma: The larger Sigma is chosen, the smaller the second
derivative will be. This means that for larger smoothing correspondingly smaller values for High and Low have
to be chosen. Two examples help to illustrate this: If 5 pixel wide lines with an amplitude larger than 100 are to be
extracted from an image with a smoothing of Sigma = 1.5, High should be chosen larger than 14. If, on the other
hand, 10 pixel wide lines with an amplitude larger than 100 and a Sigma = 3 are to be detected, High should be
chosen larger than 3.5. For the choice of Low values between 0.25 High and 0.5 High are appropriate.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_gauss defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’ and CorrectPositions to ’false’, the following attributes are defined in
addition to the above ones:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
Finally, if CorrectPositions was set to ’true’, additionally the following attributes are defined:
’asymmetry’ The asymmetry of the line point
’contrast’ The contrast of the line point
Here, the asymmetry is positive if the asymmetric part, i.e., the part with the weaker gradient, is on the right side of
the line, while it is negative if the asymmetric part is on the left side of the line. All these attributes can be queried
via the operator get_contour_attrib_xld.
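For illustration only (not part of the original manual), the following sketch queries the ’width_left’ attribute
of the first extracted contour. It assumes that Lines was returned by a previous lines_gauss call with
ExtractWidth=’true’, that the tuple version T_get_contour_attrib_xld follows the usual T_ calling convention,
and that variable declarations are omitted:

/* Select the first contour and read its left line width per contour point. */
select_obj(Lines,&Line1,1);
create_tuple(&AttribName,1);
set_s(AttribName,"width_left",0);
T_get_contour_attrib_xld(Line1,AttribName,&WidthLeft);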
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. As the lowest allowable value Sigma ≥ w/2.5 must be
selected. If, for example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)


. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Extract bright or dark lines.
Default Value : "light"
List of values : LightDark ∈ {"dark", "light"}
. ExtractWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should the line width be extracted?
Default Value : "true"
List of values : ExtractWidth ∈ {"true", "false"}
. CorrectPositions (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should the line position and width be corrected?
Default Value : "true"
List of values : CorrectPositions ∈ {"true", "false"}
. CompleteJunctions (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should junctions be added where they cannot be extracted?
Default Value : "true"
List of values : CompleteJunctions ∈ {"true", "false"}
Example

/* Detection of lines in an aerial image */


read_image(&Image,"mreut4_3");
lines_gauss(Image:&Lines:1.5,3,8,"light","true","true","true");
disp_xld(Lines,WindowHandle);

Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_gauss requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_gauss returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
lines_gauss is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_facet
See also
bandpass_image, dyn_threshold, topographic_sketch
References
C. Steger: “Extracting Curvilinear Structures: A Differential Geometric Approach”. In B. Buxton, R. Cipolla, eds.,
“Fourth European Conference on Computer Vision”, Lecture Notes in Computer Science, Volume 1064, Springer
Verlag, pp. 630-641, 1996.
C. Steger: “Extraction of Curved Lines from Images”. In “13th International Conference on Pattern Recognition”,
Volume II, pp. 251-255, 1996.
C. Steger: “An Unbiased Detector of Curvilinear Structures”. Technical Report FGBV-96-03, Forschungsgruppe
Bildverstehen (FG BV), Informatik IX, Technische Universität München, July 1996.
Module
2D Metrology


3.10 Match
exhaustive_match ( const Hobject Image, const Hobject RegionOfInterest,
                   const Hobject ImageTemplate, Hobject *ImageMatch, const char *Mode )

T_exhaustive_match ( const Hobject Image,
                     const Hobject RegionOfInterest, const Hobject ImageTemplate,
                     Hobject *ImageMatch, const Htuple Mode )

Matching of a template and an image.
The operator exhaustive_match matches ImageTemplate and Image within the region of interest
RegionOfInterest. Hereby the ImageTemplate will be moved over all points of Image which lie within
the RegionOfInterest. With regard to the parameter Mode, a matching criterion will be calculated. The
result values will be stored in ImageMatch.
The following matching criteria (Mode) are available:
’norm_correlation’

      ImageMatch[i][j] = 255 · ( Σ_{u,v} Image[i−u][j−v] · ImageTemplate[l−u][c−v] )
                         / sqrt( Σ_{u,v} (Image[i−u][j−v])² · Σ_{u,v} (ImageTemplate[l−u][c−v])² )

whereby X[i][j] indicates the grayvalue in the ith column and jth row of the image X. (l, c) is the centre of
the region of ImageTemplate. u and v are chosen so that all points of the template will be reached; i, j
run across the RegionOfInterest. At the image frame only those parts of ImageTemplate will be
considered which lie inside the image (i.e., u and v will be restricted correspondingly). Range of values: 0 -
255 (best fit).
’dfd’ Calculating the average “displaced frame difference”:

      ImageMatch[i][j] = ( Σ_{u,v} |Image[i−u][j−v] − ImageTemplate[l−u][c−v]| ) / AREA(ImageTemplate)

The terms are the same as in ’norm_correlation’. AREA(X) means the area of the region X. Range of values:
0 (best fit) - 255.

Calculating the normalized correlation as well as the “displaced frame difference” is (with regard to the
area of ImageTemplate) very time consuming. Therefore it is important to restrict the input region
(RegionOfInterest) if possible, i.e., to apply the filter only in a very confined “region of interest”.
As far as quality is concerned, both modes return comparable results, whereby the mode ’dfd’ is faster by a factor
of about 3.5.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. RegionOfInterest (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area to be searched in the input image.
. ImageTemplate (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
This area will be “matched” by Image within the RegionOfInterest.
. ImageMatch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Result image: values of the matching criterion.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired matching criterion.
Default Value : "dfd"
List of values : Mode ∈ {"norm_correlation", "dfd"}
Example

read_image(&Image,"monkey");
disp_image(Image,WindowHandle);


/* mark one eye */


draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
reduce_domain(Image,Rectangle,&Template);
exhaustive_match(Image,Image,Template,&ImageMatch,"dfd");
invert_image(ImageMatch,&ImageInvert);
local_max(ImageInvert,&Maxima);
union1(Maxima,&AllMaxima);
add_channels(AllMaxima,ImageInvert,&FitMaxima);
threshold(FitMaxima,&BestFit,230.0,255.0);
disp_region(BestFit,WindowHandle);

Result
If the parameter values are correct, the operator exhaustive_match returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
local_max, threshold
Alternatives
exhaustive_match_mg
Module
Foundation

exhaustive_match_mg ( const Hobject Image,
                      const Hobject ImageTemplate, Hobject *ImageMatch, const char *Mode,
                      Hlong Level, Hlong Threshold )

T_exhaustive_match_mg ( const Hobject Image,
                        const Hobject ImageTemplate, Hobject *ImageMatch, const Htuple Mode,
                        const Htuple Level, const Htuple Threshold )

Matching a template and an image in a resolution pyramid.
The operator exhaustive_match_mg is an additional option for the operator exhaustive_match per-
forming a matching of the image Image and the template ImageTemplate. Hereby ImageTemplate will be
moved over all points of the region of Image, a matching criterion will be calculated with regard to the parameter
Mode and the result values will be stored in ImageMatch.
In images filtered this way, normally only those areas with good matching results are of interest. The
size of the area to be searched, i.e., the region of the input image Image, decisively determines the runtime of
the matching filter. Therefore it is recommended to first use exhaustive_match_mg with reduced image
resolution in order to determine a “region of interest” in which good matching results can be expected; then the
real matching (see also exhaustive_match) is executed with normal resolution in this restricted area only.
To this end, the Gauss pyramids of Image and ImageTemplate are built (in particular, the corresponding
regions are transformed as well). Then, on each level of the resolution pyramids - starting with the start level
Level - the matching inside the current “region of interest” is executed, whereby the “region of interest” on
the start level is equivalent to the region of the input image Image. After the filtering, a new “region of interest” is
determined with the help of a threshold operation and is transformed to the next resolution level:
threshold(..0,Threshold..), if Mode = ’dfd’
threshold(..Threshold,255..), if Mode = ’norm_correlation’
The final matching in the determined “region of interest” will then be calculated with the highest resolution (Level
0). The output image ImageMatch includes the corresponding filter result and the final “region of interest”, which
is determined on the result image with the help of a threshold operation.


The operator exhaustive_match_mg therefore is not simply a filter, but can also be considered as a member
of the class of region transformations.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. ImageTemplate (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
The domain of this image will be matched with Image.
. ImageMatch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Result image and result region: values of the matching criterion within the determined “region of interest”.
Number of elements : ImageMatch = Image
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired matching criterion.
Default Value : "dfd"
List of values : Mode ∈ {"norm_correlation", "dfd"}
. Level (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Startlevel in the resolution pyramid (highest resolution: Level 0).
Default Value : 1
List of values : Level ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
Restriction : (Level < ld(width(Image))) ∧ (Level < ld(height(Image)))
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Threshold to determine the “region of interest”.
Default Value : 30
Suggested values : Threshold ∈ {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95,
100, 105, 110, 115, 120, 125, 130, 135, 140, 145, 150, 155, 160, 165, 170, 175, 180, 185, 190, 195, 200, 205,
210, 215, 220, 225, 230, 235, 240, 245, 250}
Typical range of values : 0 ≤ Threshold ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Example

read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
reduce_domain(Image,Rectangle,&Template);
exhaustive_match_mg(Image,Template,&ImageMatch,"dfd",1,30);
invert_image(ImageMatch,&ImageInvert);
local_max(ImageInvert,&BestFit);
disp_region(BestFit,WindowHandle);

Result
If the parameter values are correct, the operator exhaustive_match_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
threshold, local_max
Alternatives
exhaustive_match
See also
gen_gauss_pyramid
Module
Foundation


gen_gauss_pyramid ( const Hobject Image, Hobject *ImagePyramid,
                    const char *Mode, double Scale )

T_gen_gauss_pyramid ( const Hobject Image, Hobject *ImagePyramid,
                      const Htuple Mode, const Htuple Scale )

Calculating a Gauss pyramid.
The operator gen_gauss_pyramid calculates a pyramid of scaled down images. The scale by which the next
image will be reduced is determined by the parameter Scale. For instance, a value of 0.5 for Scale will shorten
the edge length of Image by 50%. This is exactly equivalent to the “normal” pyramid.
The parameter Mode determines the way of averaging. For a more detailed description concerning this parameter
see also affine_trans_image. If Scale is equal to 0.5, the additional modes ’min’ and ’max’ are available.
In this case the minimum or the maximum of the four neighboring pixels is selected.
Please note that each level will be returned as an individual image, i.e., as one iconic object, with one matrix and
its own domain. Single or multiple levels can be selected by using select_obj or copy_obj, respectively.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Input image.
. ImagePyramid (output_object) . . . . . . . . . . . . (multichannel-)image-array ; Hobject * : byte / uint2 / real
Output images.
Number of elements : ImagePyramid > Image
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Kind of filtermask.
Default Value : "weighted"
List of values : Mode ∈ {"none", "constant", "weighted", "min", "max"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Factor for scaling down.
Default Value : 0.5
Suggested values : Scale ∈ {0.2, 0.3, 0.4, 0.5, 0.6}
Typical range of values : 0.1 ≤ Scale ≤ 0.9
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (0.1 < Scale) ∧ (Scale < 0.9)
Example

gen_gauss_pyramid(Image,&Pyramid,"weighted",0.5);
count_obj(Pyramid,&num);
for (i=1; i<=num; i++)
{
  select_obj(Pyramid,&Single,i);
  disp_image(Single,WindowHandle);
  clear_obj(Single);
}

Parallelization Information
gen_gauss_pyramid is reentrant and automatically parallelized (on channel level).
Possible Successors
image_to_channels, count_obj, select_obj, copy_obj
Alternatives
zoom_image_size, zoom_image_factor
See also
affine_trans_image
Module
Foundation


monotony ( const Hobject Image, Hobject *ImageMonotony )

T_monotony ( const Hobject Image, Hobject *ImageMonotony )

Calculating the monotony operation.
The operator monotony calculates the monotony operator. Thereby the points which are strictly smaller than the
current grayvalue will be counted in the 8-neighborhood. This number will be entered into the output image.
If there is a strict maximum, the value 8 is returned; in case of a minimum or a plateau, the value 0 will be returned.
A ridge or a slope will return the corresponding intermediate values.
The monotony operator is often used to prepare matching operations as it is invariant with regard to lightness
modifications.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageMonotony (output_object) . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Result of the monotony operator.
Number of elements : ImageMonotony = Image
Example

/* searching the strict maximums */


gauss_image(Image,&Gauss,5);
monotony(Gauss,&Monotony);
threshold(Monotony,&Maxima,8.0,8.0);

Parallelization Information
monotony is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, median_image, mean_image, smooth_image,
invert_image
Possible Successors
threshold, exhaustive_match, disp_image
Alternatives
local_max, topographic_sketch, corner_response
Module
Foundation

3.11 Misc
convol_image ( const Hobject Image, Hobject *ImageResult,
               const char *FilterMask, const char *Margin )

T_convol_image ( const Hobject Image, Hobject *ImageResult,
                 const Htuple FilterMask, const Htuple Margin )

Convolve an image with an arbitrary filter mask.
convol_image convolves the input image Image with an arbitrary linear filter. The corresponding filter mask,
which is given in FilterMask can be generated either from a file or a tuple. Several options for the treatment at
the image’s borders can be chosen (Margin):

gray value Pixels outside of the image edges are assumed to be constant (with the indicated gray value).
’continued’ Continuation of edge pixels.
’cyclic’ Cyclic continuation of image edges.
’mirrored’ Reflection of pixels at the image edges.


All image points are convolved with the filter mask. If an overflow or underflow occurs, the resulting gray value
is clipped. Hence, if filters that result in negative output values are used (e.g., derivative filters) the input image
should be of type int2. If a filename is given in FilterMask the filter mask is read from a text file with the
following structure:
<Mask size>
<Inverse weight of the mask>
<Matrix>
The first line contains the size of the filter mask, given as two numbers separated by white space (e.g., 3 3 for
3 × 3). Here, the first number defines the height of the filter mask, while the second number defines its width. The
next line contains the inverse weight of the mask, i.e., the number by which the convolution of a particular image
point is divided. The remaining lines contain the filter mask as integer numbers (separated by white space), one
line of the mask per line in the file. The file must have the extension “.fil”. This extension must not be passed to
the operator. If the filter mask is to be computed from a tuple, the tuple given in FilterMask must also satisfy
the structure described above. However, in this case the line feed is omitted.
For example, let us assume we want to use the following filter mask:

           1  2  1
    1/16 · 2  4  2
           1  2  1

If the filter mask should be generated from a file, then the file should look like this:

3 3
16
1 2 1
2 4 2
1 2 1

In contrast, if the filter mask should be generated from a tuple, then the following tuple must be passed in
FilterMask:
[3,3,16,1,2,1,2,4,2,1,2,1]
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Image to be convolved.
. ImageResult (output_object) . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2
Convolved result image.
. FilterMask (input_control) . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char * / Hlong
Filter mask as file name or tuple.
Default Value : "sobel"
Suggested values : FilterMask ∈ {"sobel", "laplace4", "lowpas_3_3"}
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
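Example

The following sketch is not part of the original manual; it merely illustrates how the tuple form of the filter
mask described above can be passed via T_convol_image (the image file name is used for illustration only).

/* Convolve with the 3x3 mask given as the tuple [3,3,16,1,2,1,2,4,2,1,2,1]. */
Htuple mask, margin;
Hlong  vals[12] = {3,3,16,1,2,1,2,4,2,1,2,1};
int    i;

read_image(&Image,"fabrik");
create_tuple(&mask,12);
for (i=0; i<12; i++)
  set_i(mask,vals[i],i);        /* fill the mask tuple element by element */
create_tuple(&margin,1);
set_s(margin,"mirrored",0);     /* border treatment */
T_convol_image(Image,&ImageResult,mask,margin);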
Parallelization Information
convol_image is reentrant and automatically parallelized (on tuple level, channel level).
Module
Foundation

expand_domain_gray ( const Hobject InputImage, Hobject *ExpandedImage,
                     Hlong ExpansionRange )

T_expand_domain_gray ( const Hobject InputImage,
                       Hobject *ExpandedImage, const Htuple ExpansionRange )

Expand the domain of an image and set the gray values in the expanded domain.


expand_domain_gray expands the border gray values of the domain outwards. The width of the expansion
is set by the parameter ExpansionRange. All filters in HALCON use gray values of the pixels outside the
domain depending on the filter width. This may lead to undesirable side effects especially in the border region
of the domain. For example, if the foreground (domain) and the background of the image differ strongly in
brightness, the result of a filter operation may lead to undesired darkening or brightening at the border of the
domain. In order to avoid this drawback, the domain is expanded by expand_domain_gray in a preliminary
stage, copying the gray values of the border pixels to the outside of the domain. In addition, the domain itself is
also expanded to reflect the newly set pixels. Therefore, in many cases it is reasonable to reduce the domain again
(reduce_domain or change_domain) after using expand_domain_gray and call the filter operation
afterwards. ExpansionRange should be set to half of the filter width.
Parameter
. InputImage (input_object) . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image with domain to be expanded.
. ExpandedImage (output_object) . . . . . . . . image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4 / real
Output image with new gray values in the expanded domain.
. ExpansionRange (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Radius of the gray value expansion, measured in pixels.
Default Value : 2
Suggested values : ExpansionRange ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16}
Restriction : ExpansionRange ≥ 1
Example (Syntax: HDevelop)

read_image(Fabrik, ’fabrik.tif’);
gen_rectangle2(Rectangle_Label,243,320,-1.55,62,28);
reduce_domain(Fabrik, Rectangle_Label, Fabrik_Label);
/* Character extraction without gray value expansion: */
mean_image(Fabrik_Label,Label_Mean_normal,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_normal,Characters_normal,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_normal);
/* The characters in the border region are not extracted ! */
stop();
/* Character extraction with gray value expansion: */
expand_domain_gray(Fabrik_Label, Label_expanded,15);
reduce_domain(Label_expanded,Rectangle_Label, Label_expanded_reduced);
mean_image(Label_expanded_reduced,Label_Mean_expanded,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_expanded,Characters_expanded,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_expanded);
/* Now, even in the border region the characters are recognized */
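
The same core sequence in C syntax (a sketch, not part of the original manual; variable declarations are
omitted and the image name "fabrik" is used for illustration only):

read_image(&Fabrik,"fabrik");
gen_rectangle2(&Rectangle,243.0,320.0,-1.55,62.0,28.0);
reduce_domain(Fabrik,Rectangle,&FabrikLabel);
/* expand the reduced domain before smoothing to avoid border artifacts */
expand_domain_gray(FabrikLabel,&LabelExpanded,15);
reduce_domain(LabelExpanded,Rectangle,&LabelReduced);
mean_image(LabelReduced,&LabelMean,31,31);
dyn_threshold(FabrikLabel,LabelMean,&Characters,10.0,"dark");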

Complexity
Let L the perimeter of the domain. Then the runtime complexity is approximately O(L) ∗ ExpansionRange.
Result
expand_domain_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
expand_domain_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
reduce_domain
Possible Successors
reduce_domain, mean_image, dyn_threshold
See also
reduce_domain, mean_image
Module
Foundation


gray_inside ( const Hobject Image, Hobject *ImageDist )

T_gray_inside ( const Hobject Image, Hobject *ImageDist )

Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
gray_inside determines the “cheapest” path to the image border for each point in the image, i.e., the path on
which the lowest gray values have to be overcome. The resulting image contains the difference of the gray value
of the particular point and the maximum gray value on the path. Bright areas in the result image therefore signify
that these areas (which are typically dark in the original image) are surrounded by bright areas. Dark areas in the
result image signify that there are only small gray value differences between them and the image border (which
doesn’t mean that they are surrounded by dark areas; a small “gap” of dark values suffices). The value 0 (black) in
the result image signifies that only darker or equally bright pixels exist on the path to the image border.
The operator is implemented by first segmenting the image into basins and watersheds using the watersheds
operator. If the image is regarded as a gray value mountain range, basins are the places where water accumulates
and the mountain ridges are the watersheds. Then, the watersheds are distributed to adjacent basins, thus leaving
only basins. The border of the domain (region) of the original image is now searched for the lowest gray value,
and the region in which it resides is given its result values. If the lowest gray value resides on the image border,
all result values can be calculated immediately using the gray value differences to the darkest point. If the smallest
found gray value lies in the interior of a basin, the lowest possible gray value has to be determined from the already
processed adjacent basins in order to compute the new values. An 8-neighborhood is used to determine adjacency.
The found region is subtracted from the regions yet to process, and the whole process is repeated. Thus, the image
is “stripped” from the outside.
Analogously to watersheds, it is advisable to apply a smoothing operation before calling watersheds, e.g.,
binomial_filter or gauss_image, in order to reduce the amount of regions that result from the watershed
algorithm, and thus to speed up the processing time.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image being processed.
. ImageDist (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int2
Result image.
Example

read_image(&Image,"coin");
gauss_image(Image,&GaussImage,11);
open_window (0,0,512,512,0,"visible","",&WindowHandle);
gray_inside(GaussImage,&Result);
disp_image(Result,WindowHandle);

Result
gray_inside always returns H_MSG_TRUE.
Parallelization Information
gray_inside is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image, median_image
Possible Successors
select_shape, area_center, count_obj
See also
watersheds
Module
Foundation


gray_skeleton ( const Hobject Image, Hobject *GraySkeleton )

T_gray_skeleton ( const Hobject Image, Hobject *GraySkeleton )

Thinning of gray value images.
gray_skeleton applies a gray value thinning operation to the input image Image. Figuratively, the gray
value “mountain range” is reduced to its ridge lines by setting the gray value of “hillsides” to the gray value at
the corresponding valley bottom. The resulting ridge lines are at most two pixels wide. This operator is espe-
cially useful for thinning edge images, and is thus an alternative to nonmax_suppression_amp. In contrast
to nonmax_suppression_amp, gray_skeleton preserves contours, but is much slower. In contrast to
skeleton, this operator changes the gray values of an image while leaving its region unchanged.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image to be thinned.
. GraySkeleton (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Thinned image.
Example

/* Seeking leafs of a tree in an aerial picture: */


read_image(&Image,"wald1");
gray_skeleton(Image,&Skelett);
mean_image(Skelett,&MeanSkelett,7,7);
dyn_threshold(Skelett,MeanSkelett,&Leafs,3.0,"light");

Result
gray_skeleton returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_skeleton is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
mean_image
Alternatives
nonmax_suppression_amp, nonmax_suppression_dir, local_max
See also
skeleton, gray_dilation_rect
Module
Foundation

T_lut_trans ( const Hobject Image, Hobject *ImageResult, const Htuple Lut )

Transform an image with a gray-value look-up-table.
lut_trans transforms an image Image by using a gray value look-up-table Lut. This table acts as a transfor-
mation function. In the case of byte-images, Lut has to be a tuple of length 256. In the case of int2-images, Lut
has to be a tuple of length 256 <= length <= 65536. If the length of the Lut is <= 32768, the transformation is
applied to the positive gray values only, i.e., the first element of the Lut specifies the new gray value for the gray
value 0. If the Lut is longer than 32768, exactly 65536 must be passed. In this case, the positive and negative gray
values are transformed. In this case, the first element indicates the new gray value for the gray value -32768 of the
input image, while the last element of the tuple indicates the new gray value for the gray value 32767. In all cases,
the gray values of values outside the range of Lut are set to 0. In the case of uint2-images, Lut has to be a tuple
of length 256 <= length <= 65536. Gray values outside the range of Lut are set to 0.


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Image whose gray values are to be transformed.
. ImageResult (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Transformed image.
. Lut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Table containing the transformation.
Example

/* To get the inverse of an image: */


Htuple lut;
read_image(&Image,"wald1");
create_tuple(&lut,256);
for (i=0; i<256; i++)
  set_i(lut,255-i,i);
T_lut_trans(Image,&Invers,lut);
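
A second sketch (not part of the original manual) for an int2 image, assuming an image Int2Image is available.
A Lut of length 32768 covers all positive int2 gray values, so, as described above, negative gray values are
set to 0.

/* Clip the positive gray values of an int2 image to the range [0,1000]. */
Htuple lut2;
Hlong  j;
create_tuple(&lut2,32768);
for (j=0; j<32768; j++)
  set_i(lut2,(j<1000) ? j : 1000,j);
T_lut_trans(Int2Image,&Clipped,lut2);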

Result
The operator lut_trans returns the value H_MSG_TRUE if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
lut_trans is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation

symmetry ( const Hobject Image, Hobject *ImageSymmetry, Hlong MaskSize,
           double Direction, double Exponent )

T_symmetry ( const Hobject Image, Hobject *ImageSymmetry,
             const Htuple MaskSize, const Htuple Direction, const Htuple Exponent )

Symmetry of gray values along a row.


symmetry calculates the symmetry along a line. For each pixel the gray values of both sides of the line are
compared: The absolute value of the differences of gray values with the same distance to the pixel is computed.
Each of these differences is divided by 255, raised to the power Exponent, and then summed up:

      sym := 255 − (255 / MaskSize) · Σ_{i=1}^{MaskSize} ( |g(i) − g(−i)| / 255 )^Exponent

Pixels with a high symmetry have large gray values.


Attention
Currently, only horizontal search lines are implemented.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Input image.
. ImageSymmetry (output_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Symmetry image.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Extension of search area.
Default Value : 40
Suggested values : MaskSize ∈ {3, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, 100, 120, 140, 180}
Typical range of values : 3 ≤ MaskSize ≤ 1000
Minimum Increment : 1
Recommended Increment : 2


. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of test direction.
Default Value : 0.0
Suggested values : Direction ∈ {0.0}
Typical range of values : 0.0 ≤ Direction ≤ 0.0
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Exponent for weighting.
Default Value : 0.5
Suggested values : Exponent ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0.05 ≤ Exponent ≤ 1.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (0 < Exponent) ∧ (Exponent ≤ 1)
Example (Syntax: HDevelop)

read_image(Image,’monkey’)
symmetry(Image,ImageSymmetry,70,0.0,0.5)
threshold(ImageSymmetry,SymmPoints,170,255)
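
The same sequence in C syntax (a sketch, not part of the original manual; variable declarations are omitted):

read_image(&Image,"monkey");
symmetry(Image,&ImageSymmetry,70,0.0,0.5);
threshold(ImageSymmetry,&SymmPoints,170.0,255.0);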

Result
If the parameter values are correct, the operator symmetry returns the value H_MSG_TRUE. The behavior
in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
symmetry is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold
Module
Foundation

topographic_sketch ( const Hobject Image, Hobject *Sketch )


T_topographic_sketch ( const Hobject Image, Hobject *Sketch )

Compute the topographic primal sketch of an image.


topographic_sketch computes the topographic primal sketch of the input image Image. This is done by
approximating the image locally by a bicubic polynomial (“facet model”). This polynomial is used to calculate the
first and second partial derivatives of the image, and thus to classify each image point into one of 11 classes.
These classes are coded in the output
image Sketch as numbers from 1 to 11. The classes are as follows:
Peak 1
Pit 2
Ridge 3
Ravine 4
Saddle 5
Flat 6
Hillside Slope 7
Hillside Convex 8
Hillside Concave 9
Hillside Saddle 10
Hillside Inflection 11
In order to obtain the separate classes as regions, a threshold operation has to be applied to the result image with
the appropriate thresholds.


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image for which the topographic primal sketch is to be computed.
. Sketch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Label image containing the 11 classes.
Example

/* To extract the Ridges from a Image */


read_image(&Image,"sinus");
topographic_sketch(Image,&Sketch);
threshold(Sketch,&Ridges,3.0,3.0);

Complexity
Let n be the number of pixels in the image. Then O(n) operations are performed.
Result
topographic_sketch returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
topographic_sketch is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
threshold
References
R. Haralick, L. Shapiro: “Computer and Robot Vision, Volume I”; Reading, Massachusetts, Addison-Wesley;
1992; Chapter 8.13.
Module
Foundation

3.12 Noise
T_add_noise_distribution ( const Hobject Image, Hobject *ImageNoise,
const Htuple Distribution )

Add noise to an image.


add_noise_distribution adds noise distributed according to Distribution to the Image. The result-
ing gray values are clipped to the range of the corresponding pixel type.
The Distribution is stored in a tuple of length 513. The individual values of this tuple define the frequency
of noise with a specific amplitude defined by the position within the tuple. The central value, i.e., the value at
the position 256 in the tuple defines the frequency of pixels that are not changed. The value at the position 255
defines the frequency of pixels for which the grayvalue is decreased by 1. The value at the position 254 defines the
respective frequency for a grayvalue decrease of 2, and so on. Analogously, the value at position 257 defines the
frequency of pixels for which the grayvalue is increased by 1.
The Distribution represents salt and pepper noise if at most one value at a position smaller than 256 is not
equal to zero and at most one value at a position larger than 256 is not equal to zero. In case of salt and pepper
noise, the noisified pixels are set to the minimum (pepper) and maximum (salt) values that can be represented by
ImageNoise if the amount of pepper is indicated by the value at position 0 and the amount of salt is indicated
by the value at position 512 in the tuple.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2
Input image.
. ImageNoise (output_object) . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2
Noisy image.
Number of elements : ImageNoise = Image
. Distribution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . distribution.values-array ; Htuple . double
Noise distribution.
Number of elements : 513


Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
set_d(PerSalt,30.0,0);
set_d(PerPepper,30.0,0);
T_sp_distribution(PerSalt,PerPepper,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);

Result
add_noise_distribution returns H_MSG_TRUE if all parameters are correct. If the input is empty the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
add_noise_distribution is reentrant and automatically parallelized (on tuple level, channel level, domain
level).
Possible Predecessors
gauss_distribution, sp_distribution, noise_distribution_mean
Alternatives
add_noise_white
See also
sp_distribution, gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation

add_noise_white ( const Hobject Image, Hobject *ImageNoise, double Amp )

T_add_noise_white ( const Hobject Image, Hobject *ImageNoise,
const Htuple Amp )

Add noise to an image.


add_noise_white adds noise to the image Image. The noise is white noise, uniformly distributed in the interval
[-Amp,Amp], and is generated using the C function “drand48” with a time-dependent initial seed. The resulting
gray values are clipped to the range of the corresponding pixel type.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageNoise (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Noisy image.
Number of elements : ImageNoise = Image
. Amp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum noise amplitude.
Default Value : 60.0
Suggested values : Amp ∈ {1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 1.0 ≤ Amp ≤ 1000.0
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Amp > 0
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
add_noise_white(Image,&ImageNoise,90.0);
disp_image(ImageNoise,WindowHandle);

Result
add_noise_white returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
add_noise_white is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_noise_distribution
See also
add_noise_distribution, noise_distribution_mean, gauss_distribution,
sp_distribution
Module
Foundation

T_gauss_distribution ( const Htuple Sigma, Htuple *Distribution )

Generate a Gaussian noise distribution.


gauss_distribution generates a Gaussian noise distribution. The parameter Sigma determines
the noise’s standard deviation. Usually, the result Distribution is used as input for the operator
add_noise_distribution.
Parameter
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Standard deviation of the Gaussian noise distribution.
Default Value : 2.0
Suggested values : Sigma ∈ {1.5, 2.0, 3.0, 5.0, 10.0}
Typical range of values : 0.0 ≤ Sigma ≤ 100.0
Minimum Increment : 0.1
Recommended Increment : 1.0
. Distribution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . distribution.values-array ; Htuple . double *
Resulting Gaussian noise distribution.
Number of elements : 513
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
create_tuple(&Sigma,1);
set_d(Sigma,30.0,0);
T_gauss_distribution(Sigma,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);

Parallelization Information
gauss_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
sp_distribution, noise_distribution_mean
See also
sp_distribution, add_noise_white, noise_distribution_mean
Module
Foundation

T_noise_distribution_mean ( const Hobject ConstRegion, const Hobject Image,
const Htuple FilterSize, Htuple *Distribution )

Determine the noise distribution of an image.


noise_distribution_mean calculates the noise distribution in a region of the image Image. The parameter
ConstRegion determines a region of the image with approximately constant gray values. Ideally, the changes
in gray values in this region should only be caused by noise. The noise distribution is then determined by smoothing
the image with the mean_image operator and using the gray value differences between the original and the smoothed
image within this region as an estimate of the noise distribution, which is returned in Distribution.
Attention
It is important to ensure that the region ConstRegion is not too close to a large gradient in the image, because
the gradient values would then be used for calculating the mean. This means that the distance of ConstRegion
from such gradients must be at least as large as the filter size FilterSize used for calculating the mean.
Parameter

. ConstRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region from which the noise distribution is to be estimated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Corresponding image.
. FilterSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of the mean filter.
Default Value : 21
Suggested values : FilterSize ∈ {5, 11, 15, 21, 31, 51, 101}
Typical range of values : 3 ≤ FilterSize ≤ 501 (lin)
Minimum Increment : 2
Recommended Increment : 2
. Distribution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . distribution.values-array ; Htuple . double *
Noise distribution of all input regions.
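Example

The following fragment is a minimal sketch (the rectangle coordinates are placeholders, and set_i is assumed to be the integer counterpart of the set_d calls used elsewhere in this manual):

read_image(&Image,"fabrik");
/* region with approximately constant gray values */
gen_rectangle1(&ConstRegion,100.0,100.0,200.0,200.0);
create_tuple(&FilterSize,1);
set_i(FilterSize,21,0);
T_noise_distribution_mean(ConstRegion,Image,FilterSize,&Dist);
/* add noise with the estimated characteristics to the image */
T_add_noise_distribution(Image,&ImageNoise,Dist);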
Parallelization Information
noise_distribution_mean is reentrant and processed without parallelization.
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, threshold,
erosion_circle, binomial_filter, gauss_image, smooth_image, sub_image
Possible Successors
add_noise_distribution, disp_distribution
See also
mean_image, gauss_distribution
Module
Foundation

T_sp_distribution ( const Htuple PercentSalt, const Htuple PercentPepper,
Htuple *Distribution )

Generate a salt-and-pepper noise distribution.


sp_distribution generates a noise distribution with the values 0 and 255. The parameters PercentSalt
and PercentPepper determine the percentage of white and black noise pixels, respectively. The sum of these
parameters must be smaller than 100. Usually, the result Distribution is used as input for the operator
add_noise_distribution.
Parameter

. PercentSalt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Percentage of salt (white noise pixels).
Default Value : 5.0
Suggested values : PercentSalt ∈ {1.0, 2.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0}
Typical range of values : 0.0 ≤ PercentSalt ≤ 100.0
Minimum Increment : 0.1
Recommended Increment : 1.0
Restriction : (0.0 ≤ PercentSalt) ∧ (PercentSalt ≤ 100.0)
. PercentPepper (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Percentage of pepper (black noise pixels).
Default Value : 5.0
Suggested values : PercentPepper ∈ {1.0, 2.0, 5.0, 7.0, 10.0, 15.0, 20.0, 30.0}
Typical range of values : 0.0 ≤ PercentPepper ≤ 100.0
Minimum Increment : 0.1
Recommended Increment : 1.0
Restriction : (0.0 ≤ PercentPepper) ∧ (PercentPepper ≤ 100.0)
. Distribution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . distribution.values-array ; Htuple . double *
Resulting noise distribution.
Number of elements : 513
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
create_tuple(&PerSalt,1);
set_d(PerSalt,30.0,0);
create_tuple(&PerPepper,1);
set_d(PerPepper,30.0,0);
T_sp_distribution(PerSalt,PerPepper,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);

Parallelization Information
sp_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
gauss_distribution, noise_distribution_mean
See also
gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation

3.13 Optical-Flow

optical_flow_mg ( const Hobject Image1, const Hobject Image2,
Hobject *VectorField, const char *Algorithm, double SmoothingSigma,
double IntegrationSigma, double FlowSmoothness,
double GradientConstancy, const char *MGParamName,
const char *MGParamValue )

T_optical_flow_mg ( const Hobject Image1, const Hobject Image2,
Hobject *VectorField, const Htuple Algorithm,
const Htuple SmoothingSigma, const Htuple IntegrationSigma,
const Htuple FlowSmoothness, const Htuple GradientConstancy,
const Htuple MGParamName, const Htuple MGParamValue )

Compute the optical flow between two images.

optical_flow_mg computes the optical flow between two images. The optical flow represents information
about the movement between two consecutive images of a monocular image sequence. The movement in the
images can be caused by objects that move in the world or by a movement of the camera (or both) between the
acquisition of the two images. The projection of these 3D movements into the 2D image plane is called the optical
flow.
The two consecutive images of the image sequence are passed in Image1 and Image2. The computed optical
flow is returned in VectorField. The vectors in the vector field VectorField represent the movement in the
image plane between Image1 and Image2. The point in Image2 that corresponds to the point (r, c) in Image1
is given by (r′, c′) = (r + u(r, c), c + v(r, c)), where u(r, c) and v(r, c) denote the value of the row and column
components of the vector field image VectorField at the point (r, c).
The parameter Algorithm allows the selection of three different algorithms for computing the optical flow. All
three algorithms are implemented by using multigrid solvers to ensure an efficient solution of the underlying partial
differential equations.
For Algorithm = ’fdrig’, the method proposed by Brox, Bruhn, Papenberg, and Weickert is used. This approach
is flow-driven, robust, isotropic, and uses a gradient constancy term.
For Algorithm = ’ddraw’, a robust variant of the method proposed by Nagel and Enkelmann is used. This
approach is data-driven, robust, anisotropic, and uses warping (in contrast to the original approach).
For Algorithm = ’clg’ the combined local-global method proposed by Bruhn, Weickert, Feddern, Kohlberger,
and Schnörr is used.
In all three algorithms, the input images can first be smoothed by a Gaussian filter with a standard deviation of
SmoothingSigma (see derivate_gauss).
All three approaches are variational approaches that compute the optical flow as the minimizer of a suitable energy
functional. In general, the energy functionals have the following form:

E(w) = E_D(w) + α E_S(w),

where w = (u, v, 1) is the optical flow vector field to be determined (with a time step of 1 in the third coordinate).
The image sequence is regarded as a continuous function f(x), where x = (r, c, t) and (r, c) denotes the position
and t the time. Furthermore, E_D(w) denotes the data term, while E_S(w) denotes the smoothness term, and α is a
regularization parameter that determines the smoothness of the solution. The regularization parameter α is passed
in FlowSmoothness. While the data term encodes assumptions about the constancy of the object features in
consecutive images, e.g., the constancy of the gray values or the constancy of the first spatial derivative of the
gray values, the smoothness term encodes assumptions about the (piecewise) smoothness of the solution, i.e., the
smoothness of the vector field to be determined.
The FDRIG algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (r + u, c + v, t + 1) = f (r, c, t). This can be written more compactly as
f (x + w) = f (x) using vector notation.
Constancy of the spatial gray value derivatives: It is assumed that corresponding pixels in consecutive images of an
image sequence additionally have the same spatial gray value derivatives, i.e., that ∇2 f (x + u, y + v, t + 1) =
∇2 f (x, y, t) also holds, where ∇2 f = (∂x f, ∂y f ). This can be written more compactly as ∇2 f (x+w) = ∇2 f (x).
In contrast to the gray value constancy, the gradient constancy has the advantage that it is invariant to additive global
illumination changes.
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic penalization
Ψ_D(s²) = s² is replaced by a linear penalization via Ψ_D(s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
Preservation of discontinuities in the flow field I: The solution is assumed to be piecewise smooth. While the actual
smoothness is achieved by penalizing the first derivatives of the flow, |∇₂u|² + |∇₂v|², the use of a statistically
robust (linear) penalty function Ψ_S(s²) = √(s² + ε²) with ε = 0.001 provides the desired preservation of edges in
the movement in the flow field to be determined. This type of smoothness term is called flow-driven and isotropic.

Taking into account all of the above assumptions, the energy functional of the FDRIG algorithm can be written as

E_FDRIG(w) = ∫ Ψ_D( |f(x + w) − f(x)|² + γ |∇₂f(x + w) − ∇₂f(x)|² ) dr dc
           + α ∫ Ψ_S( |∇₂u(x)|² + |∇₂v(x)|² ) dr dc

Inside Ψ_D, the first term expresses the gray value constancy and the second term the gradient constancy; the term
inside Ψ_S expresses the smoothness assumption.

Here, α is the regularization parameter passed in FlowSmoothness, while γ is the gradient constancy weight
passed in GradientConstancy. These two parameters, which constitute the model parameters of the FDRIG
algorithm, are described in more detail below.
The DDRAW algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic penalization
Ψ_D(s²) = s² is replaced by a linear penalization via Ψ_D(s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
Preservation of discontinuities in the flow field II: The solution is assumed to be piecewise smooth. In contrast to
the FDRIG algorithm, which allows discontinuities everywhere, the DDRAW algorithm only allows discontinuities
at the edges in the original image. Here, the local smoothness is controlled in such a way that the flow field is sharp
across image edges, while it is smooth along the image edges. This type of smoothness term is called data-driven
and anisotropic.
All assumptions of the DDRAW algorithm can be combined into the following energy functional:

E_DDRAW(w) = ∫ Ψ_D( |f(x + w) − f(x)|² ) dr dc
           + α ∫ ( ∇₂u(x)ᵀ P_NE(∇₂f(x)) ∇₂u(x) + ∇₂v(x)ᵀ P_NE(∇₂f(x)) ∇₂v(x) ) dr dc

The first integral expresses the gray value constancy, the second integral the smoothness assumption.

where P_NE(∇₂f(x)) is a normalized projection matrix orthogonal to ∇₂f(x), for which

P_NE(∇₂f(x)) = 1 / (|∇₂f(x)|² + 2ε_S²) · (  f_c²(x) + ε_S²    −f_r(x)f_c(x)  )
                                          (  −f_r(x)f_c(x)    f_r²(x) + ε_S²  )

holds. This matrix ensures that the smoothness of the flow field is only assumed along the image edges. In
contrast, no assumption is made with respect to the smoothness across the image edges, so that discontinuities
in the solution may occur across the image edges. In this respect, ε_S = 0.001 serves as a regularization
parameter that prevents the projection matrix P_NE(∇₂f(x)) from becoming singular. In contrast to
the FDRIG algorithm, there is only one model parameter for the DDRAW algorithm: the regularization parameter
α. As mentioned above, α is described in more detail below.
As for the two approaches described above, the CLG algorithm uses certain assumptions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Small displacements: In contrast to the two approaches above, it is assumed that only small displacements can
occur, i.e., displacements in the order of a few pixels. This facilitates a linearization of the constancy assumptions
in the model, and leads to the approximation f(x) + ∇₃f(x)ᵀ w(x) = f(x), i.e., ∇₃f(x)ᵀ w(x) = 0 should
hold. Here, ∇₃f(x) denotes the gradient in the spatial as well as the temporal domain.
Local constancy of the solution: Furthermore, it is assumed that the flow field to be computed is locally constant.
This facilitates the integration of the image data in the data term over the respective neighborhood of each pixel.
This, in turn, increases the robustness of the algorithm against noise. Mathematically, this can be achieved by
reformulating the quadratic data term as (∇₃f(x)ᵀ w(x))² = w(x)ᵀ ∇₃f(x)∇₃f(x)ᵀ w(x). By performing a
local Gaussian-weighted integration over a neighborhood specified by the integration scale ρ (passed in
IntegrationSigma), the following data term is obtained: w(x)ᵀ G_ρ ∗ (∇₃f(x)∇₃f(x)ᵀ) w(x). Here, G_ρ ∗ · denotes
the convolution of the 3 × 3 matrix ∇₃f(x)∇₃f(x)ᵀ with a Gaussian filter with a standard deviation of ρ (see derivate_gauss).
General smoothness of the flow field: Finally, the solution is assumed to be smooth everywhere in the image. This
particular type of smoothness term is called homogeneous.
All of the above assumptions can be combined into the following energy functional:
E_CLG(w) = ∫ w(x)ᵀ G_ρ ∗ (∇₃f(x) ∇₃f(x)ᵀ) w(x) dr dc + α ∫ ( |∇₂u(x)|² + |∇₂v(x)|² ) dr dc

The first integral expresses the gray value constancy, the second integral the smoothness assumption.

The corresponding model parameters are the regularization parameter α as well as the integration scale ρ (passed
in IntegrationSigma), which determines the size of the neighborhood over which to integrate the data term.
These two parameters are described in more detail below.
To compute the optical flow vector field for two consecutive images of an image sequence with the FDRIG,
DDRAW, or CLG algorithm, the solution that best fulfills the assumptions of the respective algorithm must be
determined. From a mathematical point of view, this means that a minimization of the above energy functionals
should be performed. For the FDRIG and DDRAW algorithms, so called coarse-to-fine warping strategies play an
important role in this minimization, because they enable the calculation of large displacements. Thus, they are a
suitable means to handle the omission of the linearization of the constancy assumptions numerically in these two
approaches.
To calculate large displacements, coarse-to-fine warping strategies use two concepts that are closely interlocked:
The successive refinement of the problem (coarse-to-fine) and the successive compensation of the current image
pair by already computed displacements (warping). Algorithmically, such coarse-to-fine warping strategies can be
described as follows:
1. First, both images of the current image pair are zoomed down to a very coarse resolution level.
2. Then, the optical flow vector field is computed on this coarse resolution.
3. The vector field is required on the next resolution level: It is applied there to the second image of the image
sequence, i.e., the problem on the finer resolution level is compensated by the already computed optical flow field.
This step is also known as warping.
4. The modified problem (difference problem) is now solved on the finer resolution level, i.e., the optical flow
vector field is computed there.
5. The steps 3-4 are repeated until the finest resolution level is reached.
6. The final result is computed by adding up the vector fields from all resolution levels.
This incremental computation of the optical flow vector field has the following advantage: While the coarse-to-fine
strategy ensures that the displacements on the finest resolution level are very small, the warping strategy ensures
that the displacements remain small for the incremental displacements (optical flow vector fields of the difference
problems). Since small displacements can be computed much more accurately than larger displacements, the
accuracy of the results typically increases significantly by using such a coarse-to-fine warping strategy. However,
instead of having to solve a single correspondence problem, an entire hierarchy of these problems must now be
solved. For the CLG algorithm, such a coarse-to-fine warping strategy is unnecessary since the model already
assumes small displacements.
The maximum number of resolution levels (warping levels), the resolution ratio between two consecutive resolution
levels, as well as the finest resolution level can be specified for the FDRIG as well as the DDRAW algorithm.
Details can be found below.
The minimization of functionals is mathematically very closely related to the minimization of functions: Like
the fact that the zero crossing of the first derivative is a necessary condition for the minimum of a function, the
fulfillment of the so called Euler-Lagrange equations is a necessary condition for the minimizing function of a
functional (the minimizing function corresponds to the desired optical flow vector field in this case). The Euler-
Lagrange equations are partial differential equations. By discretizing these Euler-Lagrange equations using finite
differences, large sparse nonlinear equation systems result for the FDRIG and DDRAW algorithms. Because
coarse-to-fine warping strategies are used, such an equation system must be solved for each resolution level, i.e.,
for each warping level. For the CLG algorithm, a single sparse linear equation system must be solved.
To ensure that the above nonlinear equation systems can be solved efficiently, the FDRIG and DDRAW algorithms
use bidirectional multigrid methods. From a numerical point of view, these strategies are among the fastest methods for
solving large linear and nonlinear equation systems. In contrast to conventional nonhierarchical iterative methods,
e.g., the different linear and nonlinear Gauss-Seidel variants, the multigrid methods have the advantage that correc-
tions to the solution can be determined efficiently on coarser resolution levels. This, in turn, leads to a significantly
faster convergence. The basic idea of multigrid methods additionally consists of hierarchically computing these
correction steps, i.e., the computation of the error on a coarser resolution level itself uses the same strategy and
efficiently computes its error (i.e., the error of the error) by correction steps on an even coarser resolution level.
Depending on whether one or two error correction steps are performed per cycle, a so called V or W cycle is
obtained. The corresponding strategies for stepping through the resolution hierarchy are as follows for two to four
resolution levels:

[Diagram: Bidirectional multigrid algorithm, showing V-cycles and W-cycles traversing the resolution hierarchy between the fine level (1) and the coarse level (4).]

Here, iterations on the original problem are denoted by large markers, while small markers denote iterations on
error correction problems.
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g.,
a variant of the Gauss-Seidel solver). This step is called the pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are
performed. This step is called the post-relaxation step.
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original (non)linear equation system, the solution is successively refined. To do so, interpolated solutions of
coarser variants of the equation system are used as the initialization of the next finer variant. On each resolution
level itself, the V or W cycles described above are used to efficiently solve the (non)linear equation system on that
resolution level. The corresponding multigrid methods are called full multigrid methods in the literature. The full
multigrid algorithm can be visualized as follows:

[Diagram: Full multigrid algorithm, showing the hierarchical coarse-to-fine initialization (levels 4→3, 3→2, 2→1) with interpolation steps i and two W correction cycles w1 and w2 on each resolution level.]

This example represents a full multigrid algorithm that uses two W correction cycles per resolution level of the
hierarchical initialization. The interpolation steps of the solution from one resolution level to the next are denoted
by i and the two W correction cycles by w1 and w2. Iterations on the original problem are denoted by large markers,
while small markers denote iterations on error correction problems.
In the multigrid implementation of the FDRIG, DDRAW, and CLG algorithm, the following parameters can be
set: whether a hierarchical initialization is performed; the number of coarse grid correction steps; the maximum
number of correction levels (resolution levels); the number of pre-relaxation steps; the number of post-relaxation
steps. These parameters are described in more detail below.
The basic solver for the FDRIG algorithm is a point-coupled fixed-point variant of the linear Gauss-Seidel algo-
rithm. The basic solver for the DDRAW algorithm is an alternating line-coupled fixed-point variant of the same
type. The number of fixed-point steps can be specified for both algorithms with a further parameter. The basic
solver for the CLG algorithm is a point-coupled linear Gauss-Seidel algorithm. The transfer of the data between
the different resolution levels is performed by area-based interpolation and area-based averaging, respectively.
After the algorithms have been described, the effects of the individual parameters are discussed in the following.
The input images, along with their domains (regions of interest) are passed in Image1 and Image2. The com-
putation of the optical flow vector field VectorField is performed on the smallest surrounding rectangle of the
intersection of the domains of Image1 and Image2. The domain of VectorField is the intersection of the
two domains. Hence, by specifying reduced domains for Image1 and Image2, the processing can be focused
and runtime can potentially be saved. It should be noted, however, that all methods compute a global solution of
the optical flow. In particular, it follows that the solution on a reduced domain need not (and cannot) be identical
to the solution on the full domain restricted to the reduced domain.
SmoothingSigma specifies the standard deviation of the Gaussian kernel that is used to smooth both input
images. The larger the value of SmoothingSigma, the larger the low-pass effect of the Gaussian kernel, i.e., the
smoother the preprocessed image. Usually, SmoothingSigma = 0.8 is a suitable choice. However, other values
in the interval [0, 2] are also possible. Larger standard deviations should only be considered if the input images are
very noisy. It should be noted that larger values of SmoothingSigma lead to slightly longer execution times.
IntegrationSigma specifies the standard deviation ρ of the Gaussian kernel Gρ that is used for the local
integration of the neighborhood information of the data term. This parameter is used only in the CLG algorithm and
has no effect on the other two algorithms. Usually, IntegrationSigma = 1.0 is a suitable choice. However,
other values in the interval [0, 3] are also possible. Larger standard deviations should only be considered if the
input images are very noisy. It should be noted that larger values of IntegrationSigma lead to slightly longer
execution times.
FlowSmoothness specifies the weight α of the smoothness term with respect to the data term. The larger the
value of FlowSmoothness, the smoother the computed optical flow field. It should be noted that choosing
FlowSmoothness too small can lead to unusable results, even though statistically robust penalty functions are
used, in particular if the warping strategy needs to predict too much information outside of the image. For byte
images with a gray value range of [0, 255], values of FlowSmoothness around 20 for the flow-driven FDRIG
algorithm and around 1000 for the data-driven DDRAW algorithm and the homogeneous CLG algorithm typically
yield good results.
GradientConstancy specifies the weight γ of the gradient constancy with respect to the gray value constancy.
This parameter is used only in the FDRIG algorithm. For the other two algorithms, it does not influence the results.
For byte images with a gray value range of [0, 255], a value of GradientConstancy = 5 is typically a good
choice, since then both constancy assumptions are used to the same extent. For large changes in illumination, how-
ever, significantly larger values of GradientConstancy may be necessary to achieve good results. It should be
noted that for large values of the gradient constancy weight the smoothness parameter FlowSmoothness must
also be chosen larger.
The parameters of the multigrid solver and for the coarse-to-fine warping strategy can be specified with the
generic parameters MGParamName and MGParamValue. Usually, it suffices to use one of the four default
parameter sets via MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. The default parameter sets are described below. If the parameters should be speci-
fied individually, MGParamName and MGParamValue must be set to tuples of the same length. The values
corresponding to the parameters specified in MGParamName must be specified at the corresponding position in
MGParamValue.
MGParamName = ’warp_zoom_factor’ can be used to specify the resolution ratio between two consecutive warp-
ing levels in the coarse-to-fine warping hierarchy. ’warp_zoom_factor’ must be selected from the open interval
(0, 1). For performance reasons, ’warp_zoom_factor’ is typically set to 0.5, i.e., the number of pixels is halved in
each direction for each coarser warping level. This leads to an increase of 33% in the calculations that need to be
performed with respect to an algorithm that does not use warping. Values for ’warp_zoom_factor’ close to 1 can
lead to slightly better results. However, they require a disproportionately larger computation time, e.g., 426% for
’warp_zoom_factor’ = 0.9.
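These figures can be made plausible with a short calculation (an inference, not stated explicitly in the manual): with a zoom factor z, each coarser warping level contains z² times as many pixels as the next finer one, so the total work relative to processing only the finest resolution behaves like the geometric series Σ_{k≥0} z^{2k} = 1 / (1 − z²), which gives 1/(1 − 0.25) ≈ 1.33 (+33%) for z = 0.5 and 1/(1 − 0.81) ≈ 5.26 (+426%) for z = 0.9.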
MGParamName = ’warp_levels’ can be used to restrict the warping hierarchy to a maximum number of levels.
For ’warp_levels’ = 0, the largest possible number of levels is used. If the image size does not allow to use
the specified number of levels (taking the resolution ratio ’warp_zoom_factor’ into account), the largest possible
number of levels is used. Usually, ’warp_levels’ should be set to 0.
MGParamName = ’warp_last_level’ can be used to specify the number of warping levels for which the flow
increment should no longer be computed. Usually, ’warp_last_level’ is set to 1 or 2, i.e., a flow increment is
computed for each warping level, or the finest warping level is skipped in the computation. Since in the latter case
the computation is performed on an image of half the resolution of the original image, the gained computation
time can be used to compute a more accurate solution, e.g., by using a full multigrid algorithm with additional
iterations. The more accurate solution is then interpolated to the full resolution.
The three parameters that specify the coarse-to-fine warping strategy are only used in the FDRIG and DDRAW
algorithms. They are ignored for the CLG algorithm.
MGParamName = ’mg_solver’ can be used to specify the general multigrid strategy for solving the (non)linear
equation system (in each warping level). For ’mg_solver’ = ’multigrid’, a normal multigrid algorithm (without
coarse-to-fine initialization) is used, while for ’mg_solver’ = ’full_multigrid’ a full multigrid algorithm (with
coarse-to-fine initialization) is used. Since a resolution reduction of 0.5 is used between two consecutive levels of
the coarse-to-fine initialization (in contrast to the resolution reduction in the warping strategy, this value is hard-
coded into the algorithm), the use of a full multigrid algorithm results in an increase of the computation time by
approximately 33% with respect to the normal multigrid algorithm. Setting ’mg_solver’ to ’full_multigrid’ typically
yields numerically more accurate results than ’mg_solver’ = ’multigrid’.
MGParamName = ’mg_cycle_type’ can be used to specify whether a V or W correction cycle is used per multigrid
level. Since a resolution reduction of 0.5 is used between two consecutive levels of the respective correction cycle,
using a W cycle instead of a V cycle increases the computation time by approximately 50%. Using ’mg_cycle_type’
= ’w’ typically yields numerically more accurate results than ’mg_cycle_type’ = ’v’.
MGParamName = ’mg_levels’ can be used to restrict the multigrid hierarchy for the coarse-to-fine initialization
as well as for the actual V or W correction cycles. For ’mg_levels’ = 0, the largest possible number of levels is
used. If the image size does not allow to use the specified number of levels, the largest possible number of levels
is used. Usually, ’mg_levels’ should be set to 0.
MGParamName = ’mg_cycles’ can be used to specify the total number of V or W correction cycles that are being
performed. If a full multigrid algorithm is used, ’mg_cycles’ refers to each level of the coarse-to-fine initialization.
Usually, one or two cycles are sufficient to yield a sufficiently accurate solution of the equation system. Typically,
the larger ’mg_cycles’, the more accurate the numerical results. This parameter enters almost linearly into the
computation time, i.e., doubling the number of cycles leads approximately to twice the computation time.
MGParamName = ’mg_pre_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver before the actual error correction is performed.
Usually, one or two pre-relaxation steps are sufficient. Typically, the larger ’mg_pre_relax’, the more accurate the
numerical results.
MGParamName = ’mg_post_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver after the actual error correction is performed.
Usually, one or two post-relaxation steps are sufficient. Typically, the larger ’mg_post_relax’, the more accurate
the numerical results.
Like when increasing the number of correction cycles, increasing the number of pre- and post-relaxation steps
increases the computation time asymptotically linearly. However, no additional restriction and prolongation oper-
ations (zooming down and up of the error correction images) are performed. Consequently, a moderate increase in
the number of relaxation steps only leads to a slight increase in the computation times.
MGParamName = ’mg_inner_iter’ can be used to specify the number of iterations to solve the linear equation
systems in each fixed-point iteration of the nonlinear basic solver. Usually, one iteration is sufficient to achieve a
sufficient convergence speed of the multigrid algorithm. The increase in computation time is slightly smaller than
for the increase in the relaxation steps. This parameter only influences the FDRIG and DDRAW algorithms since
for the CLG algorithm no nonlinear equation system needs to be solved.
As described above, usually it is sufficient to use one of the default parameter sets for the parameters described
above by using MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. If necessary, individual parameters can be modified after the default parameter set has
been chosen by specifying a subset of the above parameters and corresponding values after ’default_parameters’ in
MGParamName and MGParamValue (e.g., MGParamName = [’default_parameters’,’warp_zoom_factor’] and
MGParamValue = [’accurate’,0.6]).
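As a sketch of how such mixed parameter tuples can be built in HALCON/C (Image1 and Image2 are assumed to have been acquired already; set_s is assumed to be the string counterpart of the set_d calls used elsewhere in this manual):

Htuple Algorithm, SmoothingSigma, IntegrationSigma, FlowSmoothness;
Htuple GradientConstancy, MGParamName, MGParamValue;

create_tuple(&Algorithm,1);         set_s(Algorithm,"fdrig",0);
create_tuple(&SmoothingSigma,1);    set_d(SmoothingSigma,0.8,0);
create_tuple(&IntegrationSigma,1);  set_d(IntegrationSigma,1.0,0);
create_tuple(&FlowSmoothness,1);    set_d(FlowSmoothness,20.0,0);
create_tuple(&GradientConstancy,1); set_d(GradientConstancy,5.0,0);
/* select the "accurate" preset, but override the warping zoom factor */
create_tuple(&MGParamName,2);
set_s(MGParamName,"default_parameters",0);
set_s(MGParamName,"warp_zoom_factor",1);
create_tuple(&MGParamValue,2);
set_s(MGParamValue,"accurate",0);
set_d(MGParamValue,0.6,1);
T_optical_flow_mg(Image1,Image2,&VectorField,Algorithm,SmoothingSigma,
                  IntegrationSigma,FlowSmoothness,GradientConstancy,
                  MGParamName,MGParamValue);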
The default parameter sets use the following values for the above parameters:
’default_parameters’ = ’very_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2, ’mg_solver’
= ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1, ’mg_post_relax’ =
1, ’mg_inner_iter’ = 1.
It should be noted that for the CLG algorithm the two modes ’fast_accurate’ and ’fast’ are identical to the modes
’very_accurate’ and ’accurate’ since the CLG algorithm does not use a coarse-to-fine warping strategy.
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image 2.
. VectorField (output_object) . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : vector_field
Optical flow.
. Algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Algorithm for computing the optical flow.
Default Value : "fdrig"
List of values : Algorithm ∈ {"fdrig", "ddraw", "clg"}
. SmoothingSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Standard deviation for initial Gaussian smoothing.
Default Value : 0.8
Suggested values : SmoothingSigma ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}
Restriction : SmoothingSigma ≥ 0.0
. IntegrationSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Standard deviation of the integration filter.
Default Value : 1.0
Suggested values : IntegrationSigma ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6,
2.8, 3.0}
Restriction : IntegrationSigma ≥ 0.0
. FlowSmoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Weight of the smoothing term relative to the data term.
Default Value : 20
Suggested values : FlowSmoothness ∈ {10, 20, 30, 50, 70, 100, 200, 300, 500, 700, 1000, 1500, 2000}
Restriction : FlowSmoothness ≥ 0.0
. GradientConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Weight of the gradient constancy relative to the gray value constancy.
Default Value : 5
Suggested values : GradientConstancy ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 70, 100}
Restriction : GradientConstancy ≥ 0.0
. MGParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Parameter name(s) for the multigrid algorithm.
Default Value : "default_parameters"
List of values : MGParamName ∈ {"default_parameters", "mg_solver", "mg_cycle_type", "mg_levels",
"mg_cycles", "mg_pre_relax", "mg_post_relax", "mg_inner_iter", "warp_levels", "warp_zoom_factor",
"warp_last_level"}
. MGParamValue (input_control) . . . . . . . . . attribute.value(-array) ; (Htuple .) const char * / double / Hlong
Parameter value(s) for the multigrid algorithm.
Default Value : "accurate"
Suggested values : MGParamValue ∈ {"very_accurate", "accurate", "fast_accurate", "fast", "multigrid",
"full_multigrid", "v", "w", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,
0.8, 0.9}
Example (Syntax: HDevelop)

grab_image (Image1, FGHandle)
while (true)
    grab_image (Image2, FGHandle)
    optical_flow_mg (Image1, Image2, VectorField, ’fdrig’, 0.8, 1, 10,
                     5, ’default_parameters’, ’accurate’)
    threshold (VectorField, Region, 1, 10000)
    copy_obj (Image2, Image1, 1, 1)
endwhile

Result
If the parameter values are correct, the operator optical_flow_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
optical_flow_mg is reentrant and automatically parallelized (on tuple level).
Possible Successors
threshold, vector_field_length
See also
unwarp_image_vector_field
References
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert: High accuracy optic flow estimation based on a theory for
warping. In T. Pajdla and J. Matas, editors, Computer Vision - ECCV 2004, volume 3024 of Lecture Notes in
Computer Science, pages 25–36. Springer, Berlin, 2004.
A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnörr: Variational optical flow computation in real-
time. IEEE Transactions on Image Processing, 14(5):608-615, May 2005.
H.-H. Nagel and W. Enkelmann: An investigation of smoothness constraints for the estimation of displacement
vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5):565-
593, September 1986.
Ulrich Trottenberg, Cornelis Oosterlee, Anton Schüller: Multigrid. Academic Press, Inc., San Diego, 2000.
Module
Foundation

unwarp_image_vector_field ( const Hobject Image,
const Hobject VectorField, Hobject *ImageUnwarped )

T_unwarp_image_vector_field ( const Hobject Image,
const Hobject VectorField, Hobject *ImageUnwarped )

Unwarp an image using a vector field.


unwarp_image_vector_field unwarps the image Image using the vector field VectorField
and returns the unwarped image in ImageUnwarped. The vector field is typically determined with
optical_flow_mg. Hence, unwarp_image_vector_field can be used to unwarp the second input
image of optical_flow_mg to the first input image. It should be noted that because of the above semantics the
vector field image represents an inverse transformation from the destination image of the vector field to the source
image.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image
. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : vector_field
Input vector field
. ImageUnwarped (output_object) . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte / uint2 / real
Unwarped image.
Example (Syntax: HDevelop)

optical_flow_mg (Image1, Image2, VectorField, ’fdrig’, 0.8, 1, 20,
                 5, ’default_parameters’, ’accurate’)
unwarp_image_vector_field (Image2, VectorField, ImageUnwarped)

Result
If the parameter values are correct, the operator unwarp_image_vector_field returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
unwarp_image_vector_field is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
optical_flow_mg
Module
Foundation

vector_field_length ( const Hobject VectorField, Hobject *Length,
const char *Mode )

T_vector_field_length ( const Hobject VectorField, Hobject *Length,
const Htuple Mode )

Compute the length of the vectors of a vector field.


vector_field_length computes the length of the vectors of the vector field VectorField and returns them
in Length. The parameter Mode can be used to specify how the lengths are computed. For Mode = ’length’,
the Euclidean length of the vectors is computed. For Mode = ’squared_length’, the square of the length of the
vectors is computed. This avoids having to compute a square root internally, which is a costly operation on many
processors, and hence saves runtime on these processors.
Parameter
. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : vector_field
Input vector field
. Length (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : real
Length of the vectors of the vector field.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode for computing the length of the vectors.
Default Value : "length"
List of values : Mode ∈ {"length", "squared_length"}
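Example

A minimal sketch (the threshold bounds are placeholders; Image1 and Image2 are assumed to be two consecutive images of a sequence):

optical_flow_mg(Image1,Image2,&VectorField,"fdrig",0.8,1.0,20.0,5.0,
                "default_parameters","accurate");
vector_field_length(VectorField,&Length,"length");
/* regions that moved by at least one pixel */
threshold(Length,&Moving,1.0,10000.0);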
Result
If the parameter values are correct, the operator vector_field_length returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
vector_field_length is reentrant and automatically parallelized (on domain level, tuple level).

Possible Predecessors
optical_flow_mg
Possible Successors
threshold
Module
Foundation

3.14 Points
corner_response ( const Hobject Image, Hobject *ImageCorner,
Hlong Size, double Weight )

T_corner_response ( const Hobject Image, Hobject *ImageCorner,
const Htuple Size, const Htuple Weight )

Searching corners in images.


The operator corner_response extracts gray value corners in an image. The formula for the calculation of
the response is:

R(x, y) = A(x, y) · B(x, y) − C²(x, y) − Weight · (A(x, y) + B(x, y))²
A(x, y) = W(u, v) ∗ (∇_x I(x, y))²
B(x, y) = W(u, v) ∗ (∇_y I(x, y))²
C(x, y) = W(u, v) ∗ (∇_x I(x, y) ∇_y I(x, y))

where I is the input image and R the output image of the filter. The operator gauss_image is used for smoothing
(W ), the operator sobel_amp is used for calculating the derivative (∇).
The corner response function is invariant with regard to rotation. In order to achieve a suitable dependency of the
function R(x, y) on the local gradient, the parameter Weight must be set to 0.04. With this, only gray value
corners will return positive values for R(x, y), while straight edges will receive negative values. The output image
type is identical to the input image type. Therefore, the negative output values are set to 0 if byte images are
used as input images. If this is not desired, the input image should be converted into a real or int2 image with
convert_image_type.
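For instance, to preserve the negative response values, the input can be converted to a ’real’ image first (a sketch; the image is assumed to have been read already):

convert_image_type(Image,&ImageReal,"real");
corner_response(ImageReal,&CornerResponse,3,0.04);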
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / real
Input image.
. ImageCorner (output_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / real
Result of the filtering.
Number of elements : ImageCorner = Image
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Desired filter size of the gray value mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11}
. Weight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Weighting.
Default Value : 0.04
Typical range of values : 0.0 ≤ Weight ≤ 0.3
Minimum Increment : 0.001
Recommended Increment : 0.01
Example

read_image(&Fabrik,"fabrik");
corner_response(Fabrik,&CornerResponse,3,0.04);
local_max(CornerResponse,&LocalMax);
disp_image(Fabrik,WindowHandle);
set_color(WindowHandle,"red");
disp_region(LocalMax,WindowHandle);

Parallelization Information
corner_response is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
local_max, threshold
See also
gauss_image, sobel_amp, convert_image_type
References
C.G. Harris, M.J. Stephens: “A combined corner and edge detector”; Proc. of the 4th Alvey Vision Conference;
August 1988; pp. 147-152.
H. Breit: “Bestimmung der Kameraeigenbewegung und Gewinnung von Tiefendaten aus monokularen Bildfolgen”;
Diplomarbeit, Lehrstuhl für Nachrichtentechnik, TU München; September 30, 1990.
Module
Foundation

dots_image ( const Hobject Image, Hobject *DotImage, Hlong Diameter,
const char *FilterType, Hlong PixelShift )

T_dots_image ( const Hobject Image, Hobject *DotImage,
const Htuple Diameter, const Htuple FilterType,
const Htuple PixelShift )

Enhance circular dots in an image.


dots_image enhances circular dots of diameter Diameter in the input image Image. Hence, dots_image
is especially suited for the segmentation of dot prints, e.g., in OCR applications. The enhancement is performed
by using matched filters with filter masks that are tuned for a particular dot size. For example, for Diameter = 5
the filter mask is given by:

          (   0    0  −21  −21  −21    0    0 )
          (   0  −21   16   16   16  −21    0 )
          ( −21   16   16   16   16   16  −21 )
1/336  ·  ( −21   16   16   16   16   16  −21 )
          ( −21   16   16   16   16   16  −21 )
          (   0  −21   16   16   16  −21    0 )
          (   0    0  −21  −21  −21    0    0 )

The parameter FilterType selects whether dark, light, or all dots in the image should be enhanced. The
PixelShift can be used either to increase the contrast of the output image (PixelShift > 0) or to dampen
the values in extremely bright areas that would be cut off otherwise (PixelShift = −1).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. DotImage (output_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Output image.
. Diameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Diameter of the dots to be enhanced.
Default Value : 5
List of values : Diameter ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23}
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enhance dark, light, or all dots.
Default Value : "light"
List of values : FilterType ∈ {"dark", "light", "all"}

. PixelShift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift of the filter response.
Default Value : 0
List of values : PixelShift ∈ {-1, 0, 1, 2}
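Example

A minimal sketch (the image name and the threshold are placeholders):

read_image(&Image,"fabrik");
/* enhance bright dots of about 5 pixels diameter */
dots_image(Image,&DotImage,5,"light",0);
threshold(DotImage,&Dots,128.0,255.0);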
Parallelization Information
dots_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold
Alternatives
laplace, laplace_of_gauss, diff_of_gauss, derivate_gauss, convol_image
Module
Foundation

T_points_foerstner ( const Hobject Image, const Htuple SigmaGrad,
const Htuple SigmaInt, const Htuple SigmaPoints,
const Htuple ThreshInhom, const Htuple ThreshShape,
const Htuple Smoothing, const Htuple EliminateDoublets,
Htuple *RowJunctions, Htuple *ColJunctions, Htuple *CoRRJunctions,
Htuple *CoRCJunctions, Htuple *CoCCJunctions, Htuple *RowArea,
Htuple *ColArea, Htuple *CoRRArea, Htuple *CoRCArea,
Htuple *CoCCArea )

Detect points of interest using the Förstner operator.


points_foerstner extracts significant points from an image. Significant points are points that differ from
their neighborhood, i.e., points where the image function changes in two dimensions. These changes occur on the
one hand at the intersection of image edges (called junction points), and on the other hand at places where color or
brightness differs from the surrounding neighborhood (called area points).
The point extraction takes place in two steps: In the first step the point regions, i.e., the inhomogeneous, isotropic
regions, are extracted from the image. To do so, the smoothed matrix
M = S ∗ (  Σ_{c=1..n} I_{x,c}²          Σ_{c=1..n} I_{x,c} I_{y,c}  )
        (  Σ_{c=1..n} I_{x,c} I_{y,c}   Σ_{c=1..n} I_{y,c}²         )

is calculated, where I_{x,c} and I_{y,c} are the first derivatives of each image channel and S stands for a smoothing.
If Smoothing is ’gauss’, the derivatives are computed with Gaussian derivatives of size SigmaGrad and the
smoothing is performed by a Gaussian of size SigmaInt. If Smoothing is ’mean’, the derivatives are computed
with a 3 × 3 Sobel filter (and hence SigmaGrad is ignored) and the smoothing is performed by a SigmaInt ×
SigmaInt mean filter. Then

inhomogeneity = Trace M

is the degree of inhomogeneity in the image and

isotropy = 4 · Det M / (Trace M)²

is the degree of the isotropy of the texture in the image. Image points that have an inhomogeneity greater or equal to
ThreshInhom and at the same time an isotropy greater or equal to ThreshShape are subsequently examined
further.
In the second step, two optimization functions are calculated for the resulting points. Essentially, these optimiza-
tion functions average for each point the distances to the edge directions (for junction points) and the gradient
directions (for area points) within an observation window around the point. If Smoothing is ’gauss’, the aver-
aging is performed by a Gaussian of size SigmaPoints, if Smoothing is ’mean’, the averaging is performed
by a SigmaPoints × SigmaPoints mean filter. The local minima of the optimization functions determine
the extracted points. Their subpixel precise position is returned in (RowJunctions, ColJunctions) and
(RowArea, ColArea).
In addition to their position, for each extracted point the elements CoRRJunctions, CoRCJunctions, and
CoCCJunctions (and CoRRArea, CoRCArea, and CoCCArea, respectively) of the corresponding covariance
matrix are returned. This matrix facilitates conclusions about the precision of the calculated point position. To
obtain the actual values, it is necessary to estimate the amount of noise in the input image and to multiply all
components of the covariance matrix with the variance of the noise. (To estimate the amount of noise, apply
intensity to homogeneous image regions or plane_deviation to image regions where the gray values
form a plane. In both cases the amount of noise is returned in the parameter Deviation.) This is illustrated by the
example program
%HALCONROOT%\examples\hdevelop\Filter\Points\points_foerstner_ellipses.dev.
It lies in the nature of this operator that corners often result in two distinct points: One junction point, where the
edges of the corner actually meet, and one area point inside the corner. Such doublets will be eliminated automati-
cally, if EliminateDoublets is ’true’. To do so, each pair of one junction point and one area point is examined.
If the points lie within each others’ observation window of the optimization function, for both points the precision
of the point position is calculated and the point with the lower precision is rejected. If EliminateDoublets is
’false’, every detected point is returned.
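The following fragment is a minimal usage sketch and is not part of the original manual: it calls points_foerstner with its default parameter values and marks the detected junction points. The image name "fabrik", the already opened window WindowHandle, the simple C interface of disp_cross, and the tuple helpers create_tuple, set_d, set_s, get_d, length_tuple, and destroy_tuple are assumptions taken from the general HALCON/C interface; check the tuple-handling chapter for their exact signatures.

/* sketch only: the tuple helpers, disp_cross, and WindowHandle are assumed, see above */
Hobject Image;
Htuple  SigmaGrad,SigmaInt,SigmaPoints,ThreshInhom,ThreshShape;
Htuple  Smoothing,EliminateDoublets;
Htuple  RowJ,ColJ,CoRRJ,CoRCJ,CoCCJ,RowA,ColA,CoRRA,CoRCA,CoCCA;
Hlong   i,NumJunctions;

read_image(&Image,"fabrik");
create_tuple(&SigmaGrad,1);         set_d(SigmaGrad,1.0,0);
create_tuple(&SigmaInt,1);          set_d(SigmaInt,2.0,0);
create_tuple(&SigmaPoints,1);       set_d(SigmaPoints,3.0,0);
create_tuple(&ThreshInhom,1);       set_d(ThreshInhom,200.0,0);
create_tuple(&ThreshShape,1);       set_d(ThreshShape,0.3,0);
create_tuple(&Smoothing,1);         set_s(Smoothing,"gauss",0);
create_tuple(&EliminateDoublets,1); set_s(EliminateDoublets,"true",0);
T_points_foerstner(Image,SigmaGrad,SigmaInt,SigmaPoints,ThreshInhom,
                   ThreshShape,Smoothing,EliminateDoublets,
                   &RowJ,&ColJ,&CoRRJ,&CoRCJ,&CoCCJ,
                   &RowA,&ColA,&CoRRA,&CoRCA,&CoCCA);
/* mark every detected junction point with a small cross */
NumJunctions = length_tuple(RowJ);
for (i=0; i<NumJunctions; i++)
  disp_cross(WindowHandle,get_d(RowJ,i),get_d(ColJ,i),6.0,0.0);
/* the input tuples and the ten output tuples should afterwards be
   released with destroy_tuple */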
Attention
Note that only odd values for SigmaInt and SigmaPoints are allowed if Smoothing is ’mean’. Even values
are automatically replaced by the next larger odd value.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real


Input image.
. SigmaGrad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Amount of smoothing used for the calculation of the gradient. If Smoothing is ’mean’, SigmaGrad is
ignored.
Default Value : 1.0
Suggested values : SigmaGrad ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaGrad ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaGrad > 0.0
. SigmaInt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Amount of smoothing used for the integration of the gradients.
Default Value : 2.0
Suggested values : SigmaInt ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaInt ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaInt > 0.0
. SigmaPoints (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Amount of smoothing used in the optimization functions.
Default Value : 3.0
Suggested values : SigmaPoints ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaPoints ≤ 50.0
Recommended Increment : 0.1
Restriction : (SigmaPoints ≥ SigmaInt) ∧ (SigmaPoints > 0.6)
. ThreshInhom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Threshold for the segmentation of inhomogeneous image areas.
Default Value : 200
Suggested values : ThreshInhom ∈ {50, 100, 200, 500, 1000}
Restriction : ThreshInhom ≥ 0.0


. ThreshShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double


Threshold for the segmentation of point areas.
Default Value : 0.3
Suggested values : ThreshShape ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.7}
Typical range of values : 0.01 ≤ ThreshShape ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (0.0 ≤ ThreshShape) ∧ (ThreshShape ≤ 1.0)
. Smoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Used smoothing method.
Default Value : "gauss"
List of values : Smoothing ∈ {"gauss", "mean"}
. EliminateDoublets (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Elimination of multiply detected points.
Default Value : "false"
List of values : EliminateDoublets ∈ {"false", "true"}
. RowJunctions (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected junction points.
. ColJunctions (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected junction points.
. CoRRJunctions (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Row part of the covariance matrix of the detected junction points.
. CoRCJunctions (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Mixed part of the covariance matrix of the detected junction points.
. CoCCJunctions (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Column part of the covariance matrix of the detected junction points.
. RowArea (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected area points.
. ColArea (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected area points.
. CoRRArea (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Row part of the covariance matrix of the detected area points.
. CoRCArea (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Mixed part of the covariance matrix of the detected area points.
. CoCCArea (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Column part of the covariance matrix of the detected area points.
Result
points_foerstner returns H_MSG_TRUE if all parameters are correct and no error occurs during the execu-
tion. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
points_foerstner is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
points_harris
References
W. Förstner, E. Gülch: “A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Cir-
cular Features”. In Proceedings of the Intercommission Conference on Fast Processing of Photogrammetric Data,
Interlaken, pp. 281-305, 1987.
W. Förstner: “Statistische Verfahren für die automatische Bildanalyse und ihre Bewertung bei der Objekterkennung
und -vermessung”. Volume 370, Series C, Deutsche Geodätische Kommission, München, 1991.
W. Förstner: “A Framework for Low Level Feature Extraction”. European Conference on Computer Vision, LNCS
802, pp. 383-394, Springer Verlag, 1994.


C. Fuchs: “Extraktion polymorpher Bildstrukturen und ihre topologische und geometrische Gruppierung”. Volume
502, Series C, Deutsche Geodätische Kommission, München, 1998.
Module
Foundation

T_points_harris ( const Hobject Image, const Htuple SigmaGrad,


const Htuple SigmaSmooth, const Htuple Alpha, const Htuple Threshold,
Htuple *Row, Htuple *Col )

Detect points of interest using the Harris operator.


points_harris extracts points of interest from an image. The Harris operator is based upon the smoothed
matrix
M = G_\sigma \ast \begin{pmatrix} \sum_{c=1}^{n} I_{x,c}^2 & \sum_{c=1}^{n} I_{x,c} I_{y,c} \\ \sum_{c=1}^{n} I_{x,c} I_{y,c} & \sum_{c=1}^{n} I_{y,c}^2 \end{pmatrix} ,

where Gσ stands for a Gaussian smoothing of size SigmaSmooth and Ix,c and Iy,c are the first derivatives of
each image channel, computed with Gaussian derivatives of size SigmaGrad. The resulting points are the positive
local extrema of

\operatorname{Det}(M) - \mathtt{Alpha} \cdot (\operatorname{Trace}(M))^2 .

If necessary, they can be restricted to points with a minimum filter response of Threshold. The coordinates of
the points are calculated with subpixel accuracy.
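A minimal usage sketch, not contained in the original manual, following the same tuple pattern as the points_foerstner sketch above (create_tuple, set_d, length_tuple, get_d, disp_cross, and the open window WindowHandle are assumed from the HALCON/C interface; the threshold value below is only illustrative, its useful range depends on the image):

Hobject Image;
Htuple  SigmaGrad,SigmaSmooth,Alpha,Threshold,Row,Col;
Hlong   i,Num;

read_image(&Image,"fabrik");
create_tuple(&SigmaGrad,1);   set_d(SigmaGrad,0.7,0);
create_tuple(&SigmaSmooth,1); set_d(SigmaSmooth,2.0,0);
create_tuple(&Alpha,1);       set_d(Alpha,0.04,0);
create_tuple(&Threshold,1);   set_d(Threshold,1000.0,0);  /* suppress weak responses */
T_points_harris(Image,SigmaGrad,SigmaSmooth,Alpha,Threshold,&Row,&Col);
/* mark the detected points */
Num = length_tuple(Row);
for (i=0; i<Num; i++)
  disp_cross(WindowHandle,get_d(Row,i),get_d(Col,i),6.0,0.0);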
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Input image.
. SigmaGrad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Amount of smoothing used for the calculation of the gradient.
Default Value : 0.7
Suggested values : SigmaGrad ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaGrad ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaGrad > 0.0
. SigmaSmooth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Amount of smoothing used for the integration of the gradients.
Default Value : 2.0
Suggested values : SigmaSmooth ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaSmooth ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaSmooth > 0.0
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Weight of the squared trace of the squared gradient matrix.
Default Value : 0.04
Suggested values : Alpha ∈ {0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08}
Typical range of values : 0.001 ≤ Alpha ≤ 0.1
Minimum Increment : 0.001
Recommended Increment : 0.01
Restriction : Alpha > 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Minimum filter response for the points.
Default Value : 0.0
Restriction : Threshold ≥ 0.0


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *


Row coordinates of the detected points.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected points.
Result
points_harris returns H_MSG_TRUE if all parameters are correct and no error occurs during the execution.
If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
points_harris is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld
Alternatives
points_foerstner
References
C. Harris, M. Stephens: “A combined corner and edge detector”. Proceedings of the 4th Alvey Vision Conference,
pp. 147-151, 1988.
V. Gouet, N. Boujemaa: “Object-based queries using color points of interest”. IEEE Workshop on Content-Based
Access of Image and Video Libraries, CVPR/CBAIVL 2001, Hawaii, USA, 2001.
Module
Foundation

T_points_sojka ( const Hobject Image, const Htuple MaskSize,


const Htuple SigmaW, const Htuple SigmaD, const Htuple MinGrad,
const Htuple MinApparentness, const Htuple MinAngle,
const Htuple Subpix, Htuple *Row, Htuple *Col )

Find corners using the Sojka operator.


points_sojka defines a corner as the point of intersection of two straight, non-collinear gray value edges. To
decide whether a point of the input image Image is a corner or not, a neighbourhood of MaskSize × MaskSize
points is inspected. Only those image regions that are relevant for the decision are considered. Pixels with a
magnitude of the gradient of less than MinGrad are ignored from the outset.
Furthermore, only those of the remaining points are used that belong to one of the two gray value edges that form
the corner. For this, the so called Apparentness is calculated, which is an indicator of the probability that the
examined point actually is a corner point. Essentially, it is determined by the number of relevant points and their
gradients. A point can only be accepted as a corner when its Apparentness is at least MinApparentness.
Typical values of MinApparentness should range in the region of a few multiples of MinGrad.
To calculate the Apparentness, each mask point is weighted according to two criteria: First, the influence of a
mask point is weighted with a Gaussian of size SigmaW according to its distance from the possible corner point.
SigmaW should be roughly a quarter to half of MaskSize to obtain a reasonable proportion of the size of the
weighting function to the mask size. Secondly, the distance of the point from the (assumed) ideal gray value edge
is estimated and the point is weighted with a Gaussian of size SigmaD according to that distance. I.e., pixels that
(due to the discretization of the input image) lie farther from the ideal gray value edge have less influence on the
result than pixels with a smaller distance. Typically, it is not necessary to modify the default value 0.75 of SigmaD.
As a further criterion, the angle is calculated, by which the gray value edges change their direction in the corner
point. A point can only be accepted as a corner when this angle is greater than MinAngle.
The position of the detected corner points is returned in (Row, Col). Row and Col are calculated with subpixel
accuracy if Subpix is ’true’. They are calculated only with pixel accuracy if Subpix is ’false’.
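A minimal usage sketch, not contained in the original manual; it uses the default parameter values and the same assumed tuple helpers (create_tuple, set_i, set_d, set_s) as the points_foerstner sketch above:

Hobject Image;
Htuple  MaskSize,SigmaW,SigmaD,MinGrad,MinApparentness,MinAngle,Subpix,Row,Col;

read_image(&Image,"fabrik");
create_tuple(&MaskSize,1);        set_i(MaskSize,9,0);
create_tuple(&SigmaW,1);          set_d(SigmaW,2.5,0);    /* roughly MaskSize/4 */
create_tuple(&SigmaD,1);          set_d(SigmaD,0.75,0);
create_tuple(&MinGrad,1);         set_d(MinGrad,30.0,0);
create_tuple(&MinApparentness,1); set_d(MinApparentness,90.0,0);
create_tuple(&MinAngle,1);        set_d(MinAngle,0.5,0);
create_tuple(&Subpix,1);          set_s(Subpix,"true",0);
T_points_sojka(Image,MaskSize,SigmaW,SigmaD,MinGrad,MinApparentness,
               MinAngle,Subpix,&Row,&Col);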
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.


. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Required filter size.
Default Value : 9
List of values : MaskSize ∈ {5, 7, 9, 11, 13}
. SigmaW (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Sigma of the weight function according to the distance to the corner candidate.
Default Value : 2.5
Suggested values : SigmaW ∈ {2.0, 2.2, 2.4, 2.5, 2.6, 2.8, 3.0}
. SigmaD (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Sigma of the weight function for the distance to the ideal gray value edge.
Default Value : 0.75
Suggested values : SigmaD ∈ {0.6, 0.7, 0.75, 0.8, 0.9, 1.0}
Restriction : (0.6 ≤ SigmaD) ∧ (SigmaD ≤ 1.0)
. MinGrad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Threshold for the magnitude of the gradient.
Default Value : 30.0
Suggested values : MinGrad ∈ {20.0, 15.0, 30.0, 35.0, 40.0}
. MinApparentness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Threshold for Apparentness.
Default Value : 90.0
Suggested values : MinApparentness ∈ {30.0, 60.0, 90.0, 150.0, 300.0, 600.0, 1500.0}
. MinAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Threshold for the direction change in a corner point (radians).
Default Value : 0.5
Restriction : (0.0 ≤ MinAngle) ∧ (MinAngle ≤ pi)
. Subpix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Subpixel precise calculation of the corner points.
Default Value : "false"
List of values : Subpix ∈ {"false", "true"}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected corner points.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected corner points.
Result
points_sojka returns H_MSG_TRUE if all parameters are correct and no error occurs during the execution.
If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
points_sojka is reentrant and processed without parallelization.
References
Eduard Sojka: “A New and Efficient Algorithm for Detecting the Corners in Digital Images”. Pattern Recognition,
Luc Van Gool (Editor), LNCS 2449, pp. 125-132, Springer Verlag, 2002.
Module
Foundation


3.15 Smoothing

anisotrope_diff ( const Hobject Image, Hobject *ImageAniso,


Hlong Percent, Hlong Mode, Hlong Iteration, Hlong neighborhoodType )

T_anisotrope_diff ( const Hobject Image, Hobject *ImageAniso,


const Htuple Percent, const Htuple Mode, const Htuple Iteration,
const Htuple neighborhoodType )

Smooth an image by edge-preserving anisotropic diffusion.


anisotrope_diff is obsolete and is only provided for reasons of backward compatibility. New applica-
tions should use anisotropic_diffusion instead.
The operator anisotrope_diff carries out an iterative, anisotropic smoothing process on the mathematical
basis of physical diffusion. In analogy to the physical diffusion process describing the concentration balance
between molecules dependent on the density gradient, the diffusion filter carries out a smoothing of the gray
values dependent on the local gray value gradients.
For the iterative calculation of the gray value of a pixel, the gray value differences with respect to the four or eight
neighbors, respectively, are used. These gray value differences, however, are weighted differently, i.e., a non-linear
diffusion process is carried out.
The weighting is carried out using a diffusion function (two different functions are implemented, namely Mode = 1
and 2), which — depending on the gradient — ensures that within homogeneous regions the smoothing is stronger
than across region boundaries, so that the edges remain sharp. The diffusion function is adjusted to
the noise ratio of the image by a histogram analysis in the gradient image (according to Canny). A high value for
Percent increases the smoothing effect but blurs the edges a little more (values from 80 - 90 percent are typical).
The parameter Iteration determines the number of iterations (typically 3–7).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image to be smoothed.
. ImageAniso (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Smoothed image.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
For histogram analysis; higher values increase the smoothing effect, typically: 80-90.
Default Value : 80
Suggested values : Percent ∈ {65, 70, 75, 80, 85, 90}
Typical range of values : 50 ≤ Percent ≤ 100
Minimum Increment : 1
Recommended Increment : 5
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Selection of diffusion function.
Default Value : 1
List of values : Mode ∈ {1, 2}
. Iteration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations, typical values: 3-7.
Default Value : 5
Suggested values : Iteration ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 1 ≤ Iteration ≤ 30
Minimum Increment : 1
Recommended Increment : 1
. neighborhoodType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Required neighborhood type.
Default Value : 8
List of values : neighborhoodType ∈ {4, 8}
Example

read_image(&Image,"fabrik");


anisotrope_diff(Image,&Aniso,80,1,5,8);
sub_image(Image,Aniso,&Sub,2.0,127.0);
disp_image(Sub,WindowHandle);

Complexity
For each pixel: O(Iterations ∗ 18).
Result
If the parameter values are correct the operator anisotrope_diff returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
anisotrope_diff is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
sigma_image, rank_image
See also
smooth_image, binomial_filter, gauss_image, sigma_image, rank_image,
eliminate_min_max
References
P. Perona, J. Malik: “Scale-space and edge detection using anisotropic diffusion”; IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 12, No. 7, July 1990.
Module
Foundation

anisotropic_diffusion ( const Hobject Image, Hobject *ImageAniso,


const char *Mode, double Contrast, double Theta, Hlong Iterations )

T_anisotropic_diffusion ( const Hobject Image, Hobject *ImageAniso,


const Htuple Mode, const Htuple Contrast, const Htuple Theta,
const Htuple Iterations )

Perform an anisotropic diffusion of an image.


The operator anisotropic_diffusion performs an anisotropic diffusion on the input image Image ac-
cording to the model of Perona and Malik. This procedure is also referred to as nonlinear isotropic diffusion.
Considering the image as a gray value function u, the algorithm is a discretization of the partial differential equa-
tion

u_t = \operatorname{div}\bigl(g(|\nabla u|^2, c)\,\nabla u\bigr)

with the initial value u = u0 defined by Image at a time t0 . The equation is iterated Iterations times in
time steps of length Theta, so that the output image ImageAniso contains the gray value function at the time
t0 + Iterations · Theta.
The goal of the anisotropic diffusion is the elimination of image noise in constant image patches while preserv-
ing the edges in the image. The distinction between edges and constant patches is achieved using the threshold
Contrast on the size of the gray value differences between adjacent pixels. Contrast is referred to as the
contrast parameter and abbreviated with the letter c.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:

g_1(x, c) = \frac{1}{\sqrt{1 + \frac{2x}{c^2}}}


Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case however, there remains a slight diffusion even across edges of a height larger than c.

g_2(x, c) = \frac{1}{1 + \frac{x}{c^2}}

The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.

g_3(x, c) = 1 - \exp\!\left(-C\,\frac{c^8}{x^4}\right)

The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting Mode to ’weickert’,
is an improvement of g2 with respect to edge sharpening. The transition between smoothing and sharpening
happens very abruptly at x = c^2.
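A minimal usage sketch, not contained in the original manual; the image name "fabrik" and the open window WindowHandle are assumed as in the other examples of this chapter:

read_image(&Image,"fabrik");
/* edge-preserving smoothing with the Weickert diffusivity:
   contrast parameter c = 5.0, time step 1.0, 10 iterations */
anisotropic_diffusion(Image,&ImageAniso,"weickert",5.0,1.0,10);
disp_image(ImageAniso,WindowHandle);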
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. ImageAniso (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Diffusion coefficient as a function of the edge amplitude.
Default Value : "weickert"
List of values : Mode ∈ {"weickert", "perona-malik", "parabolic"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Contrast parameter.
Default Value : 5.0
Suggested values : Contrast ∈ {2.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction : Contrast > 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Time step.
Default Value : 1.0
Suggested values : Theta ∈ {0.5, 1.0, 3.0}
Restriction : Theta > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100, 500}
Restriction : Iterations ≥ 1
Parallelization Information
anisotropic_diffusion is reentrant and automatically parallelized (on tuple level).
References
J. Weickert: “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation


binomial_filter ( const Hobject Image, Hobject *ImageBinomial,


Hlong MaskWidth, Hlong MaskHeight )

T_binomial_filter ( const Hobject Image, Hobject *ImageBinomial,


const Htuple MaskWidth, const Htuple MaskHeight )

Smooth an image using the binomial filter.


binomial_filter smooths the image Image using a binomial filter with a mask size of MaskWidth ×
MaskHeight pixels and returns the smoothed image in ImageBinomial. The binomial filter is a very good
approximation of a Gaussian filter that can be implemented extremely efficiently using only integer operations.
Hence, binomial_filter is very fast. Let m = MaskHeight and n = MaskWidth. Then, the filter
coefficients b_{ij} are given by the binomial coefficients

\binom{l}{k} = \frac{l!}{k!\,(l-k)!}

as follows:
  
b_{ij} = \frac{1}{2^{\,n+m-2}} \binom{m-1}{i} \binom{n-1}{j}

Here, i = 0, . . . , m − 1 and j = 0, . . . , n − 1. The binomial filter performs approximately the same smoothing
as a Gaussian filter with σ = \sqrt{n-1}/2, where for simplicity it is assumed that m = n. In detail, the relationship
between n and σ is:
n σ
3 0.7523
5 1.0317
7 1.2505
9 1.4365
11 1.6010
13 1.7502
15 1.8876
17 2.0157
19 2.1361
21 2.2501
23 2.3586
25 2.4623
27 2.5618
29 2.6576
31 2.7500
33 2.8395
35 2.9262
37 3.0104
If different values are chosen for MaskHeight and MaskWidth, the above relation between n and σ still holds
and refers to the amount of smoothing in the row and column directions.
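A minimal usage sketch, not contained in the original manual, that picks the mask size from the table above; "fabrik" and WindowHandle are assumed as in the other examples of this chapter:

read_image(&Image,"fabrik");
/* 9 x 9 binomial mask, comparable to a Gaussian with sigma of about 1.44 */
binomial_filter(Image,&ImageBinomial,9,9);
disp_image(ImageBinomial,WindowHandle);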
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageBinomial (output_object) . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter width.
Default Value : 5
List of values : MaskWidth ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter height.
Default Value : 5
List of values : MaskHeight ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}


Result
If the parameter values are correct the operator binomial_filter returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
binomial_filter is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
gauss_image, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation

eliminate_min_max ( const Hobject Image, Hobject *FilteredImage,


Hlong MaskWidth, Hlong MaskHeight, double Gap, Hlong Mode )

T_eliminate_min_max ( const Hobject Image, Hobject *FilteredImage,


const Htuple MaskWidth, const Htuple MaskHeight, const Htuple Gap,
const Htuple Mode )

Smooth an image in the spatial domain to suppress noise.


eliminate_min_max smooths an image by replacing gray values with neighboring mean values, or local
minima/maxima. In order to prevent edges and lines from being smoothed, only those gray values that represent
local minima or maxima are replaced (if there is a line or edge within an image there will be at least one neighboring
pixel with a comparable gray value). Gap controls the strictness of the replacement: Only gray values that exceed
all other values within their local neighborhood by more than Gap, and values that fall below all their neighbors by
more than Gap, are replaced. E(x, y) denotes an N × M rectangular neighborhood of a pixel at position (x, y),
containing all pixels within the neighborhood except the pixel itself:

• if grayvalue(x, y) ≥ Gap + maximum(E(x, y)), then replacement;
• else if grayvalue(x, y) + Gap ≤ minimum(E(x, y)), then replacement;
• else adopt grayvalue(x, y) without change.

Mode specifies how the new value is computed in case of a replacement:

Mode = 1 → replace a local maximum with the next smaller local maximum and a local minimum with the
next larger local minimum
Mode = 2 → replace with the mean value of all pixels within the local neighborhood (including the replaced
pixel)
Mode = 3 → replace with the median value of all pixels within the local neighborhood (including the replaced
pixel); this is the default and is used if Mode has any value other than 1 or 2
MaskWidth and MaskHeight specify the width and height of the rectangular neighborhood. Border treatment:
Pixels outside the image border are not considered (e.g.: With a local 3 × 3-mask the neighborhood of a pixel at
(0, 0) reduces to the pixels at (1, 0), (0, 1) and (1, 1)).
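A minimal usage sketch, not contained in the original manual ("fabrik" and WindowHandle are assumed as in the other examples of this chapter):

read_image(&Image,"fabrik");
/* replace isolated outliers that differ from all 3 x 3 neighbors by
   more than 5 gray values with the neighborhood median (Mode = 3) */
eliminate_min_max(Image,&FilteredImage,3,3,5.0,3);
disp_image(FilteredImage,WindowHandle);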
Attention
eliminate_min_max can only be applied to byte images (HALCON image type BYTE_IMAGE). If MaskWidth
or MaskHeight is an even number, it is replaced by the next higher odd number (this allows the unique extraction
of the center of the filter mask). Width/height of the mask may not exceed the image width/height.


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Image to smooth.
. FilteredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskWidth ≤ width(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskHeight ≤ height(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. Gap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Gap between local maximum/minimum and all other gray values of the neighborhood.
Default Value : 1.0
Suggested values : Gap ∈ {1.0, 2.0, 5.0, 10.0}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Replacement rule (1 = next minimum/maximum, 2 = average, 3 =median).
Default Value : 3
List of values : Mode ∈ {1, 2, 3}
Result
eliminate_min_max returns H_MSG_TRUE if all parameters are correct. If the input is empty
eliminate_min_max returns with an error message.
Parallelization Information
eliminate_min_max is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
wiener_filter, wiener_filter_ni
See also
mean_sp, mean_image, median_image, median_weighted, binomial_filter,
gauss_image, smooth_image
References
M. Imme: “A Noise Peak Elimination Filter”; pp. 204-211 in CVGIP: Graphical Models and Image Processing,
Vol. 53, No. 2, March 1991.
M. Lückenhaus:“Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit; Tech-
nische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation

eliminate_sp ( const Hobject Image, Hobject *ImageFillSP,


Hlong MaskWidth, Hlong MaskHeight, Hlong MinThresh, Hlong MaxThresh )

T_eliminate_sp ( const Hobject Image, Hobject *ImageFillSP,


const Htuple MaskWidth, const Htuple MaskHeight,
const Htuple MinThresh, const Htuple MaxThresh )

Replace values outside of thresholds with average value.


The operator eliminate_sp replaces all gray values outside the indicated gray value interval (MinThresh
to MaxThresh) with the neighboring mean values. Only those neighboring pixels which also fall within the gray
value interval are used for averaging. If no such pixel is present in the vicinity the original gray value is used. The
gray values in the input image falling within the gray value interval are also adopted without change.
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageFillSP (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskWidth ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskHeight ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum gray value.
Default Value : 1
Suggested values : MinThresh ∈ {1, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
. MaxThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum gray value.
Default Value : 254
Suggested values : MaxThresh ∈ {5, 7, 9, 11, 15, 23, 31, 43, 61, 101, 200, 230, 250, 254}
Restriction : MinThresh ≤ MaxThresh
Example

read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
eliminate_sp(Image,&ImageMeansp,3,3,101,201);
disp_image(ImageMeansp,WindowHandle);

Parallelization Information
eliminate_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_sp, mean_image, median_image, eliminate_min_max
See also
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion, sigma_image,
eliminate_min_max
Module
Foundation


fill_interlace ( const Hobject ImageCamera, Hobject *ImageFilled,


const char *Mode )

T_fill_interlace ( const Hobject ImageCamera, Hobject *ImageFilled,


const Htuple Mode )

Interpolate 2 video half images.


The operator fill_interlace calculates an interpolated full image or removes odd/even lines from a video
image composed of two half images. If an image is recorded with a video camera it consists of two half images
recorded at different times but stored in one image in digital form. This can lead to several errors in further
processing. In order to reduce these errors the video image is modified. Every second line is re-calculated or
removed. The parameter Mode determines whether this must be done for even (’even’, ’rmeven’) or odd (’odd’,
’rmodd’) line numbers. If you choose ’even’ or ’odd’ the gray values in the generated lines are calculated as mean
values from the direct neighbors above or below the current pixel, respectively. If you choose ’rmeven’ or ’rmodd’
the even or odd lines are removed (in that case the resulting image is only half as high as the input image).
The value ’switch’ for Mode causes the odd and even lines to be exchanged.
Parameter

. ImageCamera (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte


Gray image consisting of two half images.
. ImageFilled (output_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Full image with interpolated/removed lines.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Instruction whether even or odd lines should be replaced/removed.
Default Value : "odd"
List of values : Mode ∈ {"odd", "even", "rmodd", "rmeven", "switch"}
Example

read_image(&Image,"video_bild");
fill_interlace(Image,&New,"odd");
sobel_amp(New,&Sobel,"sum_abs",3);

Complexity
For each pixel: O(2).
Result
If the parameter values are correct the operator fill_interlace returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
fill_interlace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
sobel_amp, edges_image, regiongrowing, diff_of_gauss, threshold, dyn_threshold,
auto_threshold, mean_image, binomial_filter, gauss_image,
anisotropic_diffusion, sigma_image, median_image
See also
median_image, binomial_filter, gauss_image, crop_part
Module
Foundation


gauss_image ( const Hobject Image, Hobject *ImageGauss, Hlong Size )


T_gauss_image ( const Hobject Image, Hobject *ImageGauss,
const Htuple Size )

Smooth using discrete gauss functions.


The operator gauss_image smoothes images using the discrete Gaussian. The smoothing effect increases with
increasing filter size. The following filter sizes (Size) are supported (the sigma value of the gauss function is
indicated in brackets):

3 (0.65)
5 (0.87)
7 (1.43)
9 (1.88)
11 (2.31)

For border treatment the gray values of the images are reflected at the image borders.
The operator binomial_filter can be used as an alternative to gauss_image. binomial_filter
is significantly faster than gauss_image. It should be noted that the mask size in binomial_filter does
not lead to the same amount of smoothing as the mask size in gauss_image. Corresponding mask sizes can be
determined based on the respective values of the Gaussian smoothing parameter sigma.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4


Image to be smoothed.
. ImageGauss (output_object) . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4
Filtered image.
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Required filter size.
Default Value : 5
List of values : Size ∈ {3, 5, 7, 9, 11}
Example

gauss_image(Input,&Gauss,7);
regiongrowing(Gauss,&Segments,7,7,5,100);

Complexity
For each pixel: O(Size ∗ 2).
Result
If the parameter values are correct the operator gauss_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
gauss_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
binomial_filter, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation


T_info_smooth ( const Htuple Filter, const Htuple Alpha, Htuple *Size,


Htuple *Coeffs )

Information on smoothing filter smooth_image.


The operator info_smooth returns an estimate of the width of the smoothing filters used in the routine
smooth_image. For this purpose the underlying continuous impulse response of Filter is sampled until
a filter coefficient is smaller than five percent of the maximum coefficient (at the origin). Alpha is the filter
parameter (see smooth_image). Currently four filters are supported (parameter Filter):

’deriche1’, ’deriche2’, ’shen’ and ’gauss’.

The ’gauss’ filter is conventionally implemented with filter masks (the other three are recursive filters). In the case
of the ’gauss’ filter the filter coefficients (of the one-dimensional impulse response f(n) with n ≥ 0) are returned in
Coeffs in addition to the filter size.
Parameter

. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Name of required filter.
Default Value : "deriche2"
List of values : Filter ∈ {"deriche1", "deriche2", "shen", "gauss"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Filter parameter: small values effect strong smoothing (reversed in case of ’gauss’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.01 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0.0
. Size (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of filter is approx. size x size pixels.
. Coeffs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
In case of the ’gauss’ filter: coefficients of the “positive” half of the 1D impulse response.
Example

info_smooth("deriche2",0.5,&Size,&Coeffs);
smooth_image(Input,&Smooth,"deriche2",7);

Result
If the parameter values are correct the operator info_smooth returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
info_smooth is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
smooth_image
See also
smooth_image
Module
Foundation


isotropic_diffusion ( const Hobject Image, Hobject *SmoothedImage,


double Sigma, Hlong Iterations )

T_isotropic_diffusion ( const Hobject Image, Hobject *SmoothedImage,


const Htuple Sigma, const Htuple Iterations )

Perform an isotropic diffusion of an image.


The operator isotropic_diffusion performs an isotropic diffusion of the input image Image. This cor-
responds to a convolution of the image matrix with a Gaussian mask of standard deviation Sigma. If the pa-
rameter Iterations is set to 0, such a convolution is performed explicitly. For input images with a full
ROI, isotropic_diffusion returns the same results as the operator derivate_gauss when choos-
ing ’none’ for its parameter Component. If the gray value matrix is larger than the ROI of Image the
two operators differ since derivate_gauss takes the gray values outside of the ROI into account, while
isotropic_diffusion mirrors the values at the boundary of the ROI in any case. The computational com-
plexity increases linearly with the value of Sigma.
If Iterations has a positive value the smoothing process is considered as an application of the heat equation

u_t = \Delta u

on the gray value function u with the initial value u = u0 defined by the gray values of Image at a time t0 . This
equation is then solved up to the time t0 + Sigma²/2, which is equivalent to the above convolution, using an iterative
procedure for parabolic partial differential equations. The computational complexity is proportional to the value
of Iterations and independent of Sigma in this case. For small values of Iterations, the computational
accuracy is very low, however. For this reason, choosing Iterations < 3 is not recommended.
For smaller values of Sigma, the convolution implementation is typically the faster method. Since the runtime of
the partial differential equation solver only depends on the number of iterations and not on the value of Sigma, it
is typically faster for large values of Sigma if few iterations are chosen (e.g., Iterations = 3 ).
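A minimal usage sketch, not contained in the original manual ("fabrik" and WindowHandle are assumed as in the other examples of this chapter):

read_image(&Image,"fabrik");
/* strong smoothing: for a large Sigma the iterative solver with a small
   number of iterations is typically faster than the explicit convolution */
isotropic_diffusion(Image,&SmoothedImage,10.0,3);
disp_image(SmoothedImage,WindowHandle);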
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. SmoothedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Standard deviation of the Gauss distribution.
Default Value : 1.0
Suggested values : Sigma ∈ {0.1, 0.5, 1.0, 3.0, 10.0, 20.0, 50.0}
Restriction : Sigma > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {0, 3, 10, 100, 500}
Restriction : Iterations ≥ 0
Parallelization Information
isotropic_diffusion is reentrant and automatically parallelized (on tuple level).
Module
Foundation

mean_image ( const Hobject Image, Hobject *ImageMean, Hlong MaskWidth,


Hlong MaskHeight )

T_mean_image ( const Hobject Image, Hobject *ImageMean,


const Htuple MaskWidth, const Htuple MaskHeight )

Smooth by averaging.


The operator mean_image carries out a linear smoothing with the gray values of all input images (Image). The
filter matrix consists of ones (all weighted equally) and has the size MaskHeight × MaskWidth. The result of the
convolution is divided by MaskHeight × MaskWidth. For border treatment the gray values are reflected at the
image edges.
For mean_image special optimizations are implemented that use SIMD technology. The actual application
of these special optimizations is controlled by the system parameter ’mmx_enable’ (see set_system). If
’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal calculations are performed
using SIMD technology. Note that SIMD technology performs best on large, compact input regions. Depending on
the input region and the capabilities of the hardware the execution of mean_image might even take significantly
more time with SIMD technology than without.
At any rate, it is advantageous for the performance of mean_image to choose the input region of Image such
that any border treatment is avoided.
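The following fragment is a sketch, not contained in the original manual, of how these two hints might be applied; it assumes that set_system accepts the value as a string and that the chosen rectangle lies well inside the "fabrik" image:

/* enable the SIMD optimizations (if the hardware supports them) */
set_system("mmx_enable","true");
read_image(&Image,"fabrik");
/* restrict the domain to a compact rectangle away from the image border
   so that no border treatment is necessary */
rectangle1_domain(Image,&ImageReduced,100,100,400,400);
mean_image(ImageReduced,&ImageMean,31,31);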
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter

. Image (input_object) . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real / vector_field
Image to be smoothed.
. ImageMean (output_object) . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real / vector_field
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 9
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskWidth ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 9
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskHeight ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
Example

read_image(&Image,"fabrik");
mean_image(Image,&Mean,3,3);
disp_image(Mean,WindowHandle);

Complexity
For each pixel: O(15).
Result
If the parameter values are correct the operator mean_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
mean_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
reduce_domain, rectangle1_domain


Possible Successors
dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, smooth_image
See also
anisotropic_diffusion, sigma_image, convol_image, gen_lowpass
Module
Foundation

mean_n ( const Hobject Image, Hobject *ImageMean )


T_mean_n ( const Hobject Image, Hobject *ImageMean )

Average gray values over several channels.


The operator mean_n generates the pixel-by-pixel mean value of all channels. For each coordinate point the sum
of all gray values at this coordinate is calculated. The result is the mean of the gray values (sum divided by the
number of channels). The output image has one channel.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int4 / uint2 / int4 / real


Multichannel gray image.
. ImageMean (output_object) . . . . . singlechannel-image(-array) ; Hobject * : byte / int4 / uint2 / int4 / real
Result of averaging.
Example

compose3(Channel1,Channel2,Channel3,&MultiChannel);
mean_n(MultiChannel,&Mean);

Parallelization Information
mean_n is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose2, compose3, compose4, add_channels
Possible Successors
disp_image
See also
count_channels
Module
Foundation

mean_sp ( const Hobject Image, Hobject *ImageSPMean, Hlong MaskWidth,


Hlong MaskHeight, Hlong MinThresh, Hlong MaxThresh )

T_mean_sp ( const Hobject Image, Hobject *ImageSPMean,


const Htuple MaskWidth, const Htuple MaskHeight,
const Htuple MinThresh, const Htuple MaxThresh )

Suppress salt and pepper noise.


The operator mean_sp carries out a smoothing by averaging the values. Only the gray values within the interval
from MinThresh to MaxThresh are averaged. Gray values which are too light or too dark are ignored during
summation. If no gray value lies within the given interval during summation, the original gray value is adopted.
If the thresholds are set at 0 or 255, respectively, the operator mean_sp behaves like mean_image except for
the running time.


The operator mean_sp is used to suppress extreme gray values (salt and pepper noise = white and black dots).
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Input image.
. ImageSPMean (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskWidth ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskHeight ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum gray value.
Default Value : 1
Suggested values : MinThresh ∈ {1, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
. MaxThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum gray value.
Default Value : 254
Suggested values : MaxThresh ∈ {5, 7, 9, 11, 15, 23, 31, 43, 61, 101, 200, 230, 250, 254}
Restriction : MinThresh ≤ MaxThresh
Example

read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
mean_sp(Image,&ImageMeansp,3,3,101,201);
disp_image(ImageMeansp,WindowHandle);

Parallelization Information
mean_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, median_image, median_separate, eliminate_min_max
See also
anisotropic_diffusion, sigma_image, binomial_filter, gauss_image, smooth_image,
eliminate_min_max
Module
Foundation


median_image ( const Hobject Image, Hobject *ImageMedian,


const char *MaskType, Hlong Radius, const char *Margin )

T_median_image ( const Hobject Image, Hobject *ImageMedian,


const Htuple MaskType, const Htuple Radius, const Htuple Margin )

Median filtering with different rank masks.


The operator median_image carries out a non-linear smoothing of the gray values of all input images (Image).
The shape of the filter mask is specified by MaskType and its size by Radius. Several border
treatments can be chosen for filtering (Margin):

gray value: Pixels outside of the image edges are assumed to be constant (with the indicated gray value).
’continued’: Continuation of edge pixels.
’cyclic’: Cyclic continuation of image edges.
’mirrored’: Reflection of pixels at the image edges.

The indicated mask is put over the image to be filtered in such a way that the center
of the mask touches all pixels of the objects once. For each of these pixels all neighboring pixels covered by the
mask are sorted in an ascending sequence according to their gray values. Thus, each of these sorted gray value
sequences contains exactly as many gray values as the mask has pixels. From these sequences the median is
selected and entered as resulting gray value at the corresponding output image.
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. ImageMedian (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Median filtered image.
. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of median mask.
Default Value : "circle"
List of values : MaskType ∈ {"circle", "rectangle"}
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Radius of median mask.
Default Value : 1
Suggested values : Radius ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 15, 19, 25, 31, 39, 47, 59}
Typical range of values : 1 ≤ Radius ≤ 101
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
median_image(Image,&Median,"circle",3,"continued");
disp_image(Median,WindowHandle);

Complexity
For each pixel: O(√F ∗ 5) with F = area of MaskType.
Result
If the parameter values are correct the operator median_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.


Parallelization Information
median_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
rank_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 318-319
Module
Foundation

median_separate ( const Hobject Image, Hobject *ImageSMedian,


Hlong MaskWidth, Hlong MaskHeight, const char *Margin )

T_median_separate ( const Hobject Image, Hobject *ImageSMedian,


const Htuple MaskWidth, const Htuple MaskHeight, const Htuple Margin )

Separated median filtering with rectangle masks.


The operator median_separate carries out a variation of the median filtering: First, two auxiliary images are
created. The first one results from a median filtering with a horizontal mask of height one pixel and width
MaskWidth, followed by a filtering with a vertical mask of width one pixel and height MaskHeight. The second
auxiliary image is created by filtering with the same masks, but with the sequence of the operations reversed: first
the vertical, then the horizontal mask. The output image results from averaging the two auxiliary images pixel by pixel.
The operator median_separate is clearly faster than the normal operator median_image because both
masks are only one pixel wide, which allows a very efficient processing. The runtime is practically independent of
the size of the mask. Therefore, the operator median_separate can be used well after texture filters, where
large masks are needed.
The filter can also be applied several times in a row in order to enhance the smoothing (see the sketch below).
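For example, the following sketch (variable declarations and window handling omitted as in the other examples; the mask size is chosen only for illustration) applies median_separate twice in a row to obtain a stronger smoothing:

read_image(&Image,"fabrik");
/* first pass with a large separated mask */
median_separate(Image,&Smooth1,31,31,"mirrored");
/* second pass on the already smoothed image enhances the smoothing */
median_separate(Smooth1,&Smooth2,31,31,"mirrored");
disp_image(Smooth2,WindowHandle);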
Parameter

. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real


Image to be filtered.
. ImageSMedian (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4
/ real
Median filtered image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of rank mask.
Default Value : 25
Suggested values : MaskWidth ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 27, 43, 51, 67, 91, 121, 151}
Typical range of values : 1 ≤ MaskWidth ≤ 401
Minimum Increment : 2
Recommended Increment : 2
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of rank mask.
Default Value : 25
Suggested values : MaskHeight ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 27, 43, 51, 67, 91, 121, 151}
Typical range of values : 1 ≤ MaskHeight ≤ 401
Minimum Increment : 2
Recommended Increment : 2


. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double


Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
median_separate(Image,&MedianSeparate,5,5,"mirrored");
disp_image(MedianSeparate,WindowHandle);

Complexity
For each pixel: O(40).
Parallelization Information
median_separate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
texture_laws, sobel_amp, deviation_image
Possible Successors
learn_ndim_norm, learn_ndim_box, median_separate, regiongrowing, auto_threshold
Alternatives
median_image
See also
rank_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 319
Module
Foundation

median_weighted ( const Hobject Image, Hobject *ImageWMedian,


const char *MaskType, Hlong MaskSize )

T_median_weighted ( const Hobject Image, Hobject *ImageWMedian,


const Htuple MaskType, const Htuple MaskSize )

Weighted median filtering with different rank masks.


The operator median_weighted calculates the median of the gray values within a local neighborhood. In
contrast to median_image, which uses each gray value within the neighborhood exactly once, the operator
median_weighted weights the gray values according to their position: each gray value is entered into the
array to be sorted as many times as its weight indicates. The following masks are available:

’gauss’ (MaskSize = 3)
1 2 1
2 4 2
1 2 1
’inner’ (MaskSize = 3)
1 1 1
1 3 1
1 1 1

In contrast to median_image, the operator median_weighted preserves gray value corners.
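The following sketch (variable declarations omitted as in the other examples) contrasts the two operators on the same image; the weighted median preserves gray value corners, whereas the standard median rounds them off:

read_image(&Image,"fabrik");
/* standard median: corners of small bright/dark structures get rounded */
median_image(Image,&Median,"circle",1,"mirrored");
/* weighted median with the ’inner’ mask: gray value corners are preserved */
median_weighted(Image,&MedianWeighted,"inner",3);
disp_image(MedianWeighted,WindowHandle);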


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2
Image to be filtered.
. ImageWMedian (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2
Median filtered image.
. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of median mask.
Default Value : "inner"
List of values : MaskType ∈ {"inner", "gauss"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Mask size.
Default Value : 3
List of values : MaskSize ∈ {3}
Example

read_image(&Image,"fabrik");
median_weighted(Image,&MedianWeighted,"gauss",3);
disp_image(MedianWeighted,WindowHandle);

Complexity
For each pixel: O(F ∗ log F ) with F = area of MaskType.
Parallelization Information
median_weighted is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
median_image, trimmed_mean, sigma_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 319
Module
Foundation

midrange_image ( const Hobject Image, const Hobject Mask,


Hobject *ImageMidrange, const char *Margin )

T_midrange_image ( const Hobject Image, const Hobject Mask,


Hobject *ImageMidrange, const Htuple Margin )

Calculate the average of maximum and minimum inside any mask.


The operator midrange_image forms the average of maximum and minimum inside the indicated mask in the
whole image. Several border treatments (Margin) can be chosen for filtering:

gray value   Pixels outside of the image edges are assumed to be constant
             (with the indicated gray value).
’continued’  Continuation of edge pixels.
’cyclic’     Cyclic continuation of image edges.
’mirrored’   Reflection of pixels at the image edges.

The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once.


Parameter

. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real


Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject : byte
Region serving as filter mask.
. ImageMidrange (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 /
int4 / real
Filtered output image.
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
midrange_image(Image,Region,&Midrange,"mirrored");
disp_image(Midrange,WindowHandle);

Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct the operator midrange_image returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
midrange_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect,
gray_range_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 319
Module
Foundation

rank_image ( const Hobject Image, const Hobject Mask,


Hobject *ImageRank, Hlong Rank, const char *Margin )

T_rank_image ( const Hobject Image, const Hobject Mask,


Hobject *ImageRank, const Htuple Rank, const Htuple Margin )

Smooth an image with an arbitrary rank mask.


The operator rank_image carries out a non-linear smoothing of the gray values of all input images (Image).
The filter mask (Mask) is transmitted as a region. In contrast to many other filters you can choose an arbitrary
shape, e.g., by using operators like gen_circle or draw_region. The position of the mask region has no
influence on the result; the center of gravity of the region is used as the reference point of the filter mask.


The specified mask is moved over the image to be filtered in such a way that the reference point of the mask touches
all pixels once. At each position a histogram is calculated from the gray values of all pixels covered by the mask.
By specifying Rank = 1 the lowest (= darkest) gray value appearing in the histogram is selected and entered as
the resulting gray value in the output image ImageRank; if Rank corresponds to the number of pixels of the filter
mask, i.e., its area, the brightest gray value is selected. This behavior is identical to the erosion/dilation operators in
gray morphology (gray_erosion, gray_dilation). If you use a rank that is equal to half of the number of pixels of
the filter mask, you get the same behavior as for the median filter (median_image).
You can use rank_image to eliminate noise, to eliminate structures with a given orientation (use
gen_rectangle2 to create the mask region), or as an advanced gray morphologic operator that is more robust
against noise. In the latter case you will not use 1 or the mask area as rank value, but a slightly higher or lower
value, respectively (see the sketch below).
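A possible sketch of this last use (all values are assumptions for illustration only): a narrow, rotated rectangle serves as mask to suppress thin structures of a given orientation. The rank value 46 assumes a mask area of roughly 93 pixels; in a real application the area of the mask region should be determined first (e.g., with area_center).

read_image(&Image,"fabrik");
/* narrow mask rotated by about 45 degrees; only the shape matters, */
/* the position of the region has no influence on the result        */
gen_rectangle2(&Mask,100.0,100.0,0.785,15.0,1.0);
/* a rank of about half the (assumed) mask area acts like a median  */
/* with this oriented mask                                          */
rank_image(Image,Mask,&ImageRank,46,"mirrored");
disp_image(ImageRank,WindowHandle);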
Several border treatments can be chosen for filtering (Margin):

gray value   Pixels outside of the image edges are assumed to be constant
             (with the indicated gray value).
’continued’  Continuation of edge pixels.
’cyclic’     Cyclic continuation of image edges.
’mirrored’   Reflection of pixels at the image edges.

Parameter
. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject : byte
Region serving as filter mask.
. ImageRank (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Filtered image.
. Rank (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rank of the output gray value in the sorted sequence of input gray values inside the filter mask. Typical value
(median): area(mask) / 2.
Default Value : 5
Suggested values : Rank ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Rank ≤ 512
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
rank_image(Image,Region,&ImageRank,5,"mirrored");
disp_image(ImageRank,WindowHandle);

Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct the operator rank_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
rank_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1


Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 318-320
Module
Foundation

sigma_image ( const Hobject Image, Hobject *ImageSigma,


Hlong MaskHeight, Hlong MaskWidth, Hlong Sigma )

T_sigma_image ( const Hobject Image, Hobject *ImageSigma,


const Htuple MaskHeight, const Htuple MaskWidth, const Htuple Sigma )

Non-linear smoothing with the sigma filter.


The operator sigma_image carries out a non-linear smoothing of the gray values of all input images (Image).
For each pixel all pixels within a rectangular window (MaskHeight × MaskWidth) are examined. All pixels of
the window whose gray values differ from the gray value of the current pixel by less than Sigma are averaged to
obtain the new gray value. If all differences are larger than Sigma, the gray value is adopted unchanged.
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / cyclic / int1 / int2 / uint2 / int4
/ real
Image to be smoothed.
. ImageSigma (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / cyclic / int1 / int2 /
uint2 / int4 / real
Smoothed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the mask (number of lines).
Default Value : 5
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 101
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the mask (number of columns).
Default Value : 5
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 101
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Max. deviation from the average.
Default Value : 3
Suggested values : Sigma ∈ {3, 5, 7, 9, 11, 20, 30, 50}
Typical range of values : 0 ≤ Sigma ≤ 255
Minimum Increment : 1
Recommended Increment : 2


Example

read_image(&Image,"fabrik");
sigma_image(Image,&ImageSigma,5,5,3);
disp_image(ImageSigma,WindowHandle);

Complexity
For each pixel: O(MaskHeight× MaskWidth).
Result
If the parameter values are correct the operator sigma_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
sigma_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
anisotropic_diffusion, rank_image
See also
smooth_image, binomial_filter, gauss_image, mean_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 325
Module
Foundation

smooth_image ( const Hobject Image, Hobject *ImageSmooth,


const char *Filter, double Alpha )

T_smooth_image ( const Hobject Image, Hobject *ImageSmooth,


const Htuple Filter, const Htuple Alpha )

Smooth an image using recursive filters.


smooth_image smooths gray value images using recursive filters originally developed by Deriche and Shen, or
using the non-recursive Gaussian filter. The following filters can be chosen via the parameter Filter:
’deriche1’, ’deriche2’, ’shen’ and ’gauss’.
The “filter width” (i.e., the range of the filter and thereby the result of the filter) can be of any size. For the
Deriche and Shen filters it decreases with increasing filter parameter Alpha, whereas for the Gaussian filter it
increases (here Alpha corresponds to the standard deviation of the Gaussian function). An approximation of the
appropriate filter width Alpha can be obtained with the operator info_smooth.
Non-recursive filters like the Gaussian filter are often implemented using filter masks. In this case the runtime
of the operator increases with increasing size of the filter mask. The runtime of the recursive filters remains
constant; only the border treatment becomes a little more time consuming. The Gaussian filter is slow compared
to the recursive ones but, in contrast to them, isotropic (the filter ’deriche2’ is only weakly direction
sensitive). A comparable smoothing result is achieved by choosing the following values for the parameter:

Alpha(’deriche2’) = Alpha(’deriche1’) / 2
Alpha(’shen’)     = Alpha(’deriche1’) / 2
Alpha(’gauss’)    = 1.77 / Alpha(’deriche1’)
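The following sketch (parameter values chosen only for illustration) applies the four filters with roughly comparable smoothing strength according to the relations above, starting from Alpha(’deriche1’) = 2.0:

read_image(&Image,"fabrik");
smooth_image(Image,&SmoothD1,"deriche1",2.0);
smooth_image(Image,&SmoothD2,"deriche2",1.0);  /* Alpha(’deriche1’) / 2    */
smooth_image(Image,&SmoothSh,"shen",1.0);      /* Alpha(’deriche1’) / 2    */
smooth_image(Image,&SmoothGa,"gauss",0.885);   /* 1.77 / Alpha(’deriche1’) */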


Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Image to be smoothed.
. ImageSmooth (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Filter.
Default Value : "deriche2"
List of values : Filter ∈ {"deriche1", "deriche2", "shen", "gauss"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Filter parameter: small values cause strong smoothing (the other way round when using ’gauss’).
Default Value : 0.5
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.01 ≤ Alpha ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Alpha > 0
Example

info_smooth("deriche2",0.5,Size,Coeffs);
smooth_image(Input,&Smooth,"deriche2",7);

Result
If the parameter values are correct the operator smooth_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
smooth_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, mean_image, derivate_gauss, isotropic_diffusion
See also
info_smooth, median_image, sigma_image, anisotropic_diffusion
References
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-12, no. 1; pages 78-87; 1990.
Module
Foundation

trimmed_mean ( const Hobject Image, const Hobject Mask,


Hobject *ImageTMean, Hlong Number, const char *Margin )

T_trimmed_mean ( const Hobject Image, const Hobject Mask,


Hobject *ImageTMean, const Htuple Number, const Htuple Margin )

Smooth an image with an arbitrary rank mask.


The operator trimmed_mean carries out a non-linear smoothing of the gray values of all input images (Image).
The filter mask (Mask) is passed in the form of a region. The average of Number gray values located near the
median is calculated. Several border treatments can be chosen for filtering (Margin):


gray value   Pixels outside of the image edges are assumed to be constant
             (with the indicated gray value).
’continued’  Continuation of edge pixels.
’cyclic’     Cyclic continuation of image edges.
’mirrored’   Reflection of pixels at the image edges.

The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once. For each of these pixels all neighboring pixels covered by the mask are sorted
in an ascending sequence according to their gray values. Thus, each of these sorted gray value sequences contains
exactly as many gray values as the mask has pixels. If F is the area of the mask the average of these sequences is
calculated as follows: The first (F - Number)/2 gray values are ignored. Then the following Number gray values
are summed up and divided by Number. Again the remaining (F - Number)/2 gray values are ignored.
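As an illustration of the typical choice Number ≈ area(Mask) / 2 (mask and values are assumptions for this sketch only): a 5 × 5 rectangular mask contains 25 pixels, so Number = 13 averages roughly the middle half of the sorted gray values.

read_image(&Image,"fabrik");
/* 5x5 rectangular mask = 25 pixels; the 6 darkest and 6 brightest */
/* values are ignored, the remaining 13 values are averaged        */
gen_rectangle1(&Mask,100.0,100.0,104.0,104.0);
trimmed_mean(Image,Mask,&TrimmedMean,13,"mirrored");
disp_image(TrimmedMean,WindowHandle);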
Parameter
. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Image whose region serves as filter mask.
. ImageTMean (output_object) . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Filtered output image.
. Number (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of averaged pixels. Typical value: area(Mask) / 2.
Default Value : 5
Suggested values : Number ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Number ≤ 401
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
trimmed_mean(Image,Region,&TrimmedMean,5,"mirrored");
disp_image(TrimmedMean,WindowHandle);

Result
If the parameter values are correct the operator trimmed_mean returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
trimmed_mean is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image, median_weighted, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pages 320
Module
Foundation


3.16 Texture
deviation_image ( const Hobject Image, Hobject *ImageDeviation,
Hlong Width, Hlong Height )

T_deviation_image ( const Hobject Image, Hobject *ImageDeviation,


const Htuple Width, const Htuple Height )

Calculate the standard deviation of gray values within rectangular windows.


deviation_image calculates the standard deviation of gray values in the image Image within a rectangular
mask of size (Height, Width). The resulting image is returned in ImageDeviation. To better use the range
of gray values available in the output image, the result is multiplied by 2. If the parameters Height and Width
are even, they are changed to the next larger odd value. At the image borders the gray values are mirrored.
Parameter

. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int4 / real / int2 / uint2


Image for which the standard deviation is to be calculated.
. ImageDeviation (output_object) . . . . . . . . . . . . image(-array) ; Hobject * : byte / int4 / real / int2 / uint2
Image containing the standard deviation.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the mask in which the standard deviation is calculated.
Default Value : 11
List of values : Width ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25}
Restriction : (3 ≤ Width) ∧ odd(Width)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the mask in which the standard deviation is calculated.
Default Value : 11
List of values : Height ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25}
Restriction : (3 ≤ Height) ∧ odd(Height)
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
deviation_image(Image,&Deviation,9,9);
disp_image(Deviation,WindowHandle);

Result
deviation_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
deviation_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
entropy_image, entropy_gray
See also
convol_image, texture_laws, intensity
Module
Foundation


entropy_image ( const Hobject Image, Hobject *ImageEntropy,


Hlong Width, Hlong Height )

T_entropy_image ( const Hobject Image, Hobject *ImageEntropy,


const Htuple Width, const Htuple Height )

Calculate the entropy of gray values within a rectangular window.


entropy_image calculates the entropy of gray values in the image Image within a rectangular mask of size
(Height, Width). The resulting image is returned in ImageEntropy, in which the entropy is multiplied by
32. If the parameters Height and Width are even, they are changed to the next larger odd value. At the image
borders the gray values are mirrored.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte


Image for which the entropy is to be calculated.
. ImageEntropy (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Entropy image.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the mask in which the entropy is calculated.
Default Value : 9
List of values : Width ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25}
Suggested values : Width ∈ {3, 5, 7, 9, 11, 13, 15}
Restriction : (3 ≤ Width) ∧ odd(Width)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the mask in which the entropy is calculated.
Default Value : 9
List of values : Height ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25}
Suggested values : Height ∈ {3, 5, 7, 9, 11, 13, 15}
Restriction : (3 ≤ Height) ∧ odd(Height)
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
entropy_image(Image,&Entropy1,9,9);
disp_image(Entropy1,WindowHandle);

Result
entropy_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
entropy_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
entropy_gray
See also
energy_gabor, entropy_gray
Module
Foundation


texture_laws ( const Hobject Image, Hobject *ImageTexture,


const char *FilterTypes, Hlong Shift, Hlong FilterSize )

T_texture_laws ( const Hobject Image, Hobject *ImageTexture,


const Htuple FilterTypes, const Htuple Shift,
const Htuple FilterSize )

Filter an image using a Laws texture filter.


texture_laws applies a texture transformation (according to Laws) to an image. This is done by convolving
the input image with a special filter mask. The filters are:
9 different 3x3 matrices obtainable from the following three vectors:

l = [ 1 2 1]
e = [−1 0 1]
s = [−1 2 −1]

25 different 5x5 matrices obtainable from the following five vectors:

l = [ 1 4 6 4 1]
e = [−1 −2 0 2 1]
s = [−1 0 2 0 −1]
r = [ 1 −4 6 −4 1]
w = [−1 2 0 −2 1]

36 different 7x7 matrices obtainable from the following six vectors:

l = [ 1 6 15 20 15 6 1]
e = [−1 −4 −5 0 5 4 1]
s = [−1 −2 1 4 1 −2 −1]
r = [−1 −2 −1 4 −1 −2 −1]
w = [−1 0 3 0 −3 0 1]
o = [−1 6 −15 20 −15 6 −1]

For most of the filters the resulting gray values must be modified by a Shift. This makes the different textures in
the output image more comparable to each other, provided suitable filters are used.
The name of the filter is composed of the letters of the two vectors used, where the first letter denotes convolution
in the column direction while the second letter denotes convolution in the row direction.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Images to which the texture transformation is to be applied.
. ImageTexture (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Texture images.
. FilterTypes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired filters (name or number).
Default Value : "el"
Suggested values : FilterTypes ∈ {"ll", "le", "ls", "lr", "lw", "lo", "el", "ee", "es", "er", "ew", "eo", "sl",
"se", "ss", "sr", "sw", "so", "rl", "re", "rs", "rr", "rw", "ro", "wl", "we", "ws", "wr", "ww", "wo", "ol", "oe",
"os", "or", "ow", "oo"}
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift to reduce the gray value dynamics.
Default Value : 2
List of values : Shift ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
. FilterSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the filter kernel.
Default Value : 5
List of values : FilterSize ∈ {3, 5, 7}


Example

/* Two-dimensional pixel classification */


read_image(&Image,"combine");
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
disp_image(Image,WindowHandle);
texture_laws(Image,&Texture1,"es",2,5);
texture_laws(Image,&Texture2,"le",2,5);
mean_image(Texture1,&H1,51,51);
mean_image(Texture2,&H2,51,51);
fwrite_string(FileId,"mark desired image section");
fnew_line(FileId);
set_color(WindowHandle,"green");
draw_region(&Region,WindowHandle);
reduce_domain(H1,Region,&Foreground1);
reduce_domain(H2,Region,&Foreground2);
histo_2dim(Region,Foreground1,Foreground2,&Histo);
threshold(Histo,&Characteristic_area,1.0,1000000.0);
set_color(WindowHandle,"blue");
disp_region(Characteristic_area,WindowHandle);
class_2dim_sup(H1,H2,Characteristic_area,&Seg);
set_color(WindowHandle,"red");
disp_region(Seg,WindowHandle);

Result
texture_laws returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
texture_laws is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
mean_image, binomial_filter, gauss_image, median_image, histo_2dim,
learn_ndim_norm, learn_ndim_box, threshold
Alternatives
convol_image
See also
class_2dim_sup, class_ndim_norm
References
Laws, K.I. “Textured image segmentation”; Ph.D. dissertation, Dept. of Engineering, Univ. Southern California,
1980
Module
Foundation

3.17 Wiener-Filter
gen_psf_defocus ( Hobject *Psf, Hlong PSFwidth, Hlong PSFheight,
double Blurring )

T_gen_psf_defocus ( Hobject *Psf, const Htuple PSFwidth,


const Htuple PSFheight, const Htuple Blurring )

Generate an impulse response of a uniform out-of-focus blurring.


gen_psf_defocus generates an impulse response (spatial domain) of a uniform out-of-focus blurring and
writes it into an image of HALCON image type ’real’. Blurring specifies the extent of blurring by defining
the “blur radius” (out-of-focus blurring maps each image pixel onto a small circle with a radius of Blurring,
specified in number of pixels). If specified less than zero, the absolute value of Blurring is used. The


result image of gen_psf_defocus encloses a spatial domain impulse response of the specified blurring. Its
representation presumes the origin in the upper left corner. This results in the following disposition of an N×M
sized image:
• first rectangle (“upper left”): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
  - conforms to the fourth quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 0..N/2 and y = 0.. − M/2
• second rectangle (“upper right”): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
  - conforms to the third quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = −1.. − M/2
• third rectangle (“lower left”): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
  - conforms to the first quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 1..N/2 and y = M/2..0
• fourth rectangle (“lower right”): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
  - conforms to the second quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = M/2..1
This representation conforms to that of the impulse response parameter of the HALCON operator
wiener_filter. So one can use gen_psf_defocus to generate an impulse response for Wiener filtering
(see the sketch below).
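A minimal sketch of this use (all values are assumptions; the PSF size must match the image size, and Defocused is assumed to be a corrupted 256 x 256 input image):

/* PSF of an out-of-focus blur with a blur radius of 10 pixels */
gen_psf_defocus(&Psf,256,256,10.0);
/* smoothed version of the corrupted image, needed by wiener_filter */
median_image(Defocused,&NoiseFiltered,"circle",2,"mirrored");
wiener_filter(Defocused,Psf,NoiseFiltered,&Restored);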
Parameter
. Psf (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Impulse response of uniform out-of-focus blurring.
. PSFwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of result image.
Default Value : 256
Suggested values : PSFwidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFwidth
. PSFheight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of result image.
Default Value : 256
Suggested values : PSFheight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFheight
. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Degree of Blurring.
Default Value : 5.0
Suggested values : Blurring ∈ {1.0, 5.0, 10.0, 15.0, 18.0}
Result
gen_psf_defocus returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gen_psf_defocus is reentrant and processed without parallelization.
Possible Predecessors
simulate_motion, gen_psf_motion
Possible Successors
simulate_defocus, wiener_filter, wiener_filter_ni
See also
simulate_defocus, gen_psf_motion, simulate_motion, wiener_filter,
wiener_filter_ni
References
Reginald L. Lagendijk, Jan Biemond: Iterative Identification and Restoration of Images, Kluwer Academic
Publishers, Boston/Dordrecht/London, 1991
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation


gen_psf_motion ( Hobject *Psf, Hlong PSFwidth, Hlong PSFheight,


double Blurring, Hlong Angle, Hlong Type )

T_gen_psf_motion ( Hobject *Psf, const Htuple PSFwidth,


const Htuple PSFheight, const Htuple Blurring, const Htuple Angle,
const Htuple Type )

Generate an impulse response of a (linear) motion blurring.


gen_psf_motion generates an impulse response (spatial domain) of a blurring caused by a relative motion
between the object and the camera during exposure. The generated impulse response is written into an image
of HALCON image type ’real’. PSFwidth and PSFheight define the width and height of the output image.
The blurring motion moves along a straight line. Angle fixes its direction by specifying the angle between the
motion direction and the x-axis (anticlockwise, measured in degrees). To specify different velocity behavior, five
PSF prototypes can be generated. Type switches between the following prototypes:

1. reverse ramp (crude model for acceleration)


2. reverse trapezoid (crude model for high acceleration)
3. square pulse (exact model for constant velocity), this is default
4. forward trapezoid (crude model for deceleration)
5. forward ramp (crude model for high deceleration)

The blurring affects all parts of the image uniformly. Blurring controls the extent of blurring: it specifies the
number of pixels (lying one after another) that are affected by the blurring. This number is determined by the
velocity of the motion and the exposure time. If Blurring is a negative number, an adequate blurring in the
reverse direction is simulated. If Angle is a negative number, it is interpreted clockwise. If Angle exceeds 360
or falls below -360, it is transformed modulo 360 to a corresponding value in [0..360] resp. [−360..0]. The result
image of gen_psf_motion encloses a spatial domain impulse response of the specified blurring. Its representation
presumes the origin in the upper left corner. This results in the following disposition of an N×M sized image:
• first rectangle (“upper left”): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
  - conforms to the fourth quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 0..N/2 and y = 0.. − M/2
• second rectangle (“upper right”): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
  - conforms to the third quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = −1.. − M/2
• third rectangle (“lower left”): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
  - conforms to the first quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 1..N/2 and y = M/2..0
• fourth rectangle (“lower right”): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
  - conforms to the second quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = M/2..1
This representation conforms to that of the impulse response parameter of the HALCON operator
wiener_filter. So one can use gen_psf_motion to generate an impulse response for Wiener filtering
a motion-blurred image (see the sketch below).
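A small sketch of the PSF generation (all values are assumptions for illustration):

/* PSF for 20 pixels of constant-velocity blur along the x-axis (Type 3) */
gen_psf_motion(&Psf,256,256,20.0,0,3);
/* the same blur in the opposite direction (negative Blurring) */
gen_psf_motion(&PsfReverse,256,256,-20.0,0,3);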
Parameter
. Psf (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Impulse response of motion-blur.
. PSFwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of impulse response image.
Default Value : 256
Suggested values : PSFwidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFwidth
. PSFheight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of impulse response image.
Default Value : 256
Suggested values : PSFheight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFheight


. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double


Degree of motion-blur.
Default Value : 20.0
Suggested values : Blurring ∈ {5.0, 10.0, 20.0, 30.0, 40.0}
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Angle between direction of motion and x-axis (anticlockwise).
Default Value : 0
Suggested values : Angle ∈ {0, 45, 90, 180, 270}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
PSF prototype, i.e., type of motion.
Default Value : 3
List of values : Type ∈ {1, 2, 3, 4, 5}
Result
gen_psf_motion returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gen_psf_motion is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_defocus, gen_psf_defocus
Possible Successors
simulate_motion, wiener_filter, wiener_filter_ni
See also
simulate_motion, simulate_defocus, gen_psf_defocus, wiener_filter,
wiener_filter_ni
References
Anil K. Jain: Fundamentals of Digital Image Processing, Prentice-Hall International Inc., Englewood Cliffs, New
Jersey, 1989
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Kha-Chye Tan, Hock Lim, B. T. G. Tan: “Restoration of Real-World Motion-Blurred Images”; pages 291-299 in:
CVGIP Graphical Models and Image Processing, Vol. 53, No. 3, May 1991
Module
Foundation

simulate_defocus ( const Hobject Image, Hobject *DefocusedImage,


double Blurring )

T_simulate_defocus ( const Hobject Image, Hobject *DefocusedImage,


const Htuple Blurring )

Simulate a uniform out-of-focus blurring of an image.


simulate_defocus simulates an out-of-focus blurring of an image. All parts of the image are blurred uniformly.
Blurring specifies the extent of blurring by defining the “blur radius” (out-of-focus blurring maps each image
pixel onto a small circle with a radius of Blurring, specified in number of pixels). If specified less than zero,
the absolute value of Blurring is used. The simulation of the blurring is done by a convolution of the image
with a blurring-specific impulse response. The convolution is realized by multiplication in the Fourier domain.
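A minimal usage sketch (the blur radius is chosen only for illustration; window handling as in the other examples):

read_image(&Image,"fabrik");
/* blur every pixel over a circle with a radius of 5 pixels */
simulate_defocus(Image,&Defocused,5.0);
disp_image(Defocused,WindowHandle);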
Parameter
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to blur.
. DefocusedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Blurred image.
. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Degree of blurring.
Default Value : 5.0
Suggested values : Blurring ∈ {1.0, 5.0, 10.0, 15.0, 18.0}


Result
simulate_defocus returns H_MSG_TRUE if all parameters are correct. If the input is empty
simulate_defocus returns with an error message.
Parallelization Information
simulate_defocus is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_defocus, simulate_motion, gen_psf_motion
Possible Successors
wiener_filter, wiener_filter_ni
See also
gen_psf_defocus, simulate_motion, gen_psf_motion
References
Reginald L. Lagendijk, Jan Biemond: Iterative Identification and Restoration of Images, Kluwer Academic
Publishers, Boston/Dordrecht/London, 1991
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation

simulate_motion ( const Hobject Image, Hobject *MovedImage,


double Blurring, Hlong Angle, Hlong Type )

T_simulate_motion ( const Hobject Image, Hobject *MovedImage,


const Htuple Blurring, const Htuple Angle, const Htuple Type )

Simulation of (linear) motion blur.


simulate_motion simulates a blurring caused by a relative motion between the object and the camera during
exposure. The simulated motion moves along a straight line. Angle fixes its direction by specifying the angle
between the motion direction and the x-axis (anticlockwise, measured in degrees). The simulation is done by a
convolution of the image with a blurring-specific impulse response. The convolution is realized by multiplication
in the Fourier domain. simulate_motion offers five prototypes of impulse responses conforming to different
acceleration behaviors. Type allows choosing one of the following PSF prototypes:

1. reverse ramp (crude model for acceleration)


2. reverse trapezoid (crude model for high acceleration)
3. square pulse (exact model for constant velocity), this is default
4. forward trapezoid (crude model for deceleration)
5. forward ramp (crude model for high deceleration)

The simulated blurring affects all parts of the image uniformly. Blurring controls the extent of blurring: it
specifies the number of pixels (lying one after another) that are affected by the blurring. This number is determined
by the velocity of the motion and the exposure time. If Blurring is a negative number, an adequate blurring in the
reverse direction is simulated. If Angle is a negative number, it is interpreted clockwise. If Angle exceeds 360 or
falls below -360, it is transformed modulo 360 to a corresponding value in [0..360] resp. [−360..0].
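A minimal usage sketch (all values are chosen only for illustration; window handling as in the other examples):

read_image(&Image,"fabrik");
/* 20 pixels of constant-velocity blur (Type 3) at 45 degrees to the x-axis */
simulate_motion(Image,&Moved,20.0,45,3);
disp_image(Moved,WindowHandle);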
Parameter

. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
image to be blurred.
. MovedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
motion blurred image.
. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
extent of blurring.
Default Value : 20.0
Suggested values : Blurring ∈ {5.0, 10.0, 20.0, 30.0, 40.0}


. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Angle between direction of motion and x-axis (anticlockwise).
Default Value : 0
Suggested values : Angle ∈ {0, 45, 90, 180, 270}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
PSF prototype, i.e., type of motion blur.
Default Value : 3
List of values : Type ∈ {1, 2, 3, 4, 5}
Result
simulate_motion returns H_MSG_TRUE if all parameters are correct. If the input is empty
simulate_motion returns with an error message.
Parallelization Information
simulate_motion is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion
Possible Successors
simulate_defocus, wiener_filter, wiener_filter_ni
See also
gen_psf_motion, simulate_defocus, gen_psf_defocus
References
Anil K. Jain: Fundamentals of Digital Image Processing, Prentice-Hall International Inc., Englewood Cliffs, New
Jersey, 1989
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Kha-Chye Tan, Hock Lim, B. T. G. Tan: “Restoration of Real-World Motion-Blurred Images”; pages 291-299 in:
CVGIP Graphical Models and Image Processing, Vol. 53, No. 3, May 1991
Module
Foundation

wiener_filter ( const Hobject Image, const Hobject Psf,


const Hobject FilteredImage, Hobject *RestoredImage )

T_wiener_filter ( const Hobject Image, const Hobject Psf,


const Hobject FilteredImage, Hobject *RestoredImage )

Image restoration by Wiener filtering.


wiener_filter produces an estimate of the original image (= image without noise and blurring) by minimizing
the mean square error between estimated and original image. wiener_filter can be used to restore images
corrupted by noise and/or blurring (e.g., motion blur, atmospheric turbulence, or out-of-focus blur). Method and
realization of this restoration technique are based on the following model: The corrupted image is interpreted as the
output of a (disturbed) linear system. The functionality of a linear system is determined by its specific impulse response.
So the convolution of original image and impulse response results in the corrupted image. The specific impulse
response describes the image acquisition and the degradations that occurred. In the presence of additive noise an additional
noise term must be considered. So the corrupted image can be modeled as the result of
[convolution(impulse_response, original_image)] + noise_term
The noise term encloses two different terms describing image-dependent and image-independent noise. According
to this model, two terms must be known for restoration by Wiener filtering:

1. degradation-specific impulse response


2. noise term

So wiener_filter needs a smoothed version of the input image to estimate the power spectral density of
noise and original image. One can use one of the smoothing HALCON filters (e.g., eliminate_min_max) to
get this version. wiener_filter further needs the impulse response that describes the specific degradation.


This impulse response (represented in spatial domain) must fit into an image of HALCON image type ’real’.
There exist two HALCON operators for the generation of an impulse response for motion blur and out-of-focus blur
(see gen_psf_motion, gen_psf_defocus). The representation of the impulse response presumes the origin in
the upper left corner. This results in the following disposition of an N×M sized image:

• first rectangle (“upper left”): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
  - conforms to the fourth quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 0..N/2 and y = 0.. − M/2
• second rectangle (“upper right”): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
  - conforms to the third quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = −1.. − M/2
• third rectangle (“lower left”): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
  - conforms to the first quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = 1..N/2 and y = M/2..0
• fourth rectangle (“lower right”): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
  - conforms to the second quadrant of the Cartesian coordinate system, encloses values of the impulse response
    at position x = −N/2.. − 1 and y = M/2..1

wiener_filter works as follows:

• estimation of the power spectrum density of the original image by using the smoothed version of the corrupted
image,
• estimation of the power spectrum density of each pixel by subtracting smoothed version from unsmoothed
version,
• building the Wiener filter kernel with the quotient of power spectrum densities of noise and original image
and with the impulse response,
• processing the convolution of image and Wiener filter frequency response.

The result image is of image type ’real’.


Attention
Psf must be of image type ’real’ and conform to Image and FilteredImage in image width and height.
Parameter

. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Corrupted image.
. Psf (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
impulse response (PSF) of degradation (in spatial domain).
. FilteredImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real
Smoothed version of corrupted image.
. RestoredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Restored image.
Example

/* Restoration of a noisy image (size=256x256), that was blurred by motion*/


Hobject object;
Hobject restored;
Hobject psf;
Hobject noisefiltered;
/* 1. Generate a Point-Spread-Function for a motion-blur with */
/* parameter a=10 and direction along the x-axis */
gen_psf_motion(&psf,256,256,10,0,3);
/* 2. Noisefiltering of the image */
median_image(object,&noisefiltered,"circle",2,"mirrored");
/* 3. Wiener-filtering */
wiener_filter(object,psf,noisefiltered,&restored);


Result
wiener_filter returns H_MSG_TRUE if all parameters are correct. If the input is empty wiener_filter
returns with an error message.
Parallelization Information
wiener_filter is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter_ni
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. L"uckenhaus:"‘Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse"’; Diplomarbeit; Tech-
nische Universit"at M"unchen, Institut f"ur Informatik; Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing, Computer Science and Aplied Mathematics, Aca-
demic Press New York/San Francisco/London 1982
Module
Foundation

wiener_filter_ni ( const Hobject Image, const Hobject Psf,


const Hobject NoiseRegion, Hobject *RestoredImage, Hlong MaskWidth,
Hlong MaskHeight )

T_wiener_filter_ni ( const Hobject Image, const Hobject Psf,


const Hobject NoiseRegion, Hobject *RestoredImage,
const Htuple MaskWidth, const Htuple MaskHeight )

Image restoration by Wiener filtering.


wiener_filter_ni (ni = noise-estimation integrated) produces an estimate of the original image (= image
without noise and blurring) by minimizing the mean square error between estimated and original image.
wiener_filter_ni can be used to restore images corrupted by noise and/or blurring (e.g., motion blur, atmospheric
turbulence, or out-of-focus blur). Method and realization of this restoration technique are based on the following model:
The corrupted image is interpreted as the output of a (disturbed) linear system. The functionality of a linear system is
determined by its specific impulse response. So the convolution of original image and impulse response results in
the corrupted image. The specific impulse response describes the image acquisition and the degradations that occurred. In
the presence of additive noise an additional noise term must be considered. So the corrupted image can be modeled
as the result of
[convolution(impulse_response, original_image)] + noise_term
The noise term encloses two different terms describing image-dependent and image-independent noise. According
to this model, two terms must be known for restoration by Wiener filtering:

1. degradation-specific impulse response


2. noise term

wiener_filter_ni estimates the noise term as follows: The user defines a region within the image that is suitable for noise estimation (as homogeneous as possible, since edges and textures aggravate the noise estimation). After smoothing within this region with an (unweighted) median filter and subtracting the smoothed version from the unsmoothed one, the average noise amplitude of the region is computed within wiener_filter_ni. Together with the average gray value of the region, this amplitude allows estimating the quotient of the power spectral densities of noise and original image (in contrast to wiener_filter, wiener_filter_ni assumes this quotient to be roughly constant within the whole image). The user can define the width and height of the rectangular (median) filter mask to influence the noise estimation (MaskWidth, MaskHeight). Furthermore, wiener_filter_ni needs the impulse response that describes the specific degradation. This impulse response (represented in the spatial domain) must fit into an image of HALCON image type ’real’. Two HALCON operators exist for generating an impulse response for motion blur and out-of-focus blur (see gen_psf_motion, gen_psf_defocus). The representation of the impulse response presumes the origin in the upper left corner. This results in the following layout of an N × M sized image:
• first rectangle (“upper left”): image coordinates xb = 0..(N/2)−1, yb = 0..(M/2)−1
  - corresponds to the fourth quadrant of the Cartesian coordinate system; contains the values of the impulse response at positions x = 0..N/2 and y = 0..−M/2
• second rectangle (“upper right”): image coordinates xb = N/2..N−1, yb = 0..(M/2)−1
  - corresponds to the third quadrant of the Cartesian coordinate system; contains the values of the impulse response at positions x = −N/2..−1 and y = −1..−M/2
• third rectangle (“lower left”): image coordinates xb = 0..(N/2)−1, yb = M/2..M−1
  - corresponds to the first quadrant of the Cartesian coordinate system; contains the values of the impulse response at positions x = 1..N/2 and y = M/2..0
• fourth rectangle (“lower right”): image coordinates xb = N/2..N−1, yb = M/2..M−1
  - corresponds to the second quadrant of the Cartesian coordinate system; contains the values of the impulse response at positions x = −N/2..−1 and y = M/2..1
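For illustration, a small PSF that respects this wrap-around origin convention could, for instance, be generated pixel by pixel. The following is a minimal sketch, not part of the original manual; it assumes a 256 × 256 ’real’ image and the simple-mode C signatures of gen_image_const and set_grayval:

/* 3x3 averaging PSF with its center at the origin (upper left corner);     */
/* the neighbors of the origin wrap around to row/column 255                */
Hobject psf;
Hlong   r,c;
Hlong   rows[3] = {0,1,255}, cols[3] = {0,1,255};
gen_image_const(&psf,"real",256,256);        /* all pixels initialized to 0 */
for (r=0; r<3; r++)
  for (c=0; c<3; c++)
    set_grayval(psf,rows[r],cols[c],1.0/9.0);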
wiener_filter_ni works as follows:

• estimation of the quotient of the power spectral densities of noise and original image,
• construction of the Wiener filter kernel from this quotient and from the impulse response,
• computation of the convolution of the image with the Wiener filter frequency response.

The resulting image is of type ’real’.


Attention
Psf must be of image type ’real’ and conform to Image in width and height. The region used for noise estimation (NoiseRegion) must lie completely within the image. If MaskWidth or MaskHeight is an even number, it is replaced by the next higher odd number (this allows the unique extraction of the center of the filter mask). The width/height of the mask must neither exceed the image width/height nor be less than zero.
Parameter
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Corrupted image.
. Psf (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
Impulse response (PSF) of the degradation (in the spatial domain).
. NoiseRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region for noise estimation.
. RestoredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Restored image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9}
Typical range of values : 0 ≤ MaskWidth ≤ width(Image)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9}
Typical range of values : 0 ≤ MaskHeight ≤ height(Image)
Example

/* Restoration of a noisy image (size 256x256) that was blurred by motion */


Hobject object;
Hobject restored;
Hobject psf;
Hobject noise_region;
/* 1. Generate a Point-Spread-Function for a motion-blur with */
/*    parameter a=10 and direction of the x-axis */
gen_psf_motion(&psf,256,256,10,0,3);
/* 2. Segmentation of a region for the noise-estimation */
open_window(0,0,256,256,0,"visible","",&WindowHandle);
disp_image(object,WindowHandle);
draw_region(&noise_region,WindowHandle);
/* 3. Wiener-filtering */
wiener_filter_ni(object,psf,noise_region,&restored,3,3);
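If no user interaction is desired, the noise region can also be defined programmatically instead of being drawn; a minimal sketch, not part of the original example, assuming a roughly homogeneous patch near the upper left corner of the image:

/* use a fixed rectangular patch as noise region */
gen_rectangle1(&noise_region,10,10,60,60);
wiener_filter_ni(object,psf,noise_region,&restored,3,3);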

Result
wiener_filter_ni returns H_MSG_TRUE if all parameters are correct. If the input is empty
wiener_filter_ni returns with an error message.
Parallelization Information
wiener_filter_ni is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing, Computer Science and Applied Mathematics, Academic Press, New York/San Francisco/London, 1982
Module
Foundation



Chapter 4

Graphics

4.1 Drawing

drag_region1 ( const Hobject SourceRegion, Hobject *DestinationRegion,


Hlong WindowHandle )

T_drag_region1 ( const Hobject SourceRegion,


Hobject *DestinationRegion, const Htuple WindowHandle )

Interactive moving of a region.


drag_region1 is used to move a region on the display with the mouse. After calling drag_region1, the region becomes visible as soon as the left mouse button is pressed; only the region’s edges are displayed. While the operator is active, the representation mode ’not’ (see set_draw) is used. During the movement the cursor resides in the region’s barycenter. If you move the mouse while keeping the left mouse button pressed, the displayed region follows this movement (with a slight delay). Pressing the right mouse button terminates drag_region1, and the displayed region disappears from the display. The output is a region that corresponds to the last position on the display. You may also pass several regions at once. The operator affine_trans_image can be used to move the gray values accordingly.
Attention
Gray values of the regions are not moved. When the input region is moved, it is therefore not guaranteed that the gray values of the output regions are filled reasonably. This can happen if the gray values of the input regions do not cover the whole image.
Parameter
. SourceRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to move.
. DestinationRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Moved Regions.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

draw_region(&Obj,WindowHandle) ;
drag_region1(Obj,&New,WindowHandle) ;
disp_region(New,WindowHandle) ;
position(Obj,_,Row1,Column1,_,_,_,_) ;
position(New,_,Row2,Column2,_,_,_,_) ;
disp_arrow(WindowHandle,Row1,Column1,Row2,Column2,1.0) ;
fwrite_string("Transformation: ") ;
fwrite_string(Row2-Row1) ;
fwrite_string(", ") ;
fwrite_string(Column2-Column1) ;
fnew_line() ;


Result
drag_region1 returns H_MSG_TRUE, if a region is entered, the window is valid and the needed drawing mode
(see set_insert) is available. If necessary, an exception handling is raised. You may determine the behavior
after an empty input with set_system(’no_object_result’,<Result>).
Parallelization Information
drag_region1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
get_mposition, move_region
See also
set_insert, set_draw, affine_trans_image
Module
Foundation

drag_region2 ( const Hobject SourceRegion, Hobject *DestinationRegion,


Hlong WindowHandle, Hlong Row, Hlong Column )

T_drag_region2 ( const Hobject SourceRegion,


Hobject *DestinationRegion, const Htuple WindowHandle,
const Htuple Row, const Htuple Column )

Interactive movement of a region with fixpoint specification.


You use drag_region2 to move a region on the display with the mouse. It corresponds to drag_region1, with the difference that the position of the mouse cursor within the region (the reference point) can be specified.
Attention
Gray values of the regions are not moved. When the input region is moved, it is therefore not guaranteed that the gray values of the output regions are filled reasonably. This can happen if the gray values of the input regions do not cover the whole image.
Parameter
. SourceRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to move.
. DestinationRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Moved regions.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row index of the reference point.
Default Value : 100
Suggested values : Row ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Row ≤ 1024
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of the reference point.
Default Value : 100
Suggested values : Column ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Column ≤ 1024
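Example

The following minimal sketch is not part of the original manual; Obj, New, and WindowHandle are assumed to be set up as in the drag_region1 example.

/* move a drawn region, gripping it at the reference point (100,100) */
draw_region(&Obj,WindowHandle) ;
drag_region2(Obj,&New,WindowHandle,100,100) ;
disp_region(New,WindowHandle) ;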
Result
drag_region2 returns H_MSG_TRUE, if a region is entered, the window is valid and the needed drawing mode
(see set_insert) is available. If necessary, an exception handling is raised. You may determine the behavior
after an empty input with set_system(’no_object_result’,<Result>).
Parallelization Information
drag_region2 is reentrant, local, and processed without parallelization.


Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert,
affine_trans_image
Alternatives
get_mposition, move_region, drag_region1, drag_region3
See also
set_insert, set_draw, affine_trans_image
Module
Foundation

drag_region3 ( const Hobject SourceRegion, const Hobject MaskRegion,


Hobject *DestinationRegion, Hlong WindowHandle, Hlong Row,
Hlong Column )

T_drag_region3 ( const Hobject SourceRegion, const Hobject MaskRegion,


Hobject *DestinationRegion, const Htuple WindowHandle,
const Htuple Row, const Htuple Column )

Interactive movement of a region with restriction of positions.


You use drag_region3 to move a region on the display with the mouse. It corresponds to drag_region2, with the enhancement that the positions that can be entered with the mouse are restricted to MaskRegion. If you move the mouse outside of this area, the region is displayed at the point inside MaskRegion with the smallest distance to the mouse position.
Attention
The region’s gray values are not moved. When the input region is moved, it is therefore not guaranteed that the gray values of the output regions are filled reasonably. This can happen if the gray values of the input regions do not cover the whole image.
Parameter
. SourceRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to move.
. MaskRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Points to which the region may be moved.
. DestinationRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Moved regions.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row index of the reference point.
Default Value : 100
Suggested values : Row ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Row ≤ 1024
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of the reference point.
Default Value : 100
Suggested values : Column ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Column ≤ 1024
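Example

The following minimal sketch is not part of the original manual; it restricts the movement of a drawn region to the upper left quarter of a 512x512 image, with Obj, New, and WindowHandle set up as in the drag_region1 example.

/* allow movement only within the upper left quarter of the image */
gen_rectangle1(&Mask,0,0,255,255) ;
draw_region(&Obj,WindowHandle) ;
drag_region3(Obj,Mask,&New,WindowHandle,100,100) ;
disp_region(New,WindowHandle) ;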
Result
drag_region3 returns H_MSG_TRUE, if a region is entered, if the window is valid and the needed drawing
mode (see set_insert) is available. If necessary, an exception handling is raised. You may determine the
behavior after an empty input with set_system(’no_object_result’,<Result>).
Parallelization Information
drag_region3 is reentrant, local, and processed without parallelization.


Possible Predecessors
open_window, get_mposition
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert,
affine_trans_image
Alternatives
get_mposition, move_region, drag_region1, drag_region2
See also
set_insert, set_draw, affine_trans_image
Module
Foundation

draw_circle ( Hlong WindowHandle, double *Row, double *Column,


double *Radius )

T_draw_circle ( const Htuple WindowHandle, Htuple *Row, Htuple *Column,


Htuple *Radius )

Interactive drawing of a circle.


draw_circle produces the parameters for a circle created interactively by the user in the window.
To create a circle, press the mouse button at the location that is to be used as the center of the circle. While keeping the mouse button pressed, the length of the Radius can be modified by moving the mouse. After another mouse click in the center of the created circle you can move it. By clicking close to the circular arc, you can modify the Radius of the circle. Pressing the right mouse button terminates the procedure. After terminating the procedure, the circle is no longer visible in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y ; double *
Barycenter’s row index.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x ; double *
Barycenter’s column index.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius ; double *
Circle’s radius.
Example

read_image(&Image,"affe") ;
draw_circle(WindowHandle,&Row,&Column,&Radius) ;
gen_circle(&Circle,Row,Column,Radius) ;
reduce_domain(Image,Circle,&GrayCircle) ;
disp_image(GrayCircle,WindowHandle) ;

Result
draw_circle returns H_MSG_TRUE if the window is valid and the needed drawing mode (see set_insert)
is available. If necessary, an exception handling is raised.
Parallelization Information
draw_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle_mod, draw_ellipse, draw_region


See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation

draw_circle_mod ( Hlong WindowHandle, double RowIn, double ColumnIn,


double RadiusIn, double *Row, double *Column, double *Radius )

T_draw_circle_mod ( const Htuple WindowHandle, const Htuple RowIn,


const Htuple ColumnIn, const Htuple RadiusIn, Htuple *Row,
Htuple *Column, Htuple *Radius )

Interactive drawing of a circle.


draw_circle_mod produces the parameters for a circle created interactively by the user in the window.
To create the circle, the coordinates RowIn and ColumnIn of the center and the radius RadiusIn of an initial circle are expected. After another mouse click in the center of the created circle you can move it. By clicking close to the circular arc, you can modify the Radius of the circle. Pressing the right mouse button terminates the procedure. After terminating the procedure, the circle is no longer visible in the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y ; double
Row index of the center.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x ; double
Column index of the center.
. RadiusIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius1 ; double
Radius of the circle.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y ; double *
Barycenter’s row index.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x ; double *
Barycenter’s column index.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius ; double *
Circle’s radius.
Example

read_image(&Image,"affe") ;
draw_circle_mod(WindowHandle,20,20,15,&Row,&Column,&Radius) ;
gen_circle(&Circle,Row,Column,Radius) ;
reduce_domain(Image,Circle,&GrayCircle) ;
disp_image(GrayCircle,WindowHandle) ;

Result
draw_circle_mod returns H_MSG_TRUE if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_circle_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle, draw_ellipse, draw_region


See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation

draw_ellipse ( Hlong WindowHandle, double *Row, double *Column,


double *Phi, double *Radius1, double *Radius2 )

T_draw_ellipse ( const Htuple WindowHandle, Htuple *Row,


Htuple *Column, Htuple *Phi, Htuple *Radius1, Htuple *Radius2 )

Interactive drawing of an ellipse.


draw_ellipse returns the parameters of an arbitrarily oriented ellipse that has been created interactively by the user in the window.
The created ellipse is described by its center, its two half axes, and the angle between the first half axis and the horizontal coordinate axis.
To create an ellipse, determine the center of the ellipse with the left mouse button. Keeping the button pressed determines the length (Radius1) and the orientation (Phi) of the first half axis. In doing so, a temporary default length for the second half axis is assumed, which may be modified afterwards on demand. After another mouse click in the center of the created ellipse you can move it. A mouse click close to a vertex “grips” it to modify the length of the corresponding half axis. You may modify the orientation only if a vertex of the first half axis is gripped.
Pressing the right mouse button terminates the procedure. After terminating the procedure, the ellipse is no longer visible in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y ; double *
Row index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x ; double *
Column index of the center.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad ; double *
Orientation of the first half axis in radians.
. Radius1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1 ; double *
First half axis.
. Radius2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2 ; double *
Second half axis.
Example

read_image(&Image,"affe") ;
draw_ellipse(WindowHandle,&Row,&Column,&Phi,&Radius1,&Radius2) ;
gen_ellipse(&Ellipse,Row,Column,Phi,Radius1,Radius2) ;
reduce_domain(Image,Ellipse,&GrayEllipse) ;
sobel_amp(GrayEllipse,&Sobel,"sum_abs",3) ;
disp_image(Sobel,WindowHandle) ;

Result
draw_ellipse returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window


Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse_mod, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation

draw_ellipse_mod ( Hlong WindowHandle, double RowIn, double ColumnIn,


double PhiIn, double Radius1In, double Radius2In, double *Row,
double *Column, double *Phi, double *Radius1, double *Radius2 )

T_draw_ellipse_mod ( const Htuple WindowHandle, const Htuple RowIn,


const Htuple ColumnIn, const Htuple PhiIn, const Htuple Radius1In,
const Htuple Radius2In, Htuple *Row, Htuple *Column, Htuple *Phi,
Htuple *Radius1, Htuple *Radius2 )

Interactive drawing of an ellipse.


draw_ellipse_mod returns the parameters of an arbitrarily oriented ellipse that has been created interactively by the user in the window.
The created ellipse is described by its center, its two half axes, and the angle between the first half axis and the horizontal coordinate axis.
To create the ellipse, the parameters RowIn, ColumnIn, PhiIn, Radius1In, and Radius2In of an initial ellipse are expected. Keeping the button pressed determines the length (Radius1) and the orientation (Phi) of the first half axis. In doing so, a temporary default length for the second half axis is assumed, which may be modified afterwards on demand. After another mouse click in the center of the created ellipse you can move it. A mouse click close to a vertex “grips” it to modify the length of the corresponding half axis. You may modify the orientation only if a vertex of the first half axis is gripped.
Pressing the right mouse button terminates the procedure. After terminating the procedure, the ellipse is no longer visible in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y ; double
Row index of the barycenter.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x ; double
Column index of the barycenter.
. PhiIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad ; double
Orientation of the bigger half axis in radians.
. Radius1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1 ; double
Bigger half axis.
. Radius2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1 ; double
Smaller half axis.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y ; double *
Row index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x ; double *
Column index of the center.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad ; double *
Orientation of the first half axis in radians.
. Radius1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1 ; double *
First half axis.
. Radius2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2 ; double *
Second half axis.


Example

read_image(&Image,"affe") ;
draw_ellipse_mod(WindowHandle,RowIn,ColumnIn,PhiIn,Radius1In,Radius2In,
                 &Row,&Column,&Phi,&Radius1,&Radius2) ;
gen_ellipse(&Ellipse,Row,Column,Phi,Radius1,Radius2) ;
reduce_domain(Image,Ellipse,&GrayEllipse) ;
sobel_amp(GrayEllipse,&Sobel,"sum_abs",3) ;
disp_image(Sobel,WindowHandle) ;

Result
draw_ellipse_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_ellipse_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation

draw_line ( Hlong WindowHandle, double *Row1, double *Column1,


double *Row2, double *Column2 )

T_draw_line ( const Htuple WindowHandle, Htuple *Row1, Htuple *Column1,


Htuple *Row2, Htuple *Column2 )

Draw a line.
draw_line returns the parameters of a line that has been created interactively by the user in the window.
To create a line, press the left mouse button to determine the start point of the line. While keeping the button pressed you may “drag” the line in any direction. After another mouse click in the middle of the created line you can move it. If you click on one end point of the created line, you may move this point. Pressing the right mouse button terminates the procedure.
After terminating the procedure, the line is no longer visible in the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; double *
Row index of the first point of the line.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; double *
Column index of the first point of the line.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; double *
Row index of the second point of the line.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; double *
Column index of the second point of the line.
Example


get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_line(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line() ;

Result
draw_line returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see set_insert)
is available. If necessary, an exception handling is raised.
Parallelization Information
draw_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_line_mod, gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_line_mod ( Hlong WindowHandle, double Row1In, double Column1In,


double Row2In, double Column2In, double *Row1, double *Column1,
double *Row2, double *Column2 )

T_draw_line_mod ( const Htuple WindowHandle, const Htuple Row1In,


const Htuple Column1In, const Htuple Row2In, const Htuple Column2In,
Htuple *Row1, Htuple *Column1, Htuple *Row2, Htuple *Column2 )

Draw a line.
draw_line_mod returns the parameters of a line that has been created interactively by the user in the window.
To create the line, the coordinates of the start point (Row1In, Column1In) and of the end point (Row2In, Column2In) are expected. If you click on one end point of the created line, you may move this point. After another mouse click in the middle of the created line you can move it.
Pressing the right mouse button terminates the procedure.
After terminating the procedure, the line is no longer visible in the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Row1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; double
Row index of the first point of the line.


. Column1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; double


Column index of the first point of the line.
. Row2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; double
Row index of the second point of the line.
. Column2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; double
Column index of the second point of the line.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y ; double *
Row index of the first point of the line.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x ; double *
Column index of the first point of the line.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y ; double *
Row index of the second point of the line.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x ; double *
Column index of the second point of the line.
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_line_mod(WindowHandle,10,20,55,124,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line() ;

Result
draw_line_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_line_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_line, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2
Module
Foundation


T_draw_nurbs ( Hobject *ContOut, const Htuple WindowHandle,


const Htuple Rotate, const Htuple Move, const Htuple Scale,
const Htuple KeepRatio, const Htuple Degree, Htuple *Rows,
Htuple *Cols, Htuple *Weights )

Interactive drawing of a NURBS curve.


draw_nurbs returns the contour ContOut and control information (Rows, Cols, and Weights) of
a NURBS curve of degree Degree, which has been created interactively by the user in the win-
dow WindowHandle. For additional information concerning NURBS curves, see the documentation of
gen_contour_nurbs_xld. To use the control information Rows, Cols, and Weights in a subsequent
call to the operator gen_contour_nurbs_xld, the knot vector Knots should be set to ’auto’.
The NURBS curve is created and manipulated by means of its control polygon. By contrast, using the operator draw_nurbs_interp, it is possible to create a NURBS curve that interpolates points specified by the user.
Directly after calling draw_nurbs, you can add control points by clicking with the left mouse button in the
window at the desired positions.
When there are three control points or more, the first and the last point will be marked with an additional square.
By clicking on them you can close the curve or open it again. You delete the point appended last by pressing the
Ctrl key.
As soon as the number of control points exceeds Degree, the NURBS curve given by the specified control polygon
and weight vector is displayed in addition to the control polygon.
The control point which was handled last is surrounded by a circle representing its weight. By simply dragging the
circle you can increase or decrease the weight of this control point.
Existing control points can be moved by dragging them with the mouse. Further new points on the control polygon
(to refine the control polygon) can be inserted by a left click on the desired position on the control polygon.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the contour as a whole, but only if you set the parameters Rotate, Move, and Scale, respectively, to true.
Instead of the pick points and the control polygon, 3 symbols are displayed with the contour: a cross in the middle
and an arrow to the right if Rotate is set to true, and a double-headed arrow to the upper right if Scale is set to
true.
You can
• move the curve by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it, till the curve has the right
direction, and
• scale it by dragging the double arrow. To keep the ratio the parameter KeepRatio has to be set to true.
By pressing the Shift key again you can switch back to the edit mode. Pressing the right mouse button terminates
the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The control polygon and all
handles are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1
and their line style is fixed to a solid line.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Contour approximating the NURBS curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}


. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}
. Degree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
The degree p of the NURBS curve. Reasonable values are 3 to 25.
Default Value : 3
Suggested values : Degree ∈ {2, 3, 4, 5}
Restriction : Degree ≥ 2
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the control polygon.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Columns coordinates of the control polygon.
. Weights (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Weight vector.
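Example

The following minimal sketch is not part of the original manual. Since only the tuple version of the operator exists, the parameters are passed as tuples; the tuple-handling calls create_tuple, set_i, and set_s are assumed to follow the usual HALCON/C conventions, and WindowHandle is assumed to come from open_window.

Htuple  WinT,Rotate,Move,Scale,KeepRatio,Degree,Rows,Cols,Weights;
Hobject Cont;

create_tuple(&WinT,1);      set_i(WinT,WindowHandle,0);
create_tuple(&Rotate,1);    set_s(Rotate,"true",0);
create_tuple(&Move,1);      set_s(Move,"true",0);
create_tuple(&Scale,1);     set_s(Scale,"true",0);
create_tuple(&KeepRatio,1); set_s(KeepRatio,"true",0);
create_tuple(&Degree,1);    set_i(Degree,3,0);
/* draw a cubic NURBS curve interactively and display the resulting contour */
T_draw_nurbs(&Cont,WinT,Rotate,Move,Scale,KeepRatio,Degree,
             &Rows,&Cols,&Weights);
disp_obj(Cont,WindowHandle);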
Result
draw_nurbs returns H_MSG_TRUE, if the window is valid. If necessary, an exception handling is raised.
Parallelization Information
draw_nurbs is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_xld, draw_nurbs_interp
See also
draw_nurbs_mod, draw_nurbs_interp, gen_contour_nurbs_xld
Module
Foundation

T_draw_nurbs_interp ( Hobject *ContOut, const Htuple WindowHandle,


const Htuple Rotate, const Htuple Move, const Htuple Scale,
const Htuple KeepRatio, const Htuple Degree, Htuple *ControlRows,
Htuple *ControlCols, Htuple *Knots, Htuple *Rows, Htuple *Cols,
Htuple *Tangents )

Interactive drawing of a NURBS curve using interpolation.


draw_nurbs_interp returns the contour ContOut of a NURBS curve of degree Degree, which has been created interactively by the user in the window WindowHandle using interpolation. That means that the user specifies a set of points and the operator computes the parameters of a NURBS curve that passes through these points. By contrast, using the operator draw_nurbs, it is possible to create a NURBS curve by drawing its control polygon.
In addition to ContOut, the control information of the curve (ControlRows, ControlCols, and Knots), the
interpolation points (Rows, Cols) specified by the user and the tangents at the first and the last point (Tangents)
are returned. Tangents consists of four values. The first two values correspond to the y (row) and the x (column)
value of the tangent at the start of the curve and the second two values to the tangent at the end of the curve,
respectively.
The weight vector is not returned because it consists of equal entries. As a consequence, one can use ’auto’ as
weight vector if the control information is used in a subsequent call to the operator gen_contour_nurbs_xld.
For more information on NURBS see the documentation of gen_contour_nurbs_xld.


Directly after calling draw_nurbs_interp, you can add interpolation points by clicking with the left mouse
button in the window at the desired positions. If enough points are specified (at least Degree − 1), a NURBS
curve that goes through all specified points (in the order of their generation) is computed and displayed.
When there are three points or more, the first and the last point will be marked with an additional square. By
clicking on them you can close the curve or open it again. You delete the point appended last by pressing the Ctrl
key.
The tangents (i.e. the first derivative of the curve) of the first and the last point are displayed as lines. They can be
modified by dragging their ends using the mouse.
Existing points can be moved by dragging them with the mouse. Further new points on the curve can be inserted
by a left click on the desired position on the curve.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the curve as a whole, but only if you set the parameters Rotate, Move, and Scale, respectively, to true.
Instead of the pick points and the two tangents, 3 symbols are displayed with the curve: a cross in the middle and
an arrow to the right if Rotate is set to true, and a double-headed arrow to the upper right if Scale is set to true.
You can

• move the curve by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it, till the curve has the right
direction, and
• scale it by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be set to true.

By pressing the Shift key again you can switch back to the edit mode. Pressing the right mouse button terminates
the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The tangents and all handles
are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1 and their
line style is fixed to a solid line.
Attention
In contrast to draw_nurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree (3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will perform slightly better than even degrees.
Parameter

. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *


Contour of the curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}


. Degree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


The degree p of the NURBS curve. Reasonable values are 3 to 5.
Default Value : 3
Suggested values : Degree ∈ {2, 3, 4, 5}
Restriction : (Degree ≥ 2) ∧ (Degree ≤ 9)
. ControlRows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the control polygon.
. ControlCols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Column coordinates of the control polygon.
. Knots (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Knot vector.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the points specified by the user.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Column coordinates of the points specified by the user.
. Tangents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tangents specified by the user.
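Example

The following minimal sketch is not part of the original manual; the tuple setup (create_tuple, set_i, set_s) is analogous to the draw_nurbs example, and WindowHandle is assumed to come from open_window.

Htuple  WinT,Rotate,Move,Scale,KeepRatio,Degree;
Htuple  CtrlRows,CtrlCols,Knots,Rows,Cols,Tangents;
Hobject Cont;

create_tuple(&WinT,1);      set_i(WinT,WindowHandle,0);
create_tuple(&Rotate,1);    set_s(Rotate,"true",0);
create_tuple(&Move,1);      set_s(Move,"true",0);
create_tuple(&Scale,1);     set_s(Scale,"true",0);
create_tuple(&KeepRatio,1); set_s(KeepRatio,"true",0);
create_tuple(&Degree,1);    set_i(Degree,3,0);
/* interpolate a cubic NURBS curve through interactively specified points */
T_draw_nurbs_interp(&Cont,WinT,Rotate,Move,Scale,KeepRatio,Degree,
                    &CtrlRows,&CtrlCols,&Knots,&Rows,&Cols,&Tangents);
disp_obj(Cont,WindowHandle);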
Result
draw_nurbs_interp returns H_MSG_TRUE, if the window is valid. If necessary, an exception handling is
raised.
Parallelization Information
draw_nurbs_interp is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_xld, draw_nurbs
See also
draw_nurbs_interp_mod, draw_nurbs, gen_contour_nurbs_xld
Module
Foundation

T_draw_nurbs_interp_mod ( Hobject *ContOut,


const Htuple WindowHandle, const Htuple Rotate, const Htuple Move,
const Htuple Scale, const Htuple KeepRatio, const Htuple Edit,
const Htuple Degree, const Htuple RowsIn, const Htuple ColsIn,
const Htuple TangentsIn, Htuple *ControlRows, Htuple *ControlCols,
Htuple *Knots, Htuple *Rows, Htuple *Cols, Htuple *Tangents )

Interactive modification of a NURBS curve using interpolation.


draw_nurbs_interp_mod returns the contour ContOut of a NURBS curve of degree Degree, which has
been modified interactively by the user in the window WindowHandle.
In addition to ContOut the control information of the curve (ControlRows, ControlCols, and Knots), the
interpolation points (Rows, Cols) specified by the user and the tangents at the first and the last point (Tangents)
are returned. Tangents consists of four values. The first two values correspond to the y (row) and the x (column)
value of the tangent at the start of the curve and the second two values to the tangent at the end of the curve,
respectively.
The weight vector is not returned because it consists of equal entries. As a consequence one
can use ’auto’ as weight vector, if the control information is used in a subsequent call to the op-
erator gen_contour_nurbs_xld. For more information on NURBS see the documentation of
gen_contour_nurbs_xld.


The input curve is specified by the interpolation points (RowsIn, ColsIn), its degree Degree and the
tangents TangentsIn, such that draw_nurbs_interp_mod can be applied to the output data of
draw_nurbs_interp.
You can modify the curve in two ways: by editing the interpolation points, e.g., by inserting or moving points, or
by transforming the curve as a whole, e.g., by rotating, moving, or scaling it. Note that you can only edit the curve
if Edit is set to true. Similarly, you can only rotate, move or scale it if Rotate, Move, and Scale, respectively,
are set to true.
draw_nurbs_interp_mod starts in the transformation mode. In this mode, the curve is displayed together
with 3 symbols: a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed
arrow to the upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it
again, you can switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its interpolation points and the start and end tangent. Start and
end point are marked by an additional square. You can perform the following modifications:
• To append new points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the curve, click on the desired position on the curve.
• To close or to open the curve, click on the first or on the last point.
Pressing the right mouse button terminates the procedure.
The appearance of the curve while drawing is determined by the line width, size and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The tangents and all handles
are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1 and their
line style is fixed to a solid line.
Attention
In contrast to draw_nurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree (3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will perform slightly better than even degrees.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Contour of the modified curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}


. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}
. Edit (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable editing?
Default Value : "true"
List of values : Edit ∈ {"true", "false"}
. Degree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
The degree p of the NURBS curve. Reasonable values are 3 to 5.
Default Value : 3
Suggested values : Degree ∈ {2, 3, 4, 5}
Restriction : (Degree ≥ 2) ∧ (Degree ≤ 9)
. RowsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double
Row coordinates of the input interpolation points.
. ColsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double
Column coordinates of the input interpolation points.
. TangentsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Input tangents.
. ControlRows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the control polygon.
. ControlCols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Column coordinates of the control polygon.
. Knots (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Knot vector.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the points specified by the user.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Column coordinates of the points specified by the user.
. Tangents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tangents specified by the user.
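Example

The following minimal sketch is not part of the original manual; it re-edits a curve previously created with T_draw_nurbs_interp, so WinT, Rotate, Move, Scale, KeepRatio, Degree, and the outputs Rows, Cols, and Tangents are assumed to be those of the draw_nurbs_interp example.

Htuple  Edit,CtrlRows2,CtrlCols2,Knots2,Rows2,Cols2,Tangents2;
Hobject ContMod;

create_tuple(&Edit,1); set_s(Edit,"true",0);
/* modify the previously drawn curve interactively */
T_draw_nurbs_interp_mod(&ContMod,WinT,Rotate,Move,Scale,KeepRatio,Edit,Degree,
                        Rows,Cols,Tangents,&CtrlRows2,&CtrlCols2,&Knots2,
                        &Rows2,&Cols2,&Tangents2);
disp_obj(ContMod,WindowHandle);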
Result
draw_nurbs_interp_mod returns H_MSG_TRUE, if the window is valid. If necessary, an exception handling
is raised.
Parallelization Information
draw_nurbs_interp_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_xld_mod, draw_nurbs_mod
See also
draw_nurbs_interp, gen_contour_nurbs_xld
Module
Foundation

T_draw_nurbs_mod ( Hobject *ContOut, const Htuple WindowHandle,


const Htuple Rotate, const Htuple Move, const Htuple Scale,
const Htuple KeepRatio, const Htuple Edit, const Htuple Degree,
const Htuple RowsIn, const Htuple ColsIn, const Htuple WeightsIn,
Htuple *Rows, Htuple *Cols, Htuple *Weights )

Interactive modification of a NURBS curve.


draw_nurbs_mod returns the contour ContOut and control information (Rows, Cols, and Weights)
of a NURBS curve of degree Degree, which has been interactively modified by the user in the win-
dow WindowHandle. For additional information concerning NURBS curves, see the documentation of
gen_contour_nurbs_xld. To use the control information Rows, Cols, and Weights in a subsequent
call to the operator gen_contour_nurbs_xld, the knot vector Knots should be set to ’auto’.
The input NURBS curve is specified by its control polygon (RowsIn, ColsIn), its weight vector WeightsIn
and its degree Degree. The knot vector is assumed to be uniform (i.e. ’auto’ in gen_contour_nurbs_xld).
You can modify the curve in two ways: by editing the control polygon, e.g., by inserting or moving control points,
or by transforming the contour as a whole, e.g., by rotating, moving, or scaling it. Note that you can only edit the
control polygon if Edit is set to true. Similarly, you can only rotate, move or scale it if Rotate, Move, and
Scale, respectively, are set to true.
draw_nurbs_mod starts in the transformation mode. In this mode, the curve is displayed together with 3 sym-
bols: a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed arrow to the
upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it again, you can
switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it until the curve has the desired orientation.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its control polygon. The start and end points are marked by an additional square, and the point that was handled last is surrounded by a circle representing its weight. You can
perform the following modifications:
• To append control points, click with the left mouse button in the window and a new point is added at this
position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the control polygon, click on the desired position on the polygon.
• To close or re-open the curve, click on the first or the last control point.
• You can modify the weight of a control point by first clicking on the point itself (if it is not already the point
which was modified or created last) and then dragging the circle around the point.
Pressing the right mouse button terminates the procedure.
The appearance of the curve while drawing is determined by the line width, size and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The control polygon and all
handles are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1
and their line style is fixed to a solid line.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Contour of the modified curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}


. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}
. Edit (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable editing?
Default Value : "true"
List of values : Edit ∈ {"true", "false"}
. Degree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
The degree p of the NURBS curve. Reasonable values are 3 to 25.
Default Value : 3
Suggested values : Degree ∈ {2, 3, 4, 5}
Restriction : Degree ≥ 2
. RowsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double
Row coordinates of the input control polygon.
. ColsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double
Column coordinates of the input control polygon.
. WeightsIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Input weight vector.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Row coordinates of the control polygon.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
Column coordinates of the control polygon.
. Weights (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Weight vector.
Result
draw_nurbs_mod returns H_MSG_TRUE, if the window is valid. If necessary, an exception handling is raised.
Parallelization Information
draw_nurbs_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_nurbs_interp_mod, draw_xld_mod
See also
draw_nurbs, draw_nurbs_interp, gen_contour_nurbs_xld
Module
Foundation

draw_point ( Hlong WindowHandle, double *Row, double *Column )


T_draw_point ( const Htuple WindowHandle, Htuple *Row, Htuple *Column )

Draw a point.
draw_point returns the parameter for a point, which has been created interactively by the user in the window.
To create a point you have to press the left mouse button. While keeping the button pressed you may “drag” the
point in any direction. Pressing the right mouse button terminates the procedure.
After terminating the procedure the point is not visible in the window any longer.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row index of the point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column index of the point.
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_point(WindowHandle,&Row1,&Column1) ;
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1) ;
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fnew_line(:::) ;

Result
draw_point returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_point is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point_mod, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_point_mod ( Hlong WindowHandle, double RowIn, double ColumnIn,
double *Row, double *Column )

T_draw_point_mod ( const Htuple WindowHandle, const Htuple RowIn,
const Htuple ColumnIn, Htuple *Row, Htuple *Column )

Draw a point.
draw_point_mod returns the parameter for a point, which has been created interactively by the user in the
window.
The coordinates RowIn and ColumnIn are expected as the initial position of the point. While keeping the left mouse button pressed you may “drag” the point in any direction. Pressing the right mouse button terminates the procedure.
After terminating the procedure the point is not visible in the window any longer.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double
Row index of the point.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double
Column index of the point.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row index of the point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column index of the point.
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_point_mod(WindowHandle,RowIn,ColumnIn,&Row1,&Column1) ; /* RowIn, ColumnIn: initial position */
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1) ;
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fnew_line(:::) ;

Result
draw_point_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_point_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_polygon ( Hobject *PolygonRegion, Hlong WindowHandle )


T_draw_polygon ( Hobject *PolygonRegion, const Htuple WindowHandle )

Interactive drawing of a polygon row.


draw_polygon produces an image whose region consists exactly of the image points entered interactively by mouse clicks (the gray values remain undefined).
Painting in the output window happens while the left mouse button is pressed. Releasing the left mouse button and pressing it again at another position draws a line between these two points. Pressing the right mouse button terminates the input. Painting uses the color that has been set by set_color, set_rgb, etc.


To assign gray values to the created PolygonRegion for further processing, you may use the operator reduce_domain.
Attention
The painted contour is not closed automatically, in particular it is not “filled up” either.
Output object’s gray values are not defined.
Parameter

. PolygonRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *


Region, which encompasses all painted points.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

draw_polygon(&Polygon,WindowHandle) ;
shape_trans(Polygon,&Filled,"convex") ;
disp_region(Filled,WindowHandle) ;

Result
If the window is valid, draw_polygon returns H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
draw_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_region, draw_circle, draw_rectangle1, draw_rectangle2, boundary
See also
reduce_domain, fill_up, set_color
Module
Foundation

draw_rectangle1 ( Hlong WindowHandle, double *Row1, double *Column1,
double *Row2, double *Column2 )

T_draw_rectangle1 ( const Htuple WindowHandle, Htuple *Row1,
Htuple *Column1, Htuple *Row2, Htuple *Column2 )

Draw a rectangle parallel to the coordinate axis.


draw_rectangle1 returns the parameter for a rectangle parallel to the coordinate axes, which has been created
interactively by the user in the window.
To create a rectangle you have to press the left mouse button determining a corner of the rectangle. While keeping
the button pressed you may “drag” the rectangle in any direction. After another mouse click in the middle of the
created rectangle you can move it. A click close to one side “grips” it to modify the rectangle’s dimension in the direction perpendicular to this side. If you click on one corner of the created rectangle, you may move this corner. Pressing the right mouse button terminates the procedure.
After terminating the procedure the rectangle is not visible in the window any longer.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; double *
Row index of the left upper corner.


. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; double *


Column index of the left upper corner.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; double *
Row index of the right lower corner.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; double *
Column index of the right lower corner.
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line(:::) ;

Result
draw_rectangle1 returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1_mod, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_rectangle1_mod ( Hlong WindowHandle, double Row1In,
double Column1In, double Row2In, double Column2In, double *Row1,
double *Column1, double *Row2, double *Column2 )

T_draw_rectangle1_mod ( const Htuple WindowHandle,
const Htuple Row1In, const Htuple Column1In, const Htuple Row2In,
const Htuple Column2In, Htuple *Row1, Htuple *Column1, Htuple *Row2,
Htuple *Column2 )

Draw a rectangle parallel to the coordinate axis.


draw_rectangle1_mod returns the parameter for a rectangle parallel to the coordinate axes, which has been
created interactively by the user in the window.


The parameters Row1In, Column1In, Row2In, and Column2In are expected as the initial rectangle. After a mouse click in the middle of the created rectangle you can move it. A click close to one side “grips” it to modify the rectangle’s dimension in the direction perpendicular to this side. If you click on one corner of the created rectangle, you may move this corner. Pressing the right mouse button terminates the procedure.
After terminating the procedure the rectangle is not visible in the window any longer.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Row1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; double
Row index of the left upper corner.
. Column1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.origin.x ; double
Column index of the left upper corner.
. Row2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; double
Row index of the right lower corner.
. Column2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; double
Column index of the right lower corner.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; double *
Row index of the left upper corner.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; double *
Column index of the left upper corner.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; double *
Row index of the right lower corner.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; double *
Column index of the right lower corner.
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_rectangle1_mod(WindowHandle,Row1In,Column1In,Row2In,Column2In,
                    &Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line(:::) ;

Result
draw_rectangle1_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert


Alternatives
draw_rectangle1, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_rectangle2 ( Hlong WindowHandle, double *Row, double *Column,
double *Phi, double *Length1, double *Length2 )

T_draw_rectangle2 ( const Htuple WindowHandle, Htuple *Row,
Htuple *Column, Htuple *Phi, Htuple *Length1, Htuple *Length2 )

Interactive drawing of an arbitrarily oriented rectangle.


draw_rectangle2 returns the parameters of an arbitrarily oriented rectangle, which has been created interactively by the user in the window.
The created rectangle is described by its center, its two half axes and the angle between the first half axis and the
horizontal coordinate axis.
To create a rectangle you have to press the left mouse button at the center of the rectangle. While keeping the button pressed you may determine the length (Length1) and the orientation (Phi) of the first half axis. In doing so, a temporary default length for the second half axis is assumed, which may be modified afterwards if desired. After another mouse click in the middle of the created rectangle, you can move it. A click close to one side “grips” it to modify the rectangle’s dimension in the direction perpendicular to this side. You can only modify the orientation if you grip a side perpendicular to the first half axis. Pressing the right mouse button terminates the procedure.
After terminating the procedure the rectangle is not visible in the window any longer.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; double *
Row index of the barycenter.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; double *
Column index of the barycenter.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; double *
Orientation of the bigger half axis in radians.
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; double *
Bigger half axis.
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; double *
Smaller half axis.
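Example
The following is only a sketch of a possible call sequence (as in the other examples of this chapter, variable declarations and error handling are omitted, and WindowHandle is assumed to be an open graphics window). It uses gen_rectangle2 and reduce_domain to turn the drawn rectangle into a region of interest.

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2) ;
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2) ;
reduce_domain(Image,Rectangle,&ImageReduced) ;
disp_region(Rectangle,WindowHandle) ;
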
Result
draw_rectangle2 returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle2 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2_mod, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation


draw_rectangle2_mod ( Hlong WindowHandle, double RowIn,
double ColumnIn, double PhiIn, double Length1In, double Length2In,
double *Row, double *Column, double *Phi, double *Length1,
double *Length2 )

T_draw_rectangle2_mod ( const Htuple WindowHandle, const Htuple RowIn,
const Htuple ColumnIn, const Htuple PhiIn, const Htuple Length1In,
const Htuple Length2In, Htuple *Row, Htuple *Column, Htuple *Phi,
Htuple *Length1, Htuple *Length2 )

Interactive drawing of an arbitrarily oriented rectangle.


draw_rectangle2_mod returns the parameters of an arbitrarily oriented rectangle, which has been created interactively by the user in the window.
The created rectangle is described by its center, its two half axes and the angle between the first half axis and the
horizontal coordinate axis.
The parameters RowIn, ColumnIn, PhiIn, Length1In, and Length2In are expected as the initial rectangle. A click close to one side “grips” it to modify the rectangle’s dimension (Length2) in the direction perpendicular to this side. You can only modify the orientation (Phi) if you grip a side perpendicular to the first half axis. After another mouse click in the middle of the created rectangle, you can move it. Pressing the right mouse button terminates the procedure.
After terminating the procedure the rectangle is not visible in the window any longer.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; double
Row index of the barycenter.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; double
Column index of the barycenter.
. PhiIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; double
Orientation of the bigger half axis in radians.
. Length1In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; double
Bigger half axis.
. Length2In (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; double
Smaller half axis.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; double *
Row index of the barycenter.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; double *
Column index of the barycenter.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; double *
Orientation of the bigger half axis in radians.
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; double *
Bigger half axis.
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; double *
Smaller half axis.
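Example
A sketch of a possible call: an initial rectangle (the numeric values are arbitrary illustration values) is passed and may then be modified interactively; WindowHandle is assumed to be an open graphics window.

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_rectangle2_mod(WindowHandle,100.0,100.0,0.0,50.0,25.0,
                    &Row,&Column,&Phi,&Length1,&Length2) ;
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2) ;
disp_region(Rectangle,WindowHandle) ;
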
Result
draw_rectangle2_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle2_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert


Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_region ( Hobject *Region, Hlong WindowHandle )


T_draw_region ( Hobject *Region, const Htuple WindowHandle )

Interactive drawing of a closed region.


draw_region produces an image whose region spans exactly the image region entered interactively by mouse clicks (the gray values remain undefined). Painting happens in the output window while keeping the left mouse button pressed. The left mouse button can also be used by clicking at separate positions in the output window; in this case a line is drawn between the previously clicked point and the new one. Clicking the right mouse button terminates the input and closes the outline. Subsequently the region is “filled up”, i.e., it also contains the whole image area enclosed by the drawn outline. Painting uses the color that has been set by set_color, set_rgb, etc.
Attention
The output object’s gray values are not defined.
Parameter

. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *


Interactive created region.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_region(&Region,WindowHandle) ;
reduce_domain(Image,Region,&New) ;
regiongrowing(New,&Segmente,5,5,6,50) ;
set_colored(WindowHandle,12) ;
disp_region(Segmente,WindowHandle) ;

Result
If the window is valid, draw_region returns H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
draw_region is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_circle, draw_ellipse, draw_rectangle1, draw_rectangle2
See also
draw_polygon, reduce_domain, fill_up, set_color
Module
Foundation


draw_xld ( Hobject *ContOut, Hlong WindowHandle, const char *Rotate,
const char *Move, const char *Scale, const char *KeepRatio )

T_draw_xld ( Hobject *ContOut, const Htuple WindowHandle,
const Htuple Rotate, const Htuple Move, const Htuple Scale,
const Htuple KeepRatio )

Interactive drawing of a contour.


draw_xld returns a contour, which has been created interactively by the user in the window.
Directly after calling draw_xld you can add contour points by clicking with the left mouse button in the window
at the desired positions. You delete the point appended last by pressing the Ctrl key. As soon as you add three
contour points, 5 so-called pick points are displayed, one in the middle and 4 at the corners of the surrounding
rectangle. By clicking on a pick point you can close the contour or open it again. If the contour is closed, the pick points are displayed as squares; otherwise they are shaped like a ’u’.
If the contour is closed you can
• move contour points by clicking with the left mouse button on a point marked by a rectangle and keep the
mouse button pressed while moving the mouse,
• insert contour points by clicking with the left mouse button in the vicinity of a line and then moving the mouse to the position where you want the new point to be placed, and
• delete contour points by selecting the point to be deleted with the left mouse button and then pressing the Ctrl key.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the contour as a whole, but only if you set the parameters Rotate, Move, and Scale, respectively, to true.
Instead of the pick points, 3 symbols are displayed with the contour: a cross in the middle and an arrow to the right
if Rotate is set to true, and a double-headed arrow to the upper right if Scale is set to true.
You can
• move the contour by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it until the contour has the desired orientation, and
• scale it by dragging the double arrow. To keep the aspect ratio, the parameter KeepRatio has to be set to true.
Pressing the right mouse button terminates the procedure.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Modified contour.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}
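Example
A minimal sketch of a possible call sequence; WindowHandle is assumed to be an open graphics window, and disp_obj is used here only to display the resulting contour.

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_xld(&Contour,WindowHandle,"true","true","true","true") ;
disp_obj(Contour,WindowHandle) ;
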


Result
draw_xld returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see set_insert) is
available. If necessary, an exception handling is raised.
Parallelization Information
draw_xld is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation

draw_xld_mod ( const Hobject ContIn, Hobject *ContOut,
Hlong WindowHandle, const char *Rotate, const char *Move,
const char *Scale, const char *KeepRatio, const char *Edit )

T_draw_xld_mod ( const Hobject ContIn, Hobject *ContOut,
const Htuple WindowHandle, const Htuple Rotate, const Htuple Move,
const Htuple Scale, const Htuple KeepRatio, const Htuple Edit )

Interactive modification of a contour.


draw_xld_mod returns a contour, which has been interactively modified by the user in the window.
You can modify the contour in two ways: by editing the contour itself, e.g., by inserting or moving contour points,
or by transforming the contour as a whole, e.g., by rotating, moving, or scaling it. Note that you can only edit a
contour if Edit is set to true. Similarly, you can only rotate, move or scale it if Rotate, Move, and Scale,
respectively, are set to true.
draw_xld_mod starts in the transformation mode. In this mode, the contour is displayed together with 3 symbols:
a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed arrow to the upper
right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it again, you can switch
back into the transformation mode.
Transformation Mode
• To move the contour, click with the left mouse button on the cross in the center and then drag it to the new position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it until the contour has the desired orientation.
• Scaling is achieved by dragging the double arrow. To keep the aspect ratio, the parameter KeepRatio has to be set to true.
Edit Mode
In this mode, the contour is displayed together with 5 pick points, which are located in the middle and at the corners of the surrounding rectangle. If the contour is closed, the pick points are displayed as squares, otherwise they are shaped like a ’u’. By clicking on a pick point, you can close an open contour and vice versa. Depending on the state of the contour, you can perform different modifications.
Open contours (pick points shaped like a ’u’)
• To append points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move or insert points, you must first close the contour by clicking on one of the pick points.
Closed contours (square pick points)
• To move a point, click with the left mouse button on a point marked by a rectangle and then drag it to the
new position.


• To insert a point, click with the left mouse button in the vicinity of a line and then move the mouse to the
position where you want the new point to be placed.
• To delete a point, select the point which should be deleted with the left mouse button and then press the Ctrl
key.

Pressing the right mouse button terminates the procedure.


Parameter

. ContIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject


Input contour.
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Modified contour.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
. KeepRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Keep ratio while scaling?
Default Value : "true"
List of values : KeepRatio ∈ {"true", "false"}
. Edit (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enable editing?
Default Value : "true"
List of values : Edit ∈ {"true", "false"}
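Example
A sketch of a possible call sequence: a contour is first created with draw_xld and then modified with draw_xld_mod (WindowHandle is assumed to be an open graphics window; variable names are chosen for illustration).

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_xld(&ContIn,WindowHandle,"true","true","true","true") ;
draw_xld_mod(ContIn,&ContOut,WindowHandle,"true","true","true",
             "true","true") ;
disp_obj(ContOut,WindowHandle) ;
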
Result
draw_xld_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_xld_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle2, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation


4.2 Gnuplot

gnuplot_close ( Hlong GnuplotFileID )


T_gnuplot_close ( const Htuple GnuplotFileID )

Close all open gnuplot files or terminate an active gnuplot sub-process.


gnuplot_close closes all gnuplot files opened by gnuplot_open_file or terminates the gnuplot sub-
process created with gnuplot_open_pipe. In the latter case, all temporary files used to display images
and control values are deleted. This means that gnuplot_close must be called after such a plot sequence.
GnuplotFileID is the identifier of the corresponding gnuplot output stream.
Parameter
. GnuplotFileID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong
Identifier for the gnuplot output stream.
Result
gnuplot_close returns H_MSG_TRUE if GnuplotFileID is a valid gnuplot output stream. Otherwise, an
exception handling is raised.
Parallelization Information
gnuplot_close is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file, gnuplot_plot_image
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_plot_image
Module
Foundation

gnuplot_open_file ( const char *FileName, Hlong *GnuplotFileID )


T_gnuplot_open_file ( const Htuple FileName, Htuple *GnuplotFileID )

Open a gnuplot file for visualization of images and control values.


gnuplot_open_file allows the output of images and control values in a format which can be later
processed by gnuplot. The parameter FileName determines the base-name of the files to be created
by calls to gnuplot_plot_image. gnuplot_open_file generates a gnuplot control file with
the name <FileName>.gp, in which the respective plot commands are written. Each image plotted by
gnuplot_plot_image (or control values plotted by gnuplot_plot_ctrl) creates a data file with the
name <FileName>.dat.<Number>, where Number is the number of the plot in the current sequence. The gen-
erated control file can later be edited to create the desired effect. After the last plot gnuplot_close has to
be called in order to close all open files. The corresponding identifier for the gnuplot output stream is returned in
GnuplotFileID.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Base name for control and data files.
. GnuplotFileID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong *
Identifier for the gnuplot output stream.
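Example
A sketch of a typical file-based sequence (names chosen for illustration): it creates the control file "plot.gp" and one data file for the plotted image and then closes the output stream.

read_image(&Image,"fabrik") ;
gnuplot_open_file("plot",&GnuplotFileID) ;
gnuplot_plot_image(Image,GnuplotFileID,64,64,60.0,30.0,"hidden3d") ;
gnuplot_close(GnuplotFileID) ;
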
Result
gnuplot_open_file returns the value H_MSG_TRUE if the control file could be opened. Otherwise, an
exception handling is raised.
Parallelization Information
gnuplot_open_file is processed completely exclusively without parallelization.
Possible Successors
gnuplot_plot_image, gnuplot_close


Alternatives
gnuplot_open_pipe
See also
gnuplot_open_pipe, gnuplot_close, gnuplot_plot_image
Module
Foundation

gnuplot_open_pipe ( Hlong *GnuplotFileID )


T_gnuplot_open_pipe ( Htuple *GnuplotFileID )

Open a pipe to a gnuplot process for visualization of images and control values.
gnuplot_open_pipe opens a pipe to a gnuplot sub-process with which subsequently images can be
visualized as 3D-plots ( gnuplot_plot_image) or control values can be visualized as 2D-plots (
gnuplot_plot_ctrl). The sub-process must be terminated after displaying the last plot by calling
gnuplot_close. The corresponding identifier for the gnuplot output stream is returned in GnuplotFileID.
Attention
gnuplot_open_pipe is only implemented for Unix because gnuplot for Windows (wgnuplot) cannot be
controlled by an external process.
Parameter
. GnuplotFileID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong *
Identifier for the gnuplot output stream.
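Example
A sketch of the pipe-based variant (Unix only, see above); the image is displayed directly in a gnuplot window. Variable names are chosen for illustration.

read_image(&Image,"fabrik") ;
gnuplot_open_pipe(&GnuplotFileID) ;
gnuplot_plot_image(Image,GnuplotFileID,64,64,60.0,30.0,"hidden3d") ;
gnuplot_close(GnuplotFileID) ;
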
Result
gnuplot_open_pipe returns the value H_MSG_TRUE if the sub-process could be created. Otherwise, an
exception handling is raised.
Parallelization Information
gnuplot_open_pipe is processed completely exclusively without parallelization.
Possible Successors
gnuplot_plot_image, gnuplot_plot_ctrl, gnuplot_close
Alternatives
gnuplot_open_file
Module
Foundation

T_gnuplot_plot_ctrl ( const Htuple GnuplotFileID,
const Htuple Values )

Plot control values using gnuplot.


gnuplot_plot_ctrl displays a tuple of control values using gnuplot. If there is an active gnuplot sub-process
(started with gnuplot_open_pipe), the plot is displayed in a gnuplot window. Otherwise, the data is output to a file, which can later be read by gnuplot. In both cases the gnuplot output stream is identified by
GnuplotFileID.
Parameter
. GnuplotFileID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Htuple . Hlong
Identifier for the gnuplot output stream.
. Values (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Control values to be plotted (y-values).
Result
gnuplot_plot_ctrl returns the value H_MSG_TRUE if GnuplotFileID is a valid gnuplot output stream, if the data file for
the current plot could be opened, and if only integer or floating point values were passed. Otherwise, an exception
handling is raised.


Parallelization Information
gnuplot_plot_ctrl is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file
Possible Successors
gnuplot_close
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_close
Module
Foundation

T_gnuplot_plot_funct_1d ( const Htuple GnuplotFileID,
const Htuple Function )

Plot a function using gnuplot.


gnuplot_plot_funct_1d displays a function of control values using gnuplot. If there is an active gnuplot sub-process (started with gnuplot_open_pipe), the plot is displayed in a gnuplot window. Otherwise, the data is output to a file, which can later be read by gnuplot. In both cases the gnuplot output stream is identified by GnuplotFileID.
Parameter
. GnuplotFileID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Htuple . Hlong
Identifier for the gnuplot output stream.
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Function to be plotted.
Result
gnuplot_plot_funct_1d returns H_MSG_TRUE if GnuplotFileID is a valid gnuplot output stream, and if
the data file for the current plot could be opened. Otherwise, an exception handling is raised.
Parallelization Information
gnuplot_plot_funct_1d is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file
Possible Successors
gnuplot_close
Alternatives
gnuplot_plot_ctrl
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_close
Module
Foundation

gnuplot_plot_image ( const Hobject Image, Hlong GnuplotFileID,
Hlong SamplesX, Hlong SamplesY, double ViewRotX, double ViewRotZ,
const char *Hidden3D )

T_gnuplot_plot_image ( const Hobject Image,
const Htuple GnuplotFileID, const Htuple SamplesX,
const Htuple SamplesY, const Htuple ViewRotX, const Htuple ViewRotZ,
const Htuple Hidden3D )

Visualize images using gnuplot.


gnuplot_plot_image displays an image as a 3D-plot using gnuplot. If there is an active gnuplot sub-process
(started with gnuplot_open_pipe), the image is displayed in a gnuplot window. Otherwise, the image is


output to a file, which can be later read by gnuplot. In both cases the gnuplot output stream is identified by
GnuplotFileID. The parameters SamplesX and SamplesY determine the number of data points in the x-
and y-direction, respectively, which gnuplot should use to display the image. They are the equivalent of the gnuplot
variables samples and isosamples. The parameters ViewRotX and ViewRotZ determine the rotation of the plot
with respect to the viewer. ViewRotX is the rotation of the coordinate system about the x-axis, while ViewRotZ
is the rotation of the plot about the z-axis. These two parameters correspond directly to the first two parameters
of the ’set view’ command in gnuplot. The parameter Hidden3D determines whether hidden surfaces should be
removed. This is equivalent to the ’set hidden3d’ command in gnuplot. If a single image is passed to the operator,
it is displayed in a separate plot. If multiple images are passed, they are displayed in the same plot.
Parameter

. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be plotted.
. GnuplotFileID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong
Identifier for the gnuplot output stream.
. SamplesX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of samples in the x-direction.
Default Value : 64
Typical range of values : 2 ≤ SamplesX ≤ 10000
Restriction : SamplesX ≥ 2
. SamplesY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of samples in the y-direction.
Default Value : 64
Typical range of values : 2 ≤ SamplesY ≤ 10000
Restriction : SamplesY ≥ 2
. ViewRotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Rotation of the plot about the x-axis.
Default Value : 60
Typical range of values : 0 ≤ ViewRotX ≤ 180
Minimum Increment : 0.01
Recommended Increment : 10
Restriction : (0 ≤ ViewRotX) ∧ (ViewRotX ≤ 180)
. ViewRotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Rotation of the plot about the z-axis.
Default Value : 30
Typical range of values : 0 ≤ ViewRotZ ≤ 360
Minimum Increment : 0.01
Recommended Increment : 10
Restriction : (0 ≤ ViewRotZ) ∧ (ViewRotZ ≤ 360)
. Hidden3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Plot the image with hidden surfaces removed.
Default Value : "hidden3d"
List of values : Hidden3D ∈ {"hidden3d", "nohidden3d"}
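Example
A sketch illustrating the sampling and viewing parameters: the same image is plotted twice with different resolutions and rotations. An already opened gnuplot output stream GnuplotFileID (from gnuplot_open_pipe or gnuplot_open_file) and an image Image are assumed.

gnuplot_plot_image(Image,GnuplotFileID,64,64,60.0,30.0,"hidden3d") ;
gnuplot_plot_image(Image,GnuplotFileID,128,128,75.0,45.0,"nohidden3d") ;
gnuplot_close(GnuplotFileID) ;
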
Result
gnuplot_plot_image returns the value H_MSG_TRUE if GnuplotFileID is a valid gnuplot output stream, and if the data
file for the current plot could be opened. Otherwise, an exception handling is raised.
Parallelization Information
gnuplot_plot_image is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file
Possible Successors
gnuplot_close
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_close
Module
Foundation


4.3 LUT
disp_lut ( Hlong WindowHandle, Hlong Row, Hlong Column, Hlong Scale )
T_disp_lut ( const Htuple WindowHandle, const Htuple Row,
const Htuple Column, const Htuple Scale )

Graphical view of the look-up-table (lut).


disp_lut displays a graphical view of the look-up-table (lut) in the valid window. A look-up-table defines the
transformation of image gray values to colors/gray levels on the screen. On most systems this can be modified.
disp_lut creates a graphical view of the table assigned to the output window with the logical window number
WindowHandle and displays it for every basic color (red, green, blue). Row and Column define the position
of the centre of the graphic. Scale allows scaling of the graphic, where 1 means displaying all 256 values, 2 means displaying 128 values, 3 means displaying only 64 values, etc. Tables for monochrome representations are displayed in the currently set color (see set_color, set_rgb, etc.). Tables for displaying "false colors" are viewed with red, green and blue for each color component.
Attention
disp_lut can only be used on hardware supporting look-up-tables for the output.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row of centre of the graphic.
Default Value : 128
Typical range of values : 0 ≤ Row ≤ 511
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column of centre of the graphic.
Default Value : 128
Typical range of values : 0 ≤ Column ≤ 511
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Scaling of the graphic.
Default Value : 1
List of values : Scale ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 0 ≤ Scale ≤ 20
Example

set_lut(WindowHandle,"color") ;
disp_lut(WindowHandle,256,256,1) ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ; /* wait for a mouse click */
set_lut(WindowHandle,"sqrt") ;
disp_lut(WindowHandle,128,128,2) ;

Result
disp_lut returns H_MSG_TRUE if the hardware supports a look-up-table, the window is valid and the param-
eters are correct. Otherwise an exception handling is raised.
Parallelization Information
disp_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
set_lut
See also
open_window, open_textwindow, draw_lut, set_lut, set_fix, set_pixel, write_lut,
get_lut, set_color
Module
Foundation


draw_lut ( Hlong WindowHandle )


T_draw_lut ( const Htuple WindowHandle )

Manipulate look-up-table (lut) interactively.


draw_lut allows interactive manipulation of the look-up-table of the device currently displaying the output
window.
By pressing and holding down the left mouse button one can change (from "left to right") the red, green, and blue intensities, which are displayed in a two-dimensional diagram with the gray values on the x-axis. The left mouse button is also used for choosing the color channel that should be changed. As an alternative, one can map pure gray levels (gray "color channel") to the gray values on the x-axis. The right mouse button terminates the modification. The modified look-up-table can be saved by write_lut and reloaded later by set_lut. A call to get_lut after draw_lut directly returns the RGB tuple of the look-up-table, which is suitable as input for set_lut.
Attention
draw_lut can only be used on hardware that supports look-up-tables for the output and allows dynamic changing of the tables.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
Example

read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_lut(WindowHandle) ;
write_lut(WindowHandle,"my_lut") ;
...
read_image(&Image,"fabrik") ;
set_lut(WindowHandle,"my_lut") ;

Result
draw_lut returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
draw_lut is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut, write_lut, disp_lut
Alternatives
set_fix, set_rgb
See also
write_lut, set_lut, get_lut, disp_lut
Module
Foundation

get_fixed_lut ( Hlong WindowHandle, char *Mode )


T_get_fixed_lut ( const Htuple WindowHandle, Htuple *Mode )

Get fixing of "‘look-up-table"’ (lut) for "‘real color images"’


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Mode of fixing.
Default Value : "true"
List of values : Mode ∈ {"true", "false"}
Parallelization Information
get_fixed_lut is reentrant, local, and processed without parallelization.
Possible Successors
set_fixed_lut
Module
Foundation

T_get_lut ( const Htuple WindowHandle, Htuple *LookUpTable )

Get current look-up-table (lut).


get_lut returns the name or the values of the look-up-table (lut) of the window that is currently used by disp_image (or indirectly by disp_region, etc.) for output. To set a look-up-table use set_lut. If the current table is a system table without any modification (by set_fix), the name of the table is returned. If it is a modified table,
a table read from a file or a table for output with pseudo real colors, the RGB-values of the table are returned.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window identifier.
. LookUpTable (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char * / Hlong *
Name of look-up-table or tuple of RGB-values.
Result
get_lut returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_lut is reentrant, local, and processed without parallelization.
Possible Successors
draw_lut, set_lut
Alternatives
set_fix, get_pixel
See also
set_lut, draw_lut
Module
Foundation

get_lut_style ( Hlong WindowHandle, double *Hue, double *Saturation,
double *Intensity )

T_get_lut_style ( const Htuple WindowHandle, Htuple *Hue,
Htuple *Saturation, Htuple *Intensity )

Get modification parameters of look-up-table (lut).


get_lut_style returns the values that were set with set_lut_style. Default is:

Hue: 0.0
Saturation: 1.0
Intensity: 1.0
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Hue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of color value.
. Saturation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of saturation.
. Intensity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of intensity.
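A minimal sketch (not from the original manual) of reading the current modification parameters with the scalar
interface and restoring a neutral style afterwards:

double Hue, Saturation, Intensity;

get_lut_style(WindowHandle,&Hue,&Saturation,&Intensity) ;
/* reset hue, saturation and intensity to their neutral values */
set_lut_style(WindowHandle,0.0,1.0,1.0) ;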
Result
get_lut_style returns H_MSG_TRUE if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
get_lut_style is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut
See also
set_lut_style
Module
Foundation

T_query_lut ( const Htuple WindowHandle, Htuple *LookUpTable )

Query all available look-up-tables (lut).


query_lut returns the names of all look-up-tables available on the currently used device. These tables can be set
with set_lut. A table named ’default’ is always available.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. LookUpTable (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of look-up-tables.
Result
query_lut returns H_MSG_TRUE if a window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_lut is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut, write_lut, disp_lut
See also
set_lut
Module
Foundation

set_fixed_lut ( Hlong WindowHandle, const char *Mode )


T_set_fixed_lut ( const Htuple WindowHandle, const Htuple Mode )

Fix "look-up-table" (lut) for "real color images".


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of fixing.
Default Value : "true"
List of values : Mode ∈ {"true", "false"}
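Since this entry has no example of its own, here is a minimal sketch (not from the original manual; it assumes a
valid WindowHandle, a character buffer large enough for the mode string, and strcmp from <string.h>) that queries
the current fixing with get_fixed_lut and enables it if necessary:

char Mode[128];

get_fixed_lut(WindowHandle,Mode) ;          /* fills Mode with "true" or "false" */
if (strcmp(Mode,"false") == 0)
  set_fixed_lut(WindowHandle,"true") ;      /* fix the lut for real color images */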
Parallelization Information
set_fixed_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
get_fixed_lut
Module
Foundation

set_lut ( Hlong WindowHandle, const char *LookUpTable )


T_set_lut ( const Htuple WindowHandle, const Htuple LookUpTable )

Set "look-up-table" (lut).


set_lut sets the look-up-table of the device (monitor) displaying the output window. A look-up-table defines the
transformation of a "gray value" within an image into a gray value or color on the screen. It describes the screen
gray value/color as a combination of red, green and blue for every image gray value (0..255) (so it is a ’table’ used
to ’look up’ the screen gray value/color for each image gray value: a look-up-table). The transformation into screen
colors is performed in real time every time the screen is redrawn (typically about 60 - 70 times per second). It is
therefore possible to change the look-up-table to get a new look of images or regions. Please note that not all
machines support changing the look-up-table (e.g. monochrome or true color displays).
Look-up-tables within HALCON (and on a machine that supports 256 colors) are divided into three areas:

S: system area resp. user area,
G: graphic colors,
B: image data.

Colors in S stem from applications that were active before HALCON was started and should not get lost. Graphic
colors in G are used by operators such as disp_region, disp_circle, etc. and are set uniquely within
all look-up-tables. An output in a graphic color therefore always has the same (color) appearance, even if different
look-up-tables are used. set_color and set_rgb set graphic colors. Gray values resp. colors in B are used by
disp_image to display an image. They can change according to the current look-up-table. There are two
exceptions to this concept:

• set_gray allows setting colors of area B for operators such as disp_region,
• set_fix allows modification of graphic colors.

For common monitors only one look-up-table can be loaded per screen, whereas set_lut can be called
separately for each window. This conflict is resolved as follows: the look-up-table that is assigned to the
"active window" is always activated (a window is set into the state "active" by the window manager).
Look-up-tables can also be used with truecolor displays. In this case the look-up-table is simulated in software,
which means that the look-up-table is applied each time an image is displayed.
Windows NT specific: if the graphics card is used in a mode different from truecolor, you must display the image
after setting the look-up-table.
query_lut lists the names of all look-up-tables. They differ from each other in the area used for gray values.
Within this area the following behavior is defined:
gray value tables (1-7 image levels)


’default’: Only the two basic colors (generally black and white) are used.

color tables (Real color, static gray value steps)

’default’: Table proposed by the hardware.

gray value tables (256 colors)

’default’: As ’linear’.
’linear’: Linear increasing of gray values from 0 (black) to 255 (white).
’inverse’: Inverse function of ’linear’.
’sqr’: Gray values increase according to square function.
’inv_sqr’: Inverse function of ’sqr’.
’cube’: Gray values increase according to cubic function.
’inv_cube’: Inverse function of ’cube’.
’sqrt’: Gray values increase according to square-root function.
’inv_sqrt’: Inverse Function of ’sqrt’.
’cubic_root’: Gray values increase according to cubic-root function.
’inv_cubic_root’: Inverse Function of ’cubic_root’.

color tables (256 colors)

’color1’: Linear transition from red via green to blue.


’color2’: Smooth transition from yellow via red, blue to green.
’color3’: Smooth transition from yellow via red, blue, green, red to blue.
’color4’: Smooth transition from yellow via red to blue.
’three’: Displaying the three colors red, green and blue.
’six’: Displaying the six basic colors yellow, red, magenta, blue, cyan and green.
’twelve’: Displaying 12 colors.
’twenty_four’: Displaying 24 colors.
’rainbow’: Displaying the spectral colors from red via green to blue.
’temperature’: Temperature table from black via red, yellow to white.
’change1’: Color change after every pixel within the table, alternating the six basic colors.
’change2’: Fivefold color change from green via red to blue.
’change3’: Threefold color change from green via red to blue.

A look-up-table can be read from a file. Every line of such a file must contain three numbers in the range 0 to
255, with the first number describing the amount of red, the second the amount of green and the third the amount
of blue of the represented display color. The number of lines can vary. The first line contains the information for
the first gray value and the last line for the last value. If there are fewer lines than gray values, the available
values are distributed over the whole interval. If there are more lines than gray values, a number of (uniformly
distributed) lines is ignored. The file name must conform to "LookUpTable.lut". Within the parameter the
name is specified without the file extension. HALCON searches for the file in the current directory and after that in
a specified directory (see set_system(’lut_dir’,<path>)). It is also possible to call set_lut with a
tuple of RGB values; these are set directly. The number of parameter values must conform to the number of
entries (gray values) currently used within the look-up-table.
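As an illustration of the tuple variant (a sketch only, not from the original manual; it assumes the current table
has 256 entries and that the values are passed as consecutive R,G,B triples, mirroring the file format described
above), an inverted gray ramp could be set like this:

Htuple WindowHandleTuple, RGB;
long   i;

create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
create_tuple(&RGB,3*256) ;                  /* one R,G,B triple per lut entry */
for (i=0; i<256; i++)
{
  set_i(RGB,255-i,3*i) ;                    /* red   */
  set_i(RGB,255-i,3*i+1) ;                  /* green */
  set_i(RGB,255-i,3*i+2) ;                  /* blue  */
}
T_set_lut(WindowHandleTuple,RGB) ;
destroy_tuple(WindowHandleTuple) ;
destroy_tuple(RGB) ;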
Attention
set_lut can only be used with monitors supporting 256 gray levels/colors.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. LookUpTable (input_control) . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char * / Hlong
Name of look-up-table, values of look-up-table (RGB) or file name.
Default Value : "default"
Suggested values : LookUpTable ∈ {"default", "linear", "inverse", "sqr", "inv_sqr", "cube", "inv_cube",
"sqrt", "inv_sqrt", "cubic_root", "inv_cubic_root", "color1", "color2", "color3", "color4", "three", "six",
"twelve", "twenty_four", "rainbow", "temperature", "cyclic_gray", "cyclic_temperature", "hsi", "change1",
"change2", "change3"}
Example

Htuple WindowHandleTuple, LUTs ;


read_image(&Image,"affe") ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_query_lut(WindowHandleTuple,&LUTs) ;
for(i=1; i<length_tuple(LUTs); i++)
{
set_lut(WindowHandle,get_s(LUTs,i)) ;
fwrite_string("current table: ") ;
fwrite_string(get_s(LUTs,i)) ;
fnew_line() ;
get_mbutton(WindowHandle,_,_,_) ;
} ;

Result
set_lut returns H_MSG_TRUE if the hardware supports a look-up-table and the parameter is correct. Otherwise
an exception handling is raised.
Parallelization Information
set_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
query_lut, draw_lut, get_lut
Possible Successors
write_lut
Alternatives
draw_lut, set_fix, set_pixel
See also
get_lut, query_lut, draw_lut, set_fix, set_color, set_rgb, set_hsi, write_lut
Module
Foundation

set_lut_style ( Hlong WindowHandle, double Hue, double Saturation,
                double Intensity )

T_set_lut_style ( const Htuple WindowHandle, const Htuple Hue,
                  const Htuple Saturation, const Htuple Intensity )

Changing the look-up-table (lut).


set_lut_style changes the look-up-table (lut) of the device displaying the valid output window. It has got
three parameters:

Hue: Rotation of the color space; Hue = 1.0 corresponds to one full rotation of the color space. No change: Hue
= 0.0. Complementary colors: Hue = 0.5.


Saturation: Change of saturation. No change: Saturation = 1.0. Gray value image: Saturation = 0.0.
Intensity: Change of intensity. No change: Intensity = 1.0. Black image: Intensity = 0.0.

The change affects only the part of the look-up-table that is used for displaying images. The modification para-
meters remain in effect until the next call of set_lut_style. Calling set_lut has no effect on these parameters.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Hue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Modification of color value.
Default Value : 0.0
Typical range of values : 0.0 ≤ Hue ≤ 1.0
Restriction : (0.0 ≤ Hue) ∧ (Hue ≤ 1.0)
. Saturation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Modification of saturation.
Default Value : 1.5
Typical range of values : 0.0 ≤ Saturation
Restriction : 0.0 ≤ Saturation
. Intensity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Modification of intensity.
Default Value : 1.5
Typical range of values : 0.0 ≤ Intensity
Restriction : 0.0 ≤ Intensity
Example

read_image(&Image,"affe") ;
set_lut(WindowHandle,"color") ;
do{
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
Saturation= Row/300.0 ;
Hue = Column/512.0 ;
set_lut_style(WindowHandle,Hue,Saturation,1.0) ;
}
while(Button > 1) ;

Result
set_lut_style returns H_MSG_TRUE if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
set_lut_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_lut_style
Possible Successors
set_lut
Alternatives
set_lut, scale_image
See also
get_lut_style
Module
Foundation


write_lut ( Hlong WindowHandle, const char *FileName )


T_write_lut ( const Htuple WindowHandle, const Htuple FileName )

Write look-up-table (lut) as file.


write_lut saves the look-up-table (resp. the part of it that is relevant for displaying image gray values) of the
valid output window into a file named ’FileName.lut’. It can be read again later with set_lut.
Attention
write_lut is only suitable for systems using 256 colors.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name (of file containing the look-up-table).
Default Value : "/tmp/lut"
Example

read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_lut(WindowHandle) ;
write_lut(WindowHandle,"test_lut") ;

Result
write_lut returns H_MSG_TRUE if the window with the required properties (256 colors) is valid and the
parameter (file name) is correct. Otherwise an exception handling is raised.
Parallelization Information
write_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
draw_lut, set_lut
See also
set_lut, draw_lut, set_pixel, get_pixel
Module
Foundation

4.4 Mouse
get_mbutton ( Hlong WindowHandle, Hlong *Row, Hlong *Column,
Hlong *Button )

T_get_mbutton ( const Htuple WindowHandle, Htuple *Row, Htuple *Column,
                Htuple *Button )

Wait until a mouse button is pressed.


get_mbutton returns the coordinates of the mouse pointer in the output window and the mouse button pressed
(Button):

1: Left button,
2: Middle button,
4: Right button.

The operator waits until a button is pressed in the output window. If more than one button is pressed, the sum of
the individual buttons’ values is returned. The origin of the coordinate system is located in the left upper corner
of the window. The row coordinates increase towards the bottom, while the column coordinates increase towards


the right. For graphics windows, the coordinates of the lower right corner are (image height-1,image width-1) (see
open_window, reset_obj_db), while for text windows they are (window height-1,window width-1) (see
open_textwindow).
Attention
get_mbutton only returns if a mouse button is pressed in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong *
Row coordinate of the mouse position in the window.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong *
Column coordinate of the mouse position in the window.
. Button (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Mouse button(s) pressed.
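A minimal sketch (not from the original manual; it assumes an open output window) of waiting for a click and
decoding the returned button sum with bit tests:

Hlong Row, Column, Button;

get_mbutton(WindowHandle,&Row,&Column,&Button) ;
if (Button & 1) fwrite_string("left ") ;
if (Button & 2) fwrite_string("middle ") ;
if (Button & 4) fwrite_string("right ") ;
fnew_line() ;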
Result
get_mbutton returns the value H_MSG_TRUE.
Parallelization Information
get_mbutton is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
get_mposition
See also
open_window, open_textwindow
Module
Foundation

get_mposition ( Hlong WindowHandle, Hlong *Row, Hlong *Column,
                Hlong *Button )

T_get_mposition ( const Htuple WindowHandle, Htuple *Row,
                  Htuple *Column, Htuple *Button )

Query the mouse position.


get_mposition returns the coordinates of the mouse pointer in the output window and the mouse button
pressed. These values are returned regardless of the state of the mouse buttons (pressed or not pressed). If more
than one button is pressed, the sum of the individual buttons’ values is returned. The possible values for Button
are:

0: No button,
1: Left button,
2: Middle button,
4: Right button.

The origin of the coordinate system is located in the left upper corner of the window. The row coordinates increase
towards the bottom, while the column coordinates increase towards the right. For graphics windows, the coor-
dinates of the lower right corner are (image height-1,image width-1) (see open_window, reset_obj_db),
while for text windows they are (window height-1,window width-1) (see open_textwindow).
Attention
get_mposition fails (returns FAIL) if the mouse pointer is not located within the window. In this case, no
values are returned.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong *
Row coordinate of the mouse position in the window.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong *
Column coordinate of the mouse position in the window.
. Button (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Mouse button(s) pressed or 0.
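A minimal sketch (not from the original manual; Herror and H_MSG_FAIL are used as throughout this manual)
of polling the pointer position without blocking and marking it if it lies inside the window:

Hlong  Row, Column, Button;
Herror result;

result = get_mposition(WindowHandle,&Row,&Column,&Button) ;
if (result == H_MSG_FAIL)
  fwrite_string("pointer is outside the window") ;
else
  disp_cross(WindowHandle,(double)Row,(double)Column,6.0,0.0) ;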
Result
get_mposition returns the value H_MSG_TRUE. If the mouse pointer is not located within the window,
H_MSG_FAIL is returned.
Parallelization Information
get_mposition is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
get_mbutton
See also
open_window, open_textwindow
Module
Foundation

get_mshape ( Hlong WindowHandle, char *Cursor )


T_get_mshape ( const Htuple WindowHandle, Htuple *Cursor )

Query the current mouse pointer shape.


get_mshape returns the name of the pointer shape set for the window. The mouse pointer shape can be used in
the operator set_mshape.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Cursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Mouse pointer name.
Result
get_mshape returns the value H_MSG_TRUE.
Parallelization Information
get_mshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_mshape
Possible Successors
set_mshape
See also
set_mshape, query_mshape
Module
Foundation

T_query_mshape ( const Htuple WindowHandle, Htuple *ShapeNames )

Query all available mouse pointer shapes.


query_mshape returns the names of all available mouse pointer shapes for the window. These can be used in
the operator set_mshape. If no mouse pointers are available, the empty tuple is returned.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window identifier.
. ShapeNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Available mouse pointer names.
Result
query_mshape returns the value H_MSG_TRUE.
Parallelization Information
query_mshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, get_mshape
Possible Successors
set_mshape
See also
set_mshape, get_mshape
Module
Foundation

set_mshape ( Hlong WindowHandle, const char *Cursor )


T_set_mshape ( const Htuple WindowHandle, const Htuple Cursor )

Set the current mouse pointer shape.


set_mshape sets the shape of the mouse pointer for the window. A list of the names of all available mouse
pointer shapes can be obtained by calling query_mshape. The mouse pointer shape given by Cursor is used
if the mouse pointer enters the window, irrespective of which window is the output window at present.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Cursor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mouse pointer name.
Default Value : "arrow"
Result
set_mshape returns the value H_MSG_TRUE if the mouse pointer shape Cursor is defined for this window.
Otherwise, an exception handling is raised.
Parallelization Information
set_mshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_mshape, get_mshape
See also
get_mshape, query_mshape
Module
Foundation


4.5 Output

disp_arc ( Hlong WindowHandle, double CenterRow, double CenterCol,
           double Angle, Hlong BeginRow, Hlong BeginCol )

T_disp_arc ( const Htuple WindowHandle, const Htuple CenterRow,
             const Htuple CenterCol, const Htuple Angle, const Htuple BeginRow,
             const Htuple BeginCol )

Displays circular arcs in a window.


disp_arc displays one or several circular arcs in the output window. An arc is described by its center point
(CenterRow,CenterCol), the angle between start and end of the arc (Angle in radians) and the first point of
the arc (BeginRow,BeginCol). The arc is displayed in clockwise direction. The parameters for output can be
determined - as with the output of regions - with the procedures set_color, set_gray, set_draw, etc. It
is possible to draw several arcs with one call by using tuple parameters. For the use of colors with several arcs, see
set_color.
Attention
The center point has to be within the window. The radius of the arc has to be at least 2 pixels.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.y ; (Htuple .) double / Hlong
Row coordinate of center point.
Default Value : 64
Suggested values : CenterRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.x ; (Htuple .) double / Hlong
Column coordinate of center point.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.angle.rad ; (Htuple .) double / Hlong
Angle between start and end of the arc (in radians).
Default Value : 3.1415926
Suggested values : Angle ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Angle ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Angle > 0.0
. BeginRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.y(-array) ; (Htuple .) Hlong / double
Row coordinate of the start of the arc.
Default Value : 32
Suggested values : BeginRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ BeginRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. BeginCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.x(-array) ; (Htuple .) Hlong / double
Column coordinate of the start of the arc.
Default Value : 32
Suggested values : BeginCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ BeginCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1


Example

double Row, Column;
Hlong  WindowHandle;


open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_draw(WindowHandle,"fill") ;
set_color(WindowHandle,"white") ;
set_insert(WindowHandle,"not") ;
Row = 100 ;
Column = 100 ;
disp_arc(WindowHandle,Row,Column,(double)3.14,(int)Row+10,(int)Column+10) ;
close_window(WindowHandle) ;

Result
disp_arc returns H_MSG_TRUE.
Parallelization Information
disp_arc is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_circle, disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation

disp_arrow ( Hlong WindowHandle, double Row1, double Column1,
             double Row2, double Column2, double Size )

T_disp_arrow ( const Htuple WindowHandle, const Htuple Row1,
               const Htuple Column1, const Htuple Row2, const Htuple Column2,
               const Htuple Size )

Displays arrows in a window.


disp_arrow displays one or several arrows in the output window. An arrow is described by the coordinates of
the start (Row1,Column1) and the end (Row2,Column2). An arrowhead is displayed at the end of the arrow.
The size of the arrowhead is specified by the parameter Size. If the arrow consists of just one point (start = end)
nothing is displayed. The procedures used to control the display of regions (e.g. set_draw, set_color,
set_line_width) can also be used with arrows. Several arrows can be displayed with one call by using tuple
parameters. For the use of colors with several arrows, see set_color.
Attention
The start and the end of the arrows must fall within the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) double / Hlong
Row index of the start.
Default Value : 10.0
Suggested values : Row1 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Row1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0


. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) double / Hlong


Column index of the start.
Default Value : 10.0
Suggested values : Column1 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Column1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) double / Hlong
Row index of the end.
Default Value : 118.0
Suggested values : Row2 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Row2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; (Htuple .) double / Hlong
Column index of the end.
Default Value : 118.0
Suggested values : Column2 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Column2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong
Size of the arrowhead.
Default Value : 1.0
Suggested values : Size ∈ {1.0, 2.0, 3.0, 5.0}
Typical range of values : 0.0 ≤ Size ≤ 20.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Restriction : Size > 0.0
Example

set_colored(WindowHandle,3) ;
disp_arrow(WindowHandle,10,10,118,118,1.0);

Result
disp_arrow returns H_MSG_TRUE.
Parallelization Information
disp_arrow is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation

disp_channel ( const Hobject MultichannelImage, Hlong WindowHandle,
               Hlong Channel )

T_disp_channel ( const Hobject MultichannelImage,
                 const Htuple WindowHandle, const Htuple Channel )

Displays images with several channels.


disp_channel displays an image in the output window. It is possible to display several images with one call.
In this case the images are displayed one after another. If the definition domains of the images overlap, only the last
image is visible. The parameter Channel defines the number of the channel that is displayed. For RGB-images
the three color channels have to be used within a tuple parameter. For more information see disp_image.
Parameter
. MultichannelImage (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Multichannel images to be displayed.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of channel or the numbers of the RGB-channels
Default Value : 1
List of values : Channel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Example

/* Transformation from RGB to gray */


read_image(Image,"patras") ;
disp_color(Image,WindowHandle) ;
rgb1_to_gray(Image,&GrayImage) ;
disp_image(GrayImage,WindowHandle);

Result
If the used images contain valid values and a correct output mode is set, disp_channel returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_channel is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_image, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_lut, draw_lut, dump_window
Module
Foundation

disp_circle ( Hlong WindowHandle, double Row, double Column,
              double Radius )

T_disp_circle ( const Htuple WindowHandle, const Htuple Row,
                const Htuple Column, const Htuple Radius )

Displays circles in a window.


disp_circle displays one or several circles in the output window. A circle is described by the center (Row,
Column) and the radius Radius. If the used coordinates are not within the window the circle is clipped accord-
ingly.
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw) can also be
used with circles. Several circles can be displayed with one call by using tuple parameters. For the use of colors
with several circles, see set_color.
Attention
The center of the circle must be within the window.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double / Hlong
Row index of the center.
Default Value : 64
Suggested values : Row ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double / Hlong
Column index of the center.
Default Value : 64
Suggested values : Column ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double / Hlong
Radius of the circle.
Default Value : 64
Suggested values : Radius ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Radius ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Radius > 0.0
Example

open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_draw(WindowHandle,"fill") ;
set_color(WindowHandle,"white") ;
set_insert(WindowHandle,"not") ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
disp_circle(WindowHandle,Row,Column,(double)((Row + Column) % 50)) ;

Result
disp_circle returns H_MSG_TRUE.
Parallelization Information
disp_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation

disp_color ( const Hobject ColorImage, Hlong WindowHandle )


T_disp_color ( const Hobject ColorImage, const Htuple WindowHandle )

Displays a color (RGB) image


disp_color displays the three channels of a color image in the output window. The channels are ordered in the
sequence (red,green,blue). disp_color can be simulated by disp_channel.


Attention
Due to the restricted number of available colors the color appearance is usually different from the original.
Parameter
. ColorImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Color image to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

/* disp_color(ColorImage) is identical to: */


Herror my_disp_color(Hobject ColorImage, Htuple *WindowHandle)
{
  Htuple Tupel;
  Herror result;

  create_tuple(&Tupel,3);
  set_i(Tupel,1,0);    /* channel 1 (red)   */
  set_i(Tupel,2,1);    /* channel 2 (green) */
  set_i(Tupel,3,2);    /* channel 3 (blue)  */
  result = T_disp_channel(ColorImage,*WindowHandle,Tupel);
  destroy_tuple(Tupel);
  return result;
}

Result
If the used image contains valid values and a correct output mode is set, disp_color returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_color is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_channel, disp_obj
See also
disp_image, open_window, open_textwindow, reset_obj_db, set_lut, draw_lut,
dump_window
Module
Foundation

disp_cross ( Hlong WindowHandle, double Row, double Column, double Size,
             double Angle )

T_disp_cross ( const Htuple WindowHandle, const Htuple Row,
               const Htuple Column, const Htuple Size, const Htuple Angle )

Displays crosses in a window.


disp_cross displays one or several crosses in the output window. A cross is described by the coordinates of
the center point (Row,Column), the length of its bars Size and the orientation Angle. The procedures used to
control the display of regions (e.g. set_color, set_gray, set_draw, set_line_width) can also be
used with crosses. Several crosses can be displayed with one call by using tuple parameters. For the use of colors
with several crosses, see set_color.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y(-array) ; (Htuple .) double
Row coordinate of the center.
Default Value : 32
Suggested values : Row ∈ {0, 64, 128, 256, 511}


. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x(-array) ; (Htuple .) double


Column coordinate of the center.
Default Value : 32
Suggested values : Column ∈ {0, 64, 128, 256, 511}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Length of the bars.
Default Value : 6.0
Suggested values : Size ∈ {4.0, 6.0, 8.0, 10.0}
Typical range of values : 0.0 ≤ Size
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Orientation.
Default Value : 0.0
Suggested values : Angle ∈ {0.0, 0.78539816339744830961566084581988}
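Since this entry has no example, here is a minimal sketch (not from the original manual; it assumes an open output
window) that marks a clicked position with a cross rotated by 45 degrees:

Hlong Row, Column, Button;

set_color(WindowHandle,"green") ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
disp_cross(WindowHandle,(double)Row,(double)Column,10.0,0.785398) ;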
Result
disp_cross returns H_MSG_TRUE.
Parallelization Information
disp_cross is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_arrow, disp_rectangle1, disp_rectangle2, disp_circle
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation

T_disp_distribution ( const Htuple WindowHandle,
                      const Htuple Distribution, const Htuple Row, const Htuple Column,
                      const Htuple Scale )

Displays a noise distribution.


disp_distribution displays a distribution in the window. The parameters are the same as in set_paint
(WindowHandle,’histogram’) or gen_region_histo. Noise distributions can be generated with
operations like gauss_distribution or noise_distribution_mean.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Distribution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Gray value distribution (513 values).
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . Hlong
Row index of center.
Default Value : 256
Suggested values : Row ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . Hlong
Column index of center.
Default Value : 256
Suggested values : Column ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10


. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Size of display.
Default Value : 1
Suggested values : Scale ∈ {1, 2, 3, 4, 5, 6}
Example

open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_draw(WindowHandle,"fill") ;
set_color(WindowHandle,"white") ;
set_insert(WindowHandle,"not") ;
read_image(Image,"affe") ;
draw_region(&Region,WindowHandle) ;
noise_distribution_mean(Region,Image,21,&Distribution) ;
disp_distribution (WindowHandle,Distribution,100,100,3) ;

Parallelization Information
disp_distribution is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
noise_distribution_mean, gauss_distribution
See also
gen_region_histo, set_paint, gauss_distribution, noise_distribution_mean
Module
Foundation

disp_ellipse ( Hlong WindowHandle, Hlong CenterRow, Hlong CenterCol,
               double Phi, double Radius1, double Radius2 )

T_disp_ellipse ( const Htuple WindowHandle, const Htuple CenterRow,
                 const Htuple CenterCol, const Htuple Phi, const Htuple Radius1,
                 const Htuple Radius2 )

Displays ellipses.
disp_ellipse displays one or several ellipses in the output window. An ellipse is described by the center
(CenterRow, CenterCol), the orientation Phi (in radians) and the radii of the major and the minor axis
(Radius1 and Radius2).
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw) can also be
used with ellipses. Several ellipses can be displayed with one call by using tuple parameters. For the use of colors
with several ellipses, see set_color.
Attention
The center of the ellipse must be within the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y(-array) ; (Htuple .) Hlong
Row index of center.
Default Value : 64
Suggested values : CenterRow ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10


. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x(-array) ; (Htuple .) Hlong


Column index of center.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad(-array) ; (Htuple .) double / Hlong
Orientation of the ellipse in radians
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Phi ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. Radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1(-array) ; (Htuple .) double / Hlong
Radius of major axis.
Default Value : 24.0
Suggested values : Radius1 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Radius1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2(-array) ; (Htuple .) double / Hlong
Radius of minor axis.
Default Value : 14.0
Suggested values : Radius2 ∈ {0.0, 64.0, 128.0, 256.0}
Typical range of values : 0.0 ≤ Radius2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Example

set_color(WindowHandle,"red") ;
draw_region(&MyRegion,WindowHandle) ;
elliptic_axis(MyRegion,&Ra,&Rb,&Phi) ;
area_center(MyRegion,_,&Row,&Column) ;
disp_ellipse(WindowHandle,Row,Column,Phi,Ra,Rb);

Result
disp_ellipse returns H_MSG_TRUE, if the parameters are correct. Otherwise an exception handling is raised.
Parallelization Information
disp_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
elliptic_axis, area_center
Alternatives
disp_circle, disp_region, gen_ellipse, gen_circle
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_draw,
set_line_width
Module
Foundation

disp_image ( const Hobject Image, Hlong WindowHandle )


T_disp_image ( const Hobject Image, const Htuple WindowHandle )

Displays gray value images.


disp_image displays the gray values of an image in the output window. The gray value pixels of the defi-
nition domain ( set_comprise(WindowHandle,’object’)) or of the whole image ( set_comprise
(WindowHandle,’image’)) are used. Restriction to the definition domain is the default.
For the display of gray value images the number of gray values is usually reduced. This is due to the fact that colors
have to be reserved for the display of graphics (e.g. set_color) and the window manager. Also depending on
the number of bitplanes on the used output device often less than 256 colors (eight bitplanes) are available. The
number of "colors" actually reserved for the display of gray values can be queried by get_system. Before
opening the first window this value can be modified by set_system. For instance for 8 bitplanes 200 real gray
values are the default.
The reduction of the number of gray values does not pose problems as long as only gray value information is
displayed, since humans cannot distinguish 256 different shades of gray. If certain gray values are used for the
representation of region information (which is not the style commonly used in HALCON), confusion might result,
since different numerical values are displayed on the screen with the same gray value. The procedure
label_to_region should be used on such images in order to transform the label data into HALCON objects.
If images of type ’int2’, ’int4’, ’real’ or ’complex’ are displayed, the smallest and largest gray value is computed.
Afterwards the pixel data is rescaled according to the number of available gray values (depending on the output
device, e.g. 200). It is possible that some pixels have a very different value than the other pixels. This might lead to
the display of an (almost) completely white or black image. In order to decide if the current image is a binary image,
min_max_gray can be used. If necessary, the image can be transformed or converted by scale_image and
convert_image_type before it is displayed.
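A sketch of this preprocessing (not from the original manual; the image name is hypothetical, the scale_image
mapping g' = g * Mult + Add and the min_max_gray parameter order are assumed, and Max > Min is required):

Hobject Image, Domain, ImageScaled;
double  Min, Max, Range, Mult, Add;

read_image(&Image,"some_int2_image") ;             /* hypothetical file name */
get_domain(Image,&Domain) ;
min_max_gray(Domain,Image,0.0,&Min,&Max,&Range) ;
Mult = 255.0 / (Max - Min) ;                       /* map [Min,Max] to [0,255] */
Add  = -Mult * Min ;
scale_image(Image,&ImageScaled,Mult,Add) ;
disp_image(ImageScaled,WindowHandle) ;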
Attention
If a wrong output mode was set by set_paint, the error will be reported when disp_image is used.
Parameter

. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Gray value image to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

/* Output of a gray image: */


read_image(&Image,"affe");
disp_image(Image,WindowHandle);

Result
If the used image contains valid values and a correct output mode is set, disp_image returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_obj, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation


disp_line ( Hlong WindowHandle, double Row1, double Column1,
            double Row2, double Column2 )

T_disp_line ( const Htuple WindowHandle, const Htuple Row1,
              const Htuple Column1, const Htuple Row2, const Htuple Column2 )

Draws lines in a window.


disp_line displays one or several lines in the output window. A line is described by the coordinates of the start
(Row1,Column1) and the coordinates of the end (Row2,Column2). The procedures used to control the display
of regions (e.g. set_color, set_gray, set_draw, set_line_width) can also be used with lines.
Several lines can be displayed with one call by using tuple parameters. For the use of colors with several lines, see
set_color.
Attention
The starting points and the ending points of the lines must be in the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) double
Row index of the start.
Default Value : 32
Suggested values : Row1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) double
Column index of the start.
Default Value : 32
Suggested values : Column1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) double
Row index of end.
Default Value : 64
Suggested values : Row2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; (Htuple .) double
Column index of end.
Default Value : 64
Suggested values : Column2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Example

/* Procedure to display the outline of a rectangle: */

disp_rectangle1_margin(long WindowHandle,
long Row1, long Column1,
long Row2, long Column2)
{
disp_line(WindowHandle,Row1,Column1,Row1,Column2) ;
disp_line(WindowHandle,Row1,Column2,Row2,Column2) ;
disp_line(WindowHandle,Row2,Column2,Row2,Column1) ;


disp_line(WindowHandle,Row2,Column1,Row1,Column1) ;
}

Result
disp_line returns H_MSG_TRUE.
Parallelization Information
disp_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_arrow, disp_rectangle1, disp_rectangle2, disp_region, gen_region_polygon,
gen_region_points
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation

disp_obj ( const Hobject Object, Hlong WindowHandle )


T_disp_obj ( const Hobject Object, const Htuple WindowHandle )

Displays image objects (image, region, XLD).


disp_obj displays objects depending of their kind. disp_obj is equivalent to disp_image for one channel
images, equivalent to disp_color for three channel images, equivalent to disp_region for regions and
equivalent to disp_xld for XLDs.
Parameter

. Object (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject


Image object to be displayed.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

/* Output of a gray image: */


read_image(&Image,"affe");
disp_obj(Image,WindowHandle);
threshold(Image,&Region,0.0,128.0);
disp_obj(Region,WindowHandle);

Result
If the used object is valid and a correct output mode is set, disp_obj returns H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
disp_obj is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_color, disp_image, disp_xld, disp_region


See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation

T_disp_polygon ( const Htuple WindowHandle, const Htuple Row,
                 const Htuple Column )

Displays a polyline.
disp_polygon displays a polyline with the row coordinates Row and the column coordinates Column in the
output window. The parameters Row and Column have to be provided as tuples. Straight lines are drawn between
the given points. The start and the end of the polyline are not connected.
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw,
set_line_width) can also be used with polylines.
Attention
The given coordinates must lie within the window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; Htuple . Hlong / double
Row index
Default Value : [16,80,80]
Suggested values : Row ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; Htuple . Hlong / double
Column index
Default Value : [48,16,80]
Suggested values : Column ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Example

/* display a rectangle */

disp_rectangle1_margin1(long WindowHandle,
long Row1, long Column1,
long Row2, long Column2)
{
Htuple Row, Col;
create_tuple(&Row,5) ;
create_tuple(&Col,5) ;

set_i(Row,Row1,0) ;
set_i(Col,Column1,0) ;

set_i(Row,Row1,1) ;
set_i(Col,Column2,1) ;

set_i(Row,Row2,2) ;
set_i(Col,Column2,2) ;


set_i(Row,Row2,3) ;
set_i(Col,Column1,3) ;

set_i(Row,Row1,4) ;
set_i(Col,Column1,4) ;

T_disp_polygon(WindowHandle,Row,Col) ;

destroy_tuple(Row) ;
destroy_tuple(Col) ;
}

Result
disp_polygon returns H_MSG_TRUE.
Parallelization Information
disp_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation

disp_rectangle1 ( Hlong WindowHandle, double Row1, double Column1,
                  double Row2, double Column2 )

T_disp_rectangle1 ( const Htuple WindowHandle, const Htuple Row1,
                    const Htuple Column1, const Htuple Row2, const Htuple Column2 )

Display of rectangles aligned to the coordinate axes.


disp_rectangle1 displays one or several rectangles in the output window. A rectangle is described by the
upper left corner (Row1,Column1) and the lower right corner (Row2,Column2). If the given coordinates are
not within the boundary of the window the rectangle is clipped accordingly. The procedures used to control the
display of regions (e.g. set_color, set_gray, set_draw, set_line_width) can also be used with
rectangles. Several rectangles can be displayed with one call by using tuple parameters.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) double / Hlong
Row index of the upper left corner.
Default Value : 16
Suggested values : Row1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) double / Hlong
Column index of the upper left corner.
Default Value : 16
Suggested values : Column1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10


. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) double / Hlong


Row index of the lower right corner.
Default Value : 48
Suggested values : Row2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Row2 ≥ Row1
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x(-array) ; (Htuple .) double / Hlong
Column index of the lower right corner.
Default Value : 80
Suggested values : Column2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Column2 ≥ Column1
Example

set_color(WindowHandle,"green") ;
draw_region(&MyRegion,WindowHandle) ;
smallest_rectangle1(MyRegion,&R1,&C1,&R2,&C2) ;
disp_rectangle1(WindowHandle,R1,C1,R2,C2) ;

Result
disp_rectangle1 returns H_MSG_TRUE.
Parallelization Information
disp_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_rectangle2, gen_rectangle1, disp_region, disp_line, set_shape
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation

disp_rectangle2 ( Hlong WindowHandle, double CenterRow,
                  double CenterCol, double Phi, double Length1, double Length2 )

T_disp_rectangle2 ( const Htuple WindowHandle, const Htuple CenterRow,
                    const Htuple CenterCol, const Htuple Phi, const Htuple Length1,
                    const Htuple Length2 )

Displays arbitrarily oriented rectangles.


disp_rectangle2 draws one or several arbitrarily oriented rectangles in the output window. A rectangle is
described by the center (CenterRow,CenterCol), the orientation Phi (in radians) and half the lengths of
the edges Length1 and Length2. The procedures used to control the display of regions (e.g. set_color,
set_gray, set_draw) can also be used with rectangles. Several rectangles can be displayed with one call by
using tuple parameters. For the use of colors with several rectangles, see set_color.
Attention
The center must lie within the window boundaries.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double / Hlong
Row index of the center.
Default Value : 48
Suggested values : CenterRow ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double / Hlong
Column index of the center.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double / Hlong
Orientation of rectangle in radians.
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Phi ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; (Htuple .) double / Hlong
Half of the length of the longer side.
Default Value : 48
Suggested values : Length1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; (Htuple .) double / Hlong
Half of the length of the shorter side.
Default Value : 32
Suggested values : Length2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Length2 < Length1
Example

set_color(WindowHandle,"green") ;
draw_region(&MyRegion,WindowHandle) ;
elliptic_axis(MyRegion,&Ra,&Rb,&Phi) ;
area_center(MyRegion,_,&Row,&Column) ;
disp_rectangle2(WindowHandle,Row,Column,Phi,Ra,Rb) ;

Result
disp_rectangle2 returns H_MSG_TRUE, if the parameters are correct. Otherwise an exception handling is
raised.
Parallelization Information
disp_rectangle2 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_region, gen_rectangle2, disp_rectangle1, set_shape


See also
open_window, open_textwindow, disp_region, set_color, set_draw, set_line_width
Module
Foundation

disp_region ( const Hobject DispRegions, Hlong WindowHandle )


T_disp_region ( const Hobject DispRegions, const Htuple WindowHandle )

Displays regions in a window.


disp_region displays the regions in DispRegions in the output window. The parameters for output can be
set with the procedures set_color, set_gray, set_draw, set_line_width, etc.
The color(s) for the display of the regions are determined with set_color, set_rgb, set_gray or
set_colored. If more than one region is displayed and more than one color is set, the colors are assigned
in a cyclic way to the regions.
The form of the region for output can be modified by set_paint (e.g. encompassing circle, convex hull).
The command set_draw determines if the region is filled or only the boundary is drawn. If only the
boundary is drawn, the thickness of the boundary will be determined by set_line_width and the style by
set_line_style.
Parameter
. DispRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

/* Output with 12 colors: */


set_colored(WindowHandle,12) ;
disp_region(SomeSegments,WindowHandle) ;

/* Symbolic representation: */
set_draw(WindowHandle,"margin") ;
set_color(WindowHandle,"red") ;
set_shape(WindowHandle,"ellipse") ;
disp_region(SomeSegments,WindowHandle) ;

/* Representation of a margin with pattern: */


set_draw(WindowHandle,"margin") ;
create_tuple(&Color,2) ;
set_s(Color,"blue",0) ;
set_s(Color,"red",1) ;
create_tuple(&Handle,1) ;
set_i(Handle,WindowHandle,0) ;
T_set_color(Handle,Color) ;
create_tuple(&Par,2) ;

set_i(Par,12,0) ;
set_i(Par,3,1) ;
T_set_line_style(Handle,Par) ;
disp_region(Segments,WindowHandle) ;

Result
disp_region returns H_MSG_TRUE.
Parallelization Information
disp_region is reentrant, local, and processed without parallelization.


Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_shape, set_line_style, set_insert,
set_fix, set_draw, set_color, set_colored, set_line_width
Alternatives
disp_obj, disp_arrow, disp_line, disp_circle, disp_rectangle1, disp_rectangle2,
disp_ellipse
See also
open_window, open_textwindow, set_color, set_colored, set_draw, set_shape,
set_paint, set_gray, set_rgb, set_hsi, set_pixel, set_line_width, set_line_style,
set_insert, set_fix, paint_region, dump_window
Module
Foundation

disp_xld ( const Hobject XLDObject, Hlong WindowHandle )


T_disp_xld ( const Hobject XLDObject, const Htuple WindowHandle )

Display an XLD object.


disp_xld serves to display an XLD object of arbitrary type.
Parameter
. XLDObject (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject
XLD object to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window id.
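Example

A possible usage sketch; Image and WindowHandle are assumed to stem from preceding read_image and
open_window calls, and the threshold values are arbitrary:

/* convert a segmented region into XLD contours and display them */
threshold(Image,&Region,128,255) ;
gen_contour_region_xld(Region,&Contours,"border") ;
set_color(WindowHandle,"green") ;
disp_xld(Contours,WindowHandle) ;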
Parallelization Information
disp_xld is reentrant, local, and processed without parallelization.
See also
disp_image, disp_region, disp_channel, disp_color, disp_line, disp_arc
Module
Foundation

4.6 Parameters
get_comprise ( Hlong WindowHandle, char *Mode )
T_get_comprise ( const Htuple WindowHandle, Htuple *Mode )

Get the output treatment of an image matrix.


get_comprise returns the output mode of grayvalues in the window WindowHandle that is used by
disp_image and disp_color. The output mode defines whether only the grayvalues of objects are dis-
played or the whole image is displayed. The query is used for temporary mode settings, i.e., the current mode is
queried, then changed with set_comprise, and finally restored.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Display mode for images.
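Example

A possible sketch of the query/change/restore pattern described above; Image and WindowHandle are
assumed to exist, and the buffer size is an arbitrary choice:

char Mode[128] ;

get_comprise(WindowHandle,Mode) ;      /* remember the current mode           */
set_comprise(WindowHandle,"image") ;   /* temporarily display the whole image */
disp_image(Image,WindowHandle) ;
set_comprise(WindowHandle,Mode) ;      /* restore the previous mode           */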
Result
get_comprise returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_comprise is reentrant and processed without parallelization.


Possible Successors
set_comprise, disp_image, disp_image
See also
set_comprise, disp_image, disp_color
Module
Foundation

get_draw ( Hlong WindowHandle, char *Mode )


T_get_draw ( const Htuple WindowHandle, Htuple *Mode )

Get the current region fill mode.


get_draw returns the region fill mode of the output window. It is used by operators such as disp_region,
disp_circle, disp_arrow, disp_rectangle1, disp_rectangle2 etc. The region fill mode is
set with set_draw.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Current region fill mode.
Result
get_draw returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_draw is reentrant and processed without parallelization.
Possible Successors
set_draw, disp_region
See also
set_draw, disp_region, set_paint
Module
Foundation

get_fix ( Hlong WindowHandle, char *Mode )


T_get_fix ( const Htuple WindowHandle, Htuple *Mode )

Get the fixing mode of the current look-up-table (lut).


Use get_fix to query the fixing mode of the current look-up-table (i.e., the look-up-table of the valid window)
that was previously set with set_fix.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Current Mode of fixing.
Result
get_fix returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_fix is reentrant, local, and processed without parallelization.
Possible Successors
set_fix, set_pixel, set_rgb


See also
set_fix
Module
Foundation

T_get_hsi ( const Htuple WindowHandle, Htuple *Hue, Htuple *Saturation,


Htuple *Intensity )

Get the HSI coding of the current color.


get_hsi returns the output color or grayvalues, respectively, for the window, described in Hue, Saturation
and Intensity. get_hsi corresponds to the procedure get_pixel but returns the entries of the color
lookup table instead of its indices. The values returned by get_hsi can be set with set_hsi.
Attention
The values returned by get_hsi may be inaccurate due to rounding errors. They do not necessarily match the
values set with set_hsi exactly (colors are stored in RGB internally).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Hue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Hue (color value) of the current color.
. Saturation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Saturation of the current color.
. Intensity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Intensity of the current color.
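Example

A possible usage sketch; WindowHandle is assumed to refer to an open window:

Htuple WindowHandleTuple, Hue, Saturation, Intensity ;

create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_get_hsi(WindowHandleTuple,&Hue,&Saturation,&Intensity) ;
printf("hue=%ld saturation=%ld intensity=%ld\n",
       (long)get_i(Hue,0),(long)get_i(Saturation,0),(long)get_i(Intensity,0)) ;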
Result
get_hsi returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_hsi is reentrant and processed without parallelization.
Possible Successors
set_hsi, set_rgb, disp_image
See also
set_hsi, set_color, set_rgb, trans_to_rgb, trans_from_rgb
Module
Foundation

get_icon ( Hobject *Icon, Hlong WindowHandle )


T_get_icon ( Hobject *Icon, const Htuple WindowHandle )

Query the icon for region output.


get_icon queries the icon that was set with set_icon.
Parameter
. Icon (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Icon for the region's center of gravity.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
Example

/* draw a region and an icon. */


/* set it and get it again. */


draw_region(&Region,WindowHandle) ;
draw_region(&Icon,WindowHandle) ;
set_icon(Icon,WindowHandle) ;
set_shape(WindowHandle,"icon") ;
disp_region(Region,WindowHandle) ;
get_icon(&OldIcon,WindowHandle) ;
disp_region(OldIcon,WindowHandle) ;

Result
get_icon always returns H_MSG_TRUE.
Parallelization Information
get_icon is reentrant and processed without parallelization.
Possible Predecessors
set_icon
Possible Successors
disp_region
Module
Foundation

get_insert ( Hlong WindowHandle, char *Mode )


T_get_insert ( const Htuple WindowHandle, Htuple *Mode )

Get the current display mode.


get_insert returns the display mode of the output window. It is used by procedures like disp_region,
disp_line, disp_rectangle1, etc. The mode is set with set_insert. Possible values for Mode can be
queried with the procedure query_insert.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Display mode.
Result
get_insert returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_insert is reentrant and processed without parallelization.
Possible Predecessors
query_insert
Possible Successors
set_insert, disp_image
See also
set_insert, query_insert, disp_region, disp_line
Module
Foundation

get_line_approx ( Hlong WindowHandle, Hlong *Approximation )


T_get_line_approx ( const Htuple WindowHandle, Htuple *Approximation )

Get the current approximation error for contour display.


get_line_approx returns a parameter that controls the approximation error for region contour display in the
window. It is used by the procedure disp_region. Approximation controls the polygon approximation
for contour display (0 ⇔ no approximation). Approximation is only important for displaying the contour of
objects, especially if a line style was set with set_line_style.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Approximation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Current approximation error for contour display.
Result
get_line_approx returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_line_approx is reentrant and processed without parallelization.
Possible Successors
set_line_approx, set_line_style, disp_region
See also
get_region_polygon, set_line_approx, set_line_style, disp_region
Module
Foundation

T_get_line_style ( const Htuple WindowHandle, Htuple *Style )

Get the current graphic mode for contours.


get_line_style returns the display mode for contours when displaying regions. It is used by procedures
like disp_region, disp_line, disp_polygon, etc. Style is set with the procedure set_line_style
and is only relevant when the contours of objects are displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Style (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Template for contour display.
Result
get_line_style returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_line_style is reentrant, local, and processed without parallelization.
See also
set_line_style, disp_region
Module
Foundation

get_line_width ( Hlong WindowHandle, Hlong *Width )


T_get_line_width ( const Htuple WindowHandle, Htuple *Width )

Get the current line width for contour display.


get_line_width returns the line width for region display in the window. It is used by procedures like
disp_region, disp_line, disp_polygon, etc. Width is set with the procedure set_line_width.
Width is only important for displaying the contour of objects.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Current line width for contour display.
Result
get_line_width returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_line_width is reentrant and processed without parallelization.
Possible Successors
set_line_width, set_line_style, disp_region
See also
set_line_width, disp_region
Module
Foundation

T_get_paint ( const Htuple WindowHandle, Htuple *Mode )

Get the current display mode for grayvalues.


get_paint returns the display mode for grayvalues in the window. Mode is used by the procedure
disp_image. get_paint is used for temporary changes of the grayvalue display mode. The current value
is queried, then changed (with procedure set_paint) and finally the old value is written back. The available
modes can be viewed with the procedure query_paint. Mode is the name of the display mode. If a mode
can be customized with parameters, the parameter values are passed in a tuple after the mode name. The order of
values is the same as in set_paint.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char * / Hlong *
Name and parameter values of the current display mode.
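Example

A possible sketch of the temporary change described above, using only the tuple versions of the operators;
'3D-plot' is one of the modes listed by query_paint, and Image and WindowHandle are assumed to exist:

Htuple WindowHandleTuple, OldMode, NewMode ;

create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_get_paint(WindowHandleTuple,&OldMode) ;   /* remember the current mode */
create_tuple(&NewMode,1) ;
set_s(NewMode,"3D-plot",0) ;
T_set_paint(WindowHandleTuple,NewMode) ;
disp_image(Image,WindowHandle) ;
T_set_paint(WindowHandleTuple,OldMode) ;    /* restore the previous mode */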
Result
get_paint returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_paint is reentrant and processed without parallelization.
Possible Predecessors
query_paint
Possible Successors
set_paint, disp_region, disp_image
See also
set_paint, query_paint, disp_image
Module
Foundation

get_part ( Hlong WindowHandle, Hlong *Row1, Hlong *Column1, Hlong *Row2,


Hlong *Column2 )

T_get_part ( const Htuple WindowHandle, Htuple *Row1, Htuple *Column1,


Htuple *Row2, Htuple *Column2 )

Get the image part.


get_part returns the upper left and lower right corner of the image part shown in the window. The image part
can be changed with the procedure set_part (Default is the whole image).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong *
Row index of the image part’s upper left corner.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong *
Column index of the image part’s upper left corner.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong *
Row index of the image part’s lower right corner.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; Hlong *
Column index of the image part’s lower right corner.
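Example

A possible sketch that temporarily zooms into a detail and then restores the original part; the coordinates are
arbitrary, and Image and WindowHandle are assumed to exist:

Hlong Row1, Column1, Row2, Column2 ;

get_part(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;   /* remember the current part */
set_part(WindowHandle,100,100,227,227) ;                  /* zoom into a detail        */
disp_image(Image,WindowHandle) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;        /* restore the original part */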
Result
get_part returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_part is reentrant and processed without parallelization.
Possible Successors
set_part, disp_region, disp_image
See also
set_part, disp_image, disp_region, disp_color
Module
Foundation

get_part_style ( Hlong WindowHandle, Hlong *Style )


T_get_part_style ( const Htuple WindowHandle, Htuple *Style )

Get the current interpolation mode for grayvalue display.


get_part_style returns the interpolation mode used for displaying an image part in the window. An
interpolation takes place if the output window is larger than the image format or the image output format (see set_part).
HALCON supports three interpolation modes:

0 no interpolation (low quality, very fast).


1 unweighted interpolation (average quality and computation time)
2 weighted interpolation (high quality, slow)

The current mode can be changed with set_part_style.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Style (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Interpolation mode for image display: 0 (fast, low quality) to 2 (slow, high quality).
List of values : Style ∈ {0, 1, 2}
Result
get_part_style returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_part_style is reentrant and processed without parallelization.
Possible Successors
set_part_style, disp_region, disp_image
See also
set_part_style, set_part, disp_image, disp_color


Module
Foundation

T_get_pixel ( const Htuple WindowHandle, Htuple *Pixel )

Get the current color lookup table index.


get_pixel returns the internal coding of the output grayvalue or color, respectively, for the window. If the
output mode is set to color(s) or grayvalue(s) (see set_color or set_gray), then the color- or grayvalues
are transformed for internal use. The internal code is then used for (physical) screen display. The transformation
depends on the mapping characteristics and the condition of the output device and can be different in different
program runs. Do not confuse this use of the term "pixel" with the term "pixel" in image processing (for grayvalue
access see get_grayval). Here a pixel is meant to be the color lookup table index.
With get_pixel it is possible to save the output mode without knowing whether colors or grayvalues are used.
Pixel is set with the procedure set_pixel.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Pixel (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Index of the current color look-up table.
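Example

A possible sketch of saving and restoring the output mode as described above; Region and WindowHandle
are assumed to exist:

Htuple WindowHandleTuple, Pixel ;

create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_get_pixel(WindowHandleTuple,&Pixel) ;    /* save the current output mode  */
set_color(WindowHandle,"red") ;
disp_region(Region,WindowHandle) ;
T_set_pixel(WindowHandleTuple,Pixel) ;     /* restore the saved output mode */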
Result
get_pixel returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_pixel is reentrant and processed without parallelization.
Possible Successors
set_pixel, disp_region, disp_image
See also
set_pixel, set_fix
Module
Foundation

T_get_rgb ( const Htuple WindowHandle, Htuple *Red, Htuple *Green,


Htuple *Blue )

Get the current color in RGB-coding.


get_rgb returns the output colors or grayvalues, respectively, for the output window. They are defined by the
three color components red, green and blue.
get_rgb is like get_pixel but returns the entries of the color lookup table rather than the indices. The values
returned by get_rgb can be set with set_rgb.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Red (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The current color’s red value.
. Green (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The current color’s green value.
. Blue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The current color’s blue value.
Result
get_rgb returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.


Parallelization Information
get_rgb is reentrant and processed without parallelization.
Possible Successors
set_rgb, disp_region, disp_image
See also
set_rgb
Module
Foundation

get_shape ( Hlong WindowHandle, char *DisplayShape )


T_get_shape ( const Htuple WindowHandle, Htuple *DisplayShape )

Get the current region output shape.


get_shape returns the shape in which regions are displayed. The available shapes can be queried with
query_shape and then changed with set_shape.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. DisplayShape (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Current region output shape.
Result
get_shape returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_shape is reentrant and processed without parallelization.
Possible Predecessors
query_shape
Possible Successors
set_shape, disp_region
See also
set_shape, query_shape, disp_region
Module
Foundation

T_query_all_colors ( const Htuple WindowHandle, Htuple *Colors )

Query all color names.


query_all_colors returns the names of all colors that are known to HALCON. This does not mean that
these colors are available on every screen. On some screens only a subset of the colors may be available (see
query_color). Before opening the first window, set_system can be used to define which and how many
colors should be used. The HALCON colors are used to display regions ( disp_region, disp_polygon,
disp_circle, etc.). They can be defined with set_color.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Colors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Color names.
Example


Htuple Colors,ColorsAtWindow,WindowHandleTuple ;
create_tuple(&WindowHandleTuple,1) ;
open_window(0,0,1,1,"root","invisible","",&WindowHandle) ;
set_i(WindowHandleTuple, WindowHandle, 0) ;
T_query_all_colors(WindowHandleTuple,&Colors) ;
/* interactive selection from Colors, providing the result in ActColors */
set_system("graphic_colors",ActColors) ;
T_query_color(WindowHandleTuple,&ColorsAtWindow) ;
close_window(WindowHandle) ;
for (i=0; i<length_tuple(ColorsAtWindow); i++)
printf("Color #%d = %s\n",(int)i,get_s(ColorsAtWindow,i)) ;

Result
query_all_colors always returns H_MSG_TRUE.
Parallelization Information
query_all_colors is reentrant, local, and processed without parallelization.
Possible Successors
set_system, set_color, disp_region
See also
query_color, set_system, set_color, disp_region, open_window, open_textwindow
Module
Foundation

T_query_color ( const Htuple WindowHandle, Htuple *Colors )

Query all color names displayable in the window.


query_color returns the names of all colors that are usable for region output ( disp_region,
disp_polygon, disp_circle, etc.). On a b/w screen query_color returns ’black’ and ’white’. These
two "colors" are displayable on any screen. In addition to ’black’ and ’white’, several grayvalues (e.g. ’dim gray’)
are returned on screens capable of grayvalues. A list of all displayable colors is returned for screens with color
lookup table. The returned tuple of colors begins with b/w, followed by the three primaries (’red’,’green’,’blue’)
and several grayvalues. Before opening the first window it is furthermore possible to define the color list with
set_system(’graphic_colors’,...). query_all_colors(WindowHandle,Colors) returns a
list of all available colors for the set_system(’graphic_colors’,...) call. For screens with
truecolor output the same list is returned by query_color. The list of available colors (to HALCON ) must
not be confused with the list of displayable colors. For screens with truecolor output the available colors are only
a small subset of the displayable colors. Colors that are not directly available to HALCON can be chosen man-
ually with set_rgb or set_hsi. If colors are chosen that are known to HALCON but cannot be displayed,
HALCON can choose a similar color. To use this feature, set_check(’~color’) must be set.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Colors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Color names.
Example

Htuple Colors, WindowHandleTuple ;


open_window(0,0,-1,-1,0,"invisible","",&WindowHandle);
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple, WindowHandle, 0) ;
T_query_color(WindowHandleTuple,&Colors);
close_window(WindowHandle);
for (i=0; i<length_tuple(Colors); i++)
printf("Color #%d = %s\n",(int)i,get_s(Colors,i));


Result
query_color returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_color is reentrant, local, and processed without parallelization.
Possible Successors
set_color, disp_region
See also
query_all_colors, set_color, disp_region, open_window, open_textwindow
Module
Foundation

T_query_colored ( Htuple *PossibleNumberOfColors )

Query the number of colors for color output.


query_colored returns all possible parameter values for set_colored. set_colored defines how
many colors are used for region or graphics output.
Parameter

. PossibleNumberOfColors (output_control) . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *


Tuple of the possible numbers of colors.
Example

Htuple Colors ;
regiongrowing(Image,&Seg,5,5,6,100) ;
T_query_colored(&Colors) ;
set_colored(WindowHandle,get_i(Colors,1)) ;
disp_region(Seg,WindowHandle) ;

Result
query_colored always returns H_MSG_TRUE.
Parallelization Information
query_colored is reentrant and processed without parallelization.
Possible Successors
set_colored, set_color, disp_region
Alternatives
query_color
See also
set_colored, set_color
Module
Foundation

T_query_gray ( const Htuple WindowHandle, Htuple *Grayval )

Query the displayable grayvalues.


query_gray returns all grayvalues that are used for grayvalue output ( disp_image) and that can be repro-
duced exactly in the window. They can be set with the set_gray call. The number of displayable grayvalues
can be set with set_system(’num_gray_*’,...) before opening the first window.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Grayval (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Tuple of all displayable grayvalues.
Result
query_gray returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_gray is reentrant, local, and processed without parallelization.
Possible Successors
set_gray, disp_region
See also
set_gray, disp_image
Module
Foundation

T_query_insert ( const Htuple WindowHandle, Htuple *Mode )

Query the possible graphic modes.


query_insert returns the possible modes in which pixels can be displayed in the output window. New pixels
may, e.g., overwrite old ones; in most cases there is a functional relationship between old and new values.
Possible display functions:

’copy’: overwrite displayed pixels


’xor’: display old "xor" new pixels
’complement’: complement displayed pixels

’copy’ is always available.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Display function name.
Result
query_insert returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_insert is reentrant, local, and processed without parallelization.
Possible Successors
set_insert, disp_region
See also
set_insert, get_insert
Module
Foundation

query_line_width ( Hlong *Min, Hlong *Max )


T_query_line_width ( Htuple *Min, Htuple *Max )

Query the possible line widths.


query_line_width returns the minimum (Min) and maximum (Max) line widths that can be used for
displaying region borders. The border width is set with set_line_width. It is used by operators like
disp_region, disp_line, disp_circle, disp_rectangle1, disp_rectangle2, etc., if the
drawing mode is ’margin’ ( set_draw(WindowHandle,’margin’)).
Parameter
. Min (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Displayable minimum width.
. Max (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Displayable maximum width.
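Example

A possible sketch that clamps a desired line width to the supported range; Region and WindowHandle are
assumed to exist:

Hlong MinWidth, MaxWidth, Width ;

query_line_width(&MinWidth,&MaxWidth) ;
Width = (5 <= MaxWidth) ? 5 : MaxWidth ;   /* use width 5 if the device supports it */
set_line_width(WindowHandle,Width) ;
set_draw(WindowHandle,"margin") ;
disp_region(Region,WindowHandle) ;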
Result
query_line_width returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_line_width is reentrant and processed without parallelization.
Possible Successors
get_line_width, set_line_width, set_line_style, disp_line
See also
disp_circle, disp_line, disp_rectangle1, disp_rectangle2, disp_region,
set_line_width, get_line_width, set_line_style
Module
Foundation

T_query_paint ( const Htuple WindowHandle, Htuple *Mode )

Query the grayvalue display modes.


query_paint returns the names of all grayvalue display modes (e.g. ’gray’, ’3D-plot’, ’contourline’, etc.) for
the output window. These modes are used by set_paint. query_paint only returns the names of the
display values, not the additional parameters that may be necessary for some modes.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Grayvalue display mode names.
Result
query_paint returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_paint is reentrant, local, and processed without parallelization.
Possible Successors
get_paint, set_paint, disp_image
See also
set_paint, get_paint, disp_image
Module
Foundation

T_query_shape ( Htuple *DisplayShape )

Query the region display modes.


query_shape returns the names of all region display modes (e.g. ’original’, ’circle’, ’rectangle1’, ’rectangle2’,
’ellipse’, etc.) for the window. They are used by set_shape.


Parameter
. DisplayShape (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
region display mode names.
Result
query_shape returns H_MSG_TRUE, if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
query_shape is reentrant and processed without parallelization.
Possible Successors
get_shape, set_shape, disp_region
See also
set_shape, get_shape, disp_region
Module
Foundation

set_color ( Hlong WindowHandle, const char *Color )


T_set_color ( const Htuple WindowHandle, const Htuple Color )

Set output color.


set_color defines the colors for region output in the window. The available colors can be queried with the
procedure query_color. The "colors" ’black’ and ’white’ are available on all screens. If colors are used
that are not displayable on the screen, HALCON can choose a similar, displayable color for the output. For this,
set_check(’~color’) must be called. Furthermore, the list of available colors can be set with the procedure
set_system(’graphic_colors’,...). This must be done before opening the first output window.
If only a single color is passed, all output is in this color. If a tuple of colors is passed, the colors are assigned to
the output regions modulo the number of colors. In the example below, the first circle is displayed in red, the
second in green and the third in red again. HALCON always begins output with the first color passed. Note that
the number of output colors depends on the number of objects that are displayed in one procedure call. If only
single objects are displayed, they always appear in the first color, even if they consist of more than one connected
component.
The defined colors are used until set_color, set_pixel, set_rgb, set_hsi or set_gray is called
again.
Colors are defined separately for each window. They can only be changed for the valid window.
Color is used in procedures with region output like disp_region, disp_line, disp_rectangle1,
disp_arrow, etc. It is also used by procedures with grayvalue output in certain output modes (e.g. ’3D-plot’,
’histogram’, ’contourline’, etc.; see set_paint).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. Color (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Output color names.
Default Value : "white"
Suggested values : Color ∈ {"black", "white", "red", "green", "blue", "cyan", "magenta", "yellow", "dim
gray", "gray", "light gray", "medium slate blue", "coral", "slate blue", "spring green", "orange red", "orange",
"dark olive green", "pink", "cadet blue"}
Example

Htuple Colors, WindowHandleTuple ;


create_tuple(&Colors,2) ;
set_s(Colors,"red",0) ;
set_s(Colors,"green",1) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple, WindowHandle,0) ;


T_set_color(WindowHandleTuple,Colors) ;
disp_circle(WindowHandle,(double)100.0,(double)200.0,(double)100.0) ;
disp_circle(WindowHandle,(double)200.0,(double)300.0,(double)100.0) ;
disp_circle(WindowHandle,(double)300.0,(double)100.0,(double)100.0) ;

Result
set_color returns H_MSG_TRUE if the window is valid and the passed colors are displayable on the screen.
Otherwise an exception handling is raised.
Parallelization Information
set_color is reentrant, local, and processed without parallelization.
Possible Predecessors
query_color
Possible Successors
disp_region
Alternatives
set_rgb, set_hsi
See also
get_rgb, disp_region, set_fix, set_paint
Module
Foundation

set_colored ( Hlong WindowHandle, Hlong NumberOfColors )


T_set_colored ( const Htuple WindowHandle,
const Htuple NumberOfColors )

Set multiple output colors.


set_colored is a shortcut for certain set_color calls. It allows the user to display a region set in different
colors. NumberOfColors defines the number of colors that are used. Valid values for NumberOfColors
can be queried with query_colored. Furthermore, the list of available colors can be set with the procedure
set_system(’graphic_colors’,...). This must be done before opening the first output window.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. NumberOfColors (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of output colors.
Default Value : 12
List of values : NumberOfColors ∈ {3, 6, 12}
Result
set_colored returns H_MSG_TRUE if NumberOfColors is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_colored is reentrant, local, and processed without parallelization.
Possible Predecessors
query_colored, set_color
Possible Successors
disp_region
See also
query_colored, set_color, disp_region
Module
Foundation


set_comprise ( Hlong WindowHandle, const char *Mode )


T_set_comprise ( const Htuple WindowHandle, const Htuple Mode )

Define the image matrix output clipping.


set_comprise defines the image matrix output clipping. If Mode is set to ’object’, only grayvalues belonging
to the output object are displayed. If set to ’image’, the whole image matrix is displayed. Default is ’object’.
Attention
If Mode was set to ’image’, undefined grayvalues may be displayed. Depending on the context they are black or
can have random content. See the examples.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Clipping mode for grayvalue output.
Default Value : "object"
List of values : Mode ∈ {"image", "object"}
Example

open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
read_image(&Image,"fabrik") ;
threshold(Image,&Seg,100,255) ;
set_system("init_new_image","false") ;
sobel_amp(Image,&Sob,"sum_abs",3) ;
disp_image(Sob,WindowHandle) ;
get_comprise(WindowHandle,Mode) ;
fwrite_string("Current mode for gray values: ") ;
fwrite_string(Mode) ;
fnew_line() ;
set_comprise(WindowHandle,"image") ;
get_mbutton(WindowHandle,_,_,_) ;
disp_image(Sob,WindowHandle) ;
fwrite_string("Current mode for gray values: image") ;
fnew_line() ;

Result
set_comprise returns H_MSG_TRUE if Mode is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_comprise is reentrant and processed without parallelization.
Possible Predecessors
get_comprise
Possible Successors
disp_image
See also
get_comprise, disp_image, disp_color
Module
Foundation

set_draw ( Hlong WindowHandle, const char *Mode )


T_set_draw ( const Htuple WindowHandle, const Htuple Mode )

Define the region fill mode.


set_draw defines the region fill mode. If Mode is set to ’fill’, output regions are filled, if set to ’margin’, only
contours are displayed. Setting Mode only affects the valid window. It is used by procedures with region output like
disp_region, disp_circle, disp_rectangle1, disp_rectangle2, disp_arrow etc. It is also
used by procedures with grayvalue output for some grayvalue output modes (e.g. ’histogram’, see set_paint).
If the mode is ’margin’, the contour can be affected with set_line_width, set_line_approx and
set_line_style.
Attention
If the output mode is ’margin’ and the line width is more than one, objects may not be displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Fill mode for region output.
Default Value : "fill"
List of values : Mode ∈ {"fill", "margin"}
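Example

A possible usage sketch; Region and WindowHandle are assumed to exist:

set_draw(WindowHandle,"margin") ;    /* draw only the region border  */
set_line_width(WindowHandle,3) ;
set_color(WindowHandle,"yellow") ;
disp_region(Region,WindowHandle) ;
set_draw(WindowHandle,"fill") ;      /* back to filled region output */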
Result
set_draw returns H_MSG_TRUE if Mode is correct and the window is valid. Otherwise an exception handling
is raised.
Parallelization Information
set_draw is reentrant, local, and processed without parallelization.
Possible Predecessors
get_draw
Possible Successors
disp_region
See also
get_draw, disp_region, set_paint, disp_image, set_line_width, set_line_style
Module
Foundation

set_fix ( Hlong WindowHandle, const char *Mode )


T_set_fix ( const Htuple WindowHandle, const Htuple Mode )

Set the fixing of the look-up-table (lut).


Behavior for Mode = ’true’: set_fix fixes the pixel that was last set by one of the operators set_gray,
set_color, set_hsi or set_rgb (remark: here a pixel is the index within the current look-up-table). To
assign a new color to the fixed pixel, set a color or gray value using set_color, set_rgb, set_hsi or
set_gray. This makes it possible to define any color ( set_color), any gray value ( set_gray) and any
color combination ( set_rgb, set_hsi) at any position of the look-up-table.
Setting Mode to ’false’ resets the fixing. To modify or create a look-up-table, call set_pixel, set_fix
(WindowHandle,’true’), set_rgb and set_fix(WindowHandle,’false’) one after another.
Attention
As a side effect, set_fix can change the colors of non-HALCON windows.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of fixing.
Default Value : "true"
List of values : Mode ∈ {"true", "false"}
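Example

A possible sketch of the sequence described above; the gray value and the RGB values are arbitrary, and
WindowHandle is assumed to refer to an open window:

set_gray(WindowHandle,100) ;      /* select the look-up-table entry for gray value 100 */
set_fix(WindowHandle,"true") ;    /* fix this entry                                    */
set_rgb(WindowHandle,255,0,0) ;   /* assign a new color to the fixed entry             */
set_fix(WindowHandle,"false") ;   /* release the fixing again                          */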
Result
set_fix returns H_MSG_TRUE if the window is valid, the hardware supports a look-up-table and all parameters
are correct. Otherwise an exception handling is raised.


Parallelization Information
set_fix is reentrant, local, and processed without parallelization.
Possible Predecessors
get_fix
Possible Successors
set_pixel, set_rgb
See also
get_fix, set_pixel, set_rgb, set_color, set_hsi, set_gray
Module
Foundation

set_gray ( Hlong WindowHandle, Hlong GrayValues )


T_set_gray ( const Htuple WindowHandle, const Htuple GrayValues )

Define grayvalues for region output.


set_gray defines the grayvalues for region output. Grayvalues are defined as the range of the color
lookup table that is used for grayvalue output with disp_image in conjunction with set_paint
(WindowHandle,’gray’). These entries can be modified by set_lut. So a ’grayvalue’ is the color in
which a pixel with the same value is displayed (not necessarily really gray). In general, when changing the color
lookup table with set_lut, the colors of the displayed image will change too.
If a grayvalue is needed as a color for image output (i.e. no color changes with set_lut are possible), it can be
set with set_color(WindowHandle,’gray’).
If only a single grayvalue is passed, all output will take place in that grayvalue. If a tuple of grayvalues is passed,
all output will take place in grayvalues modulo the number of tuple elements. In the example below, the first circle
is displayed with grayvalue 100, the second with 200 and the third with 100 again. Every output procedure starts
with the first grayvalue. Note that the number of output grayvalues depends on the number of objects that are
displayed in one procedure call. If only single objects are displayed, they always appear in the first grayvalue, even
if they consist of more than one connected component.
When the procedures set_gray, set_color, set_rgb, set_hsi are called, they overwrite the existing
values. If not all grayvalues are displayable on the output device, the number range of GrayValues (0..255)
is dithered to the range of displayable grayvalues. In any case 0 is displayed as black and 255 as white. The
displayable grayvalues can be queried with the procedure query_gray. Furthermore, the number of actually
displayed grayvalues can be changed with set_system(’num_gray_*’,...). This must be done before
opening the first window. With set_check(’~color’) error messages can be suppressed if a grayvalue
can’t be displayed on the screen. In that case, a similar grayvalue is displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. GrayValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Grayvalues for region output.
Default Value : 255
Suggested values : GrayValues ∈ {0, 1, 2, 10, 16, 32, 64, 100, 120, 128, 250, 251, 252, 253, 254, 255}
Typical range of values : 0 ≤ GrayValues ≤ 255
Example

Htuple GrayValues, WindowHandleTuple ;
create_tuple(&GrayValues,2) ;
set_i(GrayValues,100,0) ;
set_i(GrayValues,200,1) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_set_gray(WindowHandleTuple,GrayValues) ;
disp_circle(WindowHandle,(double)100.0,(double)200.0,(double)100.0) ;
disp_circle(WindowHandle,(double)200.0,(double)300.0,(double)100.0) ;
disp_circle(WindowHandle,(double)300.0,(double)100.0,(double)100.0) ;


Result
set_gray returns H_MSG_TRUE if GrayValues is displayable and the window is valid. Otherwise an ex-
ception handling is raised.
Parallelization Information
set_gray is reentrant, local, and processed without parallelization.
Possible Successors
disp_region
See also
get_pixel, set_color
Module
Foundation

set_hsi ( Hlong WindowHandle, Hlong Hue, Hlong Saturation,


Hlong Intensity )

T_set_hsi ( const Htuple WindowHandle, const Htuple Hue,


const Htuple Saturation, const Htuple Intensity )

Define output colors (HSI-coded).


set_hsi sets the region output color(s)/grayvalue(s) for the valid window. Colors are passed as Hue,
Saturation, and Intensity. Transformation from HSI to RGB is done with:

H = (2π Hue)/255
I = (√6 Intensity)/255
M1 = (sin(H) Saturation)/(255 √6)
M2 = (cos(H) Saturation)/(255 √2)
R = (2 M1 + I)/(4 √6)
G = (−M1 + M2 + I)/(4 √6)
B = (−M1 − M2 + I)/(4 √6)
Red = R ∗ 255
Green = G ∗ 255
Blue = B ∗ 255
If only one combination is passed, all output will take place in that color. If a tuple of colors is passed, the colors
are assigned to regions and geometric objects modulo the number of colors. HALCON always begins output with
the first color passed. Note that the number of output colors depends on the number of objects that are displayed
in one procedure call. If only single objects are displayed, they always appear in the first color, even if they consist
of more than one connected component.
Selected colors are used until the next call of set_color, set_pixel, set_rgb or set_gray. Colors
are relevant to windows, i.e. only the colors of the valid window can be set. Region output colors are used by
operators like disp_region, disp_line, disp_rectangle1, disp_rectangle2, disp_arrow,
etc. It is also used by procedures with grayvalue output in certain output modes (e.g. ’3D-plot’,’histogram’,
’contourline’, etc. See set_paint).
Attention
The selected intensities may not be available for the selected hues. In that case, the intensities will be lowered
automatically.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. Hue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Hue for region output.
Default Value : 30
Typical range of values : 0 ≤ Hue ≤ 255
Restriction : (0 ≤ Hue) ∧ (Hue ≤ 255)


. Saturation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong


Saturation for region output.
Default Value : 255
Typical range of values : 0 ≤ Saturation ≤ 255
Restriction : (0 ≤ Saturation) ∧ (Saturation ≤ 255)
. Intensity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Intensity for region output.
Default Value : 84
Typical range of values : 0 ≤ Intensity ≤ 255
Restriction : (0 ≤ Intensity) ∧ (Intensity ≤ 255)
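Example

A possible usage sketch; the HSI values are arbitrary, and Region and WindowHandle are assumed to exist:

set_hsi(WindowHandle,30,255,200) ;
disp_region(Region,WindowHandle) ;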
Result
set_hsi returns H_MSG_TRUE if the window is valid and the output colors are displayable. Otherwise an
exception handling is raised.
Parallelization Information
set_hsi is reentrant, local, and processed without parallelization.
Possible Predecessors
get_hsi
Possible Successors
disp_region
See also
get_hsi, get_pixel, trans_from_rgb, trans_to_rgb, disp_region
Module
Foundation

set_icon ( const Hobject Icon, Hlong WindowHandle )


T_set_icon ( const Hobject Icon, const Htuple WindowHandle )

Icon definition for region output.


set_icon defines an icon for region output ( disp_region). It is displayed at the region's center of gravity.
The use of this icon is activated with set_shape.
Parameter
. Icon (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Icon for center of gravity.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
Example

/* draw a region and an icon */


draw_region(&Region,WindowHandle) ;
draw_region(&Icon,WindowHandle) ;
set_icon(Icon,WindowHandle) ;
set_shape(WindowHandle,"icon") ;
disp_region(Region,WindowHandle) ;

Result
set_icon returns H_MSG_TRUE if exactly one region is passed. Otherwise an exception handling is raised.
Parallelization Information
set_icon is reentrant and processed without parallelization.
Possible Predecessors
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region
Possible Successors
set_shape, disp_region


Module
Foundation

set_insert ( Hlong WindowHandle, const char *Mode )


T_set_insert ( const Htuple WindowHandle, const Htuple Mode )

Define the pixel output function.


set_insert defines the function with which pixels are displayed in the output window. For example, a new
pixel may overwrite the old value; in most cases there is a functional relationship between old and new values.
The setting only applies to the valid window. Output procedures that honor Mode are, e.g.,
disp_region, disp_polygon, disp_circle.
Possible display functions are:

’copy’: overwrite displayed pixels


’xor’: display old "xor" new pixels
’complement’: complement displayed pixels

Not all functions may be available, depending on the physical display. However, ’copy’ is always available.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the display function.
Default Value : "copy"
List of values : Mode ∈ {"copy", "xor", "complement"}
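Example

A possible sketch using the ’xor’ mode for non-destructive display; Region and WindowHandle are assumed
to exist:

set_insert(WindowHandle,"xor") ;
disp_region(Region,WindowHandle) ;   /* display the region                   */
disp_region(Region,WindowHandle) ;   /* displaying it again removes it (xor) */
set_insert(WindowHandle,"copy") ;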
Result
set_insert returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_insert is reentrant, local, and processed without parallelization.
Possible Predecessors
query_insert, get_insert
Possible Successors
disp_region
See also
get_insert, query_insert
Module
Foundation

set_line_approx ( Hlong WindowHandle, Hlong Approximation )


T_set_line_approx ( const Htuple WindowHandle,
const Htuple Approximation )

Define the approximation error for contour display.


set_line_approx defines the approximation error for region contour display in the window.
Approximation values greater than zero cause a polygon approximation, i.e., a smoothing (with a maximum
deviation between polygon and contour of Approximation pixels). The approximation algorithm is the same as in
get_region_polygon. set_line_approx is important for contour output via set_line_style.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window_id.
. Approximation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum deviation from the original contour.
Default Value : 0
Typical range of values : 0 ≤ Approximation
Restriction : Approximation ≥ 0
Example

/* Calling */
set_line_approx(WindowHandle,Approximation) ;
set_draw(WindowHandle,"margin") ;
disp_region(Obj,WindowHandle) ;

/* which corresponds to: */
Htuple Approximation,Row,Col, WindowHandleTuple ;
create_tuple(&Approximation,1) ;
set_i(Approximation,0,0) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle, 0) ;
T_get_region_polygon(Obj,Approximation,&Row,&Col) ;
T_disp_polygon(WindowHandleTuple,Row,Col) ;

Result
set_line_approx returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_line_approx is reentrant and processed without parallelization.
Possible Predecessors
get_line_approx
Possible Successors
disp_region
Alternatives
get_region_polygon, disp_polygon
See also
get_line_approx, set_line_style, set_draw, disp_region
Module
Foundation

T_set_line_style ( const Htuple WindowHandle, const Htuple Style )

Define a contour output pattern.


set_line_style defines the output pattern of region contours. The information is used by procedures
like disp_region, disp_line, disp_polygon etc. The current value can be queried with
get_line_style. Style contains up to five pairs of values. The first value is the length of the visible
contour part, the second is the length of the invisible part. The value pairs are used cyclically for contour output.
Attention
set_line_style performs an implicit polygon approximation (equivalent to set_line_approx
(WindowHandle,3)). This approximation error can only be enlarged with set_line_approx.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Style (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Contour pattern.
Default Value : []
Example

Htuple LineStyle ;

/* dashed line (X Windows) */


create_tuple(&LineStyle,2) ;
set_i(LineStyle,20,0) ;
set_i(LineStyle,7,1) ;
T_set_line_style(WindowHandle,LineStyle) ;
destroy_tuple(LineStyle) ;

/* dash-dotted line (X Windows) */


create_tuple(&LineStyle,4) ;
set_i(LineStyle,20,0) ;
set_i(LineStyle,7,1) ;
set_i(LineStyle,3,2) ;
set_i(LineStyle,7,3) ;
T_set_line_style(WindowHandle,LineStyle) ;
destroy_tuple(LineStyle) ;

/* solid line (default) */


create_tuple(&LineStyle,0) ;
T_set_line_style(WindowHandle,LineStyle) ;
destroy_tuple(LineStyle) ;

Result
set_line_style returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_line_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_line_style
Possible Successors
disp_region
See also
get_line_style, set_line_approx, disp_region
Module
Foundation

set_line_width ( Hlong WindowHandle, Hlong Width )


T_set_line_width ( const Htuple WindowHandle, const Htuple Width )

Define the line width for region contour output.


set_line_width defines the line width (in pixels) in which a region contour is displayed (e.g., with
disp_region, disp_line, disp_polygon, etc.). The procedure get_line_width returns the current
value for the window. Some output devices do not allow changing the contour width. Whether this is possible for
the current device can be queried with query_line_width.


Attention
The line width is important if the output mode was set to ’margin’ (see set_draw). If the line width is greater
than one, regions may not always be displayed correctly.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Line width for region output in contour mode.
Default Value : 1
Restriction : (Width ≥ 1) ∧ (Width ≤ 2000)
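Example

A minimal sketch, assuming an already opened window WindowHandle and an existing region Region: the same
contour is drawn twice with different widths.

set_draw(WindowHandle,"margin") ;
set_line_width(WindowHandle,1) ;
disp_region(Region,WindowHandle) ;
set_line_width(WindowHandle,5) ;     /* thicker contour for emphasis */
disp_region(Region,WindowHandle) ;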
Result
set_line_width returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_line_width is reentrant and processed without parallelization.
Possible Predecessors
query_line_width, get_line_width
Possible Successors
disp_region
See also
get_line_width, query_line_width, set_draw, disp_region
Module
Foundation

T_set_paint ( const Htuple WindowHandle, const Htuple Mode )

Define the grayvalue output mode.


set_paint defines the output mode for gray value display (single- or multichannel) in the window. The mode
is used by disp_obj, disp_image, and disp_color.
This section describes the different modes that can be used for gray value output. It should be noted that the mode
’default’ is the most suitable in almost all cases.
The hardware characteristics determine how gray values can be displayed. On a screen with one to seven bit planes,
only binary data can be displayed. On screens with at least eight bit planes, it is possible to display multiple gray
values. For binary displays, HALCON includes algorithms using a dithering matrix (fast, but low resolution),
minimal error (good, but slow) and thresholding. Using the thresholding algorithm, the threshold can be passed as
a second parameter (a tuple with the string ’threshold’ and the actual threshold, e.g.: [’threshold’, 100]).
Displays with eight bit planes use approximately 200 gray values for output. Of course it is still possible to use a
binary display on those displays.
A different way to display gray values is the histogram (mode: ’histogram’). This mode has three additional
parameter values: row (second value), column (third value), and scale (fourth value). Row and column denote the
position of the histogram center on the screen. The scale factor determines the histogram size: a scale factor
of 1 distinguishes 256 grayvalues, 2 distinguishes 128 gray values, 3 distinguishes 64 gray values, and so on. The
four values are passed as a tuple, e.g. [’histogram’,256,256,1]. If only the first value is passed (’histogram’), the
other values are set to defaults or the last values, respectively. For histogram computation see gray_histo.
Histogram output honors the same parameters as procedures like disp_region etc. (e.g. set_color,
set_draw, etc.)
Yet another mode is the display of the relative frequencies of the number of connected components
(’component_histogram’). For information on computing the component histogram see shape_histo_all.
Positioning and resolution are exactly as in the mode ’histogram’.
In mode ’mean’, all object regions are displayed in their mean gray value.
The modes ’row’ and ’column’ allow the display of lines or columns, respectively. The position (row or column
index) is passed with the second parameter value. The third parameter value is the scale factor in percent (100
means 1 pixel per grayvalue, 50 means one pixel per two gray values).


Gray images can also be interpreted as 3D data, with the grayvalue serving as the height. To view such 3D plots,
select the modes ’contourline’, ’3D-plot’ or ’3D-plot_hidden’.
Three-channel images are interpreted as RGB images. They can be displayed in three different modes. Two of
them can be optimized by Floyd-Steinberg dithering.
Vector field images can be viewed as ’vector_field’.
All available painting modes can be queried with query_paint.
Parameters for modes that need more than one parameter can be passed in the following ways:

• Only the name of the mode is passed: the defaults or the most recently used values are used, respectively.
Example: set_paint(WindowHandle,’contourline’)
• All values are passed: all output characteristics can be set. Example: set_paint
(WindowHandle,[’contourline’,10,1])
• Only the first n values are passed: only the passed values are changed. Example: set_paint
(WindowHandle,[’contourline’,10])
• Some of the values are replaced by an asterisk (’*’): The value of the replaced parameters is not changed.
Example: set_paint(WindowHandle,[’contourline’,’*’,1])

If the current mode is ’default’, HALCON chooses a suitable algorithm for the output of 2- and 3-channel images.
No set_paint call is necessary in this case.
Apart from set_paint there are other operators that affect the output of grayvalues. The most important of
them are set_part, set_part_style, set_lut and set_lut_style. Some output modes display
grayvalues using region output (e.g. ’histogram’, ’contourline’, ’3D-plot’, etc.). In these modes, parameters set with
set_color, set_rgb, set_hsi, set_pixel, set_shape, set_line_width and set_insert
influence grayvalue output. This can lead to unexpected results when using set_shape(’convex’) and
set_paint(WindowHandle,’histogram’). Here the convex hull of the histogram is displayed.
Modes:

• one-channel images:
’default’ optimal display on given hardware
’gray’ grayvalue output
’mean’ mean grayvalue
’dither4_1’ binary image, dithering matrix 4x4
’dither4_2’ binary image, dithering matrix 4x4
’dither4_3’ binary image, dithering matrix 4x4
’dither8_1’ binary image, dithering matrix 8x8
’floyd_steinberg’ binary image, optimal grayvalue simulation
[’threshold’,Threshold ]
’threshold’ binary image, threshold: 128 (default)
[’threshold’,200 ] binary image, any threshold: (here: 200)
[’histogram’,Line,Column,Scale ]
’histogram’ grayvalue output as histogram.
position default: max. size, in the window center
[’histogram’,256,256,2 ] grayvalue output as histogram, any parameter values.
positioning: window center (here (256,256))
size: (here 2, half the max. size)
[’component_histogram’,Line,Column,Scale ]
’component_histogram’ output as histogram of the connection components.
Positioning: default
[’component_histogram’,256,256,1 ] output as histogram of the connection components.
Positioning: (here (256, 256))
Scaling: (here 1, max. size)
[’row’,Line,Scale ]
’row’ output of the grayvalue profile along the given line.
line: image center (default)
Scaling: 50


[’row’,100,20 ] output of the grayvalue profile of line 100 with a scaling of 0.2 (20 %).
[’column’,Column,Scale ]
’column’ output of the grayvalue profile along the given column.
column: image center (default)
Scaling: 50
[’column’,100,20 ] output of the grayvalue profile of column 100 with a scaling of 0.2 (20 %).
[’contourline’,Step,Colored ]
’contourline’ grayvalue output as contour lines: the grayvalue difference per line is defined with the
parameter ’Step’ (default: 30, i.e. max. 8 lines for 256 grayvalues). The line can be displayed in
a given color (see set_color) or in the grayvalue they represent. This behaviour is defined with the
parameter ’Colored’ (0 = color, 1 = grayvalues). Default is color.
[’contourline’,15,1 ] grayvalue output as contour lines with a step of 15 and gray output.
[’3D-plot’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, LinePos, ColumnPos]
’3D-plot’ grayvalues are interpreted as 3D data: the greater the value, the ’higher’ the assumed moun-
tain. Lines with a step width of 2 (second parameter value) are drawn along the x- and y-axes. The third
parameter (Colored) determines whether the output should be in color (default) or in grayvalues. To define
the projection of the 3D data, use the parameters EyeHeight and EyeDistance. The projection parameters
take values from 0 to 255. ScaleGray defines a factor by which the grayvalues are multiplied for the
’height’ interpretation (given in percent; 100 corresponds to a factor of 1). Depending on EyeHeight and
EyeDistance the image can be shifted out of place; use RowPos and ColumnPos to move the whole output.
Values from -127 to 127 are possible.
[’3D-plot’, 5, 1, 110, 160, 150, 70, -10 ] line step: 5 pixels
Colored: yes (1)
EyeHeight: 110
EyeDistance: 160
ScaleGray: 1.5 (150)
RowPos: 70 pixels down
ColumnPos: 10 pixels right
[’3D-plot_hidden’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, LinePos, ColumnPos]
’3D-plot_hidden’ like ’3D-plot’, but computes hidden lines.
• Two-channel images:
’default’ output the first channel.
• Three-channel images:
’default’ output as RGB image with ’median_cut’.
’television’ color addition algorithm for RGB images: (three components necessary for disp_image).
Images are displayed via a fixed color lookup table. Fast, but non-optimal color resolution. Only recom-
mended on bright screens.
’grid_scan’ grid-scan algorithm for RGB images (three components necessary for disp_image). An
optimized color lookup table is generated for each image. Slower than ’television’. Disadvantages:
Hard color boundaries (no dithering). Different color lookup table for every image.
’grid_scan_floyd_steinberg’ grid-scan with Floyd-Steinberg dithering for smooth color boundaries.
’median_cut’ median-cut algorithm for RGB images (three components necessary for disp_image).
Similar to grid-scan. Disadvantages: Hard color boundaries (no dithering). Different color lookup table
for every image.
’median_cut_floyd_steinberg’ median-cut algorithm with Floyd-Steinberg dithering for smooth color
boundaries.
• Vector field images:
[’vector_field’, Step, MinLength, ScaleLength ]
’vector_field’ Output a vector field. In this mode, a circle is drawn for each vector at the position of
the pixel. Furthermore, a line segment is drawn with the current vector. The step size for drawing
the vectors, i.e., the distance between the drawn vectors, can be set with the parameter Step. Short
vectors can be suppressed with the third parameter value (MinLength). The fourth parameter value
scales the vector length. It should be noted that by setting ’vector_field’ only the internal param-
eters Step, MinLength, and ScaleLength are changed. The current display mode is not changed.
Vector field images are always displayed as vector field, no matter which mode is selected with
set_paint.


[’vector_field’,16,2,3 ] Output of every 16th vector that is longer than 2 pixels. Each vector is multiplied
by 3 for output.

Attention

• Display of color images (’television’, ’grid_scan’, etc.) changes the color lookup tables.
• If a wrong color mode is set, the error message may not appear until the disp_image call.
• Grayvalue output may be influenced by region output parameters. This can yield unexpected results.

Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char * / Hlong
Output mode. Additional parameters possible.
Default Value : "default"
List of values : Mode ∈ {"default", "histogram", "row", "column", "contourline", "3D-plot",
"3D-plot_hidden", "3D-plot_point", "vector_field"}
Example

Htuple Modi,HilfsTuple1,HilfsTuple2,HilfsTuple3,WindowHandleTuple ;
Hlong Row,Column,Button ;

create_tuple(&HilfsTuple1,1) ;
create_tuple(&HilfsTuple2,2) ;
create_tuple(&HilfsTuple3,3) ;
create_tuple(&WindowHandleTuple,1) ;

read_image(&Image,"fabrik") ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_query_paint(WindowHandleTuple,&Modi) ;
T_fwrite_string(Modi) ;
fnew_line() ;
disp_image(Image,WindowHandle) ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;

set_s(HilfsTuple1,"red",0) ;
T_set_color(WindowHandleTuple,HilfsTuple1) ;
set_draw(WindowHandle,"margin") ;
set_s(HilfsTuple1,"histogram",0) ;
T_set_paint(WindowHandleTuple,HilfsTuple1) ;
disp_image(Image,WindowHandle) ;
set_s(HilfsTuple1,"blue",0) ;
T_set_color(WindowHandleTuple,HilfsTuple1) ;

set_s(HilfsTuple3,"histogram",0) ;
set_i(HilfsTuple3,100,1) ;
set_i(HilfsTuple3,100,2) ;
T_set_paint(WindowHandleTuple,HilfsTuple3) ;
disp_image(Image,WindowHandle) ;
set_s(HilfsTuple1,"yellow",0) ;
T_set_color(WindowHandleTuple,HilfsTuple1) ;

set_s(HilfsTuple2,"row",0) ;
set_i(HilfsTuple2,100,1) ;
T_set_paint(WindowHandleTuple,HilfsTuple2) ;
disp_image(Image,WindowHandle) ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
clear_window(WindowHandle) ;

set_s(HilfsTuple3,"contourline",0) ;
set_i(HilfsTuple3,10,1) ;
set_i(HilfsTuple3,1,2) ;
T_set_paint(WindowHandleTuple,HilfsTuple3) ;
disp_image(Image,WindowHandle) ;
set_lut(WindowHandle,"color") ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
clear_window(WindowHandle) ;
set_part(WindowHandle,100,100,300,300) ;
set_s(HilfsTuple1,"3D-plot",0) ;
T_set_paint(WindowHandleTuple,HilfsTuple1) ;
disp_image(Image,WindowHandle) ;

Result
set_paint returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_paint is reentrant, local, and processed without parallelization.
Possible Predecessors
query_paint, get_paint
Possible Successors
disp_image
See also
get_paint, query_paint, disp_image, set_shape, set_rgb, set_color, set_gray
Module
Foundation

set_part ( Hlong WindowHandle, Hlong Row1, Hlong Column1, Hlong Row2,
Hlong Column2 )

T_set_part ( const Htuple WindowHandle, const Htuple Row1,
const Htuple Column1, const Htuple Row2, const Htuple Column2 )

Modify the displayed image part.


set_part modifies the image part that is displayed in the window. (Row1,Column1) denotes the upper left
corner and (Row2,Column2) the lower right corner of the image part to display. The changed values are used by
grayvalue output operators ( disp_image, disp_color) as well as region output operators ( disp_region).
If only part of an image is displayed, it will be zoomed to full window size. The zooming interpolation method
can be set with set_part_style. get_part returns the values of the image part to display.
Besides setting the image part directly, the following special modes are supported:

Row1 = Column1 = Row2 = Column2 = -1: The window size is chosen as the image part, i.e., no zooming of
the image will be performed.
Row1, Column1 > -1 and Row2 = Column2 = -1: The size of the last displayed image (in this window) is
chosen as the image part, i.e., the image is displayed completely in the window. For this the image
will be zoomed if necessary.

Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Row of the upper left corner of the chosen image part.
Default Value : 0


. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column of the upper left corner of the chosen image part.
Default Value : 0
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong
Row of the lower right corner of the chosen image part.
Default Value : -1
Restriction : (Row2 ≥ Row1) ∨ (Row2 = -1)
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x ; Hlong
Column of the lower right corner of the chosen image part.
Default Value : -1
Restriction : (Column2 ≥ Column1) ∨ (Column2 = -1)
Example

get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Height-1,Width-1) ;
disp_image(Image,WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;

Result
set_part returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
set_part is reentrant and processed without parallelization.
Possible Predecessors
get_part
Possible Successors
set_part_style, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part, set_part_style, disp_region, disp_image, disp_color
Module
Foundation

set_part_style ( Hlong WindowHandle, Hlong Style )


T_set_part_style ( const Htuple WindowHandle, const Htuple Style )

Define an interpolation method for grayvalue output.


set_part_style defines the interpolation method used to zoom an image part that is displayed in the window.
Interpolation takes place if the output window has a different size than the image to display (e.g., after a call to
set_part or a window resize). Three modes are supported:

0 no interpolation (low quality, very fast).


1 unweighted interpolation (medium quality and run time)
2 weighted interpolation (high quality, slow)

The current value can be queried with get_part_style.


Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Style (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Interpolation method for image output: 0 (fast, low quality) to 2 (slow, high quality).
Default Value : 0
List of values : Style ∈ {0, 1, 2}
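Example

A minimal sketch, assuming an already opened window WindowHandle and an image Image that has already
been read: a part of the image is zoomed once with the fast default and once with weighted interpolation.

set_part(WindowHandle,100,100,300,300) ;
set_part_style(WindowHandle,0) ;   /* no interpolation (fast)                */
disp_image(Image,WindowHandle) ;
set_part_style(WindowHandle,2) ;   /* weighted interpolation (high quality)  */
disp_image(Image,WindowHandle) ;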
Result
set_part_style returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception handling is raised.
Parallelization Information
set_part_style is reentrant and processed without parallelization.
Possible Predecessors
get_part_style
Possible Successors
set_part, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part_style, set_part, disp_image, disp_color
Module
Foundation

set_pixel ( Hlong WindowHandle, Hlong Pixel )


T_set_pixel ( const Htuple WindowHandle, const Htuple Pixel )

Define a color lookup table index.


set_pixel sets pixel values: colors ( set_color, set_rgb, etc.) and grayvalues ( set_gray) are coded
together into a number, called pixel. This ’pixel’ is an index into the color lookup table. It ranges from 0 to 1 for
b/w images and from 0 to 255 for color images with 8 bit planes. It is different from the ’pixel’ ("picture element")
in image processing. Therefore HALCON distinguishes between pixel and image element (or grayvalue).
The current value can be queried with get_pixel.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. Pixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Color lookup table index.
Default Value : 128
Typical range of values : 0 ≤ Pixel ≤ 255
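Example

A minimal sketch, assuming an already opened window WindowHandle and an existing region Region: lookup
table index 200 is selected for the following region output.

set_pixel(WindowHandle,200) ;
disp_region(Region,WindowHandle) ;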
Result
set_pixel returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_pixel is reentrant, local, and processed without parallelization.
Possible Predecessors
get_pixel
Possible Successors
disp_image, disp_region
Alternatives
set_rgb, set_color, set_hsi


See also
get_pixel, set_lut, disp_region, disp_image, disp_color
Module
Foundation

set_rgb ( Hlong WindowHandle, Hlong Red, Hlong Green, Hlong Blue )


T_set_rgb ( const Htuple WindowHandle, const Htuple Red,
const Htuple Green, const Htuple Blue )

Set the color definition via RGB values.


set_rgb sets the output color(s) or the grayvalues, respectively, for region output for the window. The colors are
defined with the red, green and blue components. If only one combination is passed, all output takes place in that
color. If a tuple is passed, region output and output of geometric objects takes place modulo the passed colors.
For every call of an output procedure, output is started with the first color. If only one object is displayed per call,
it will always be displayed in the first color. This is even true for objects with multiple connection components.
If multiple objects are displayed per procedure call, multiple colors are used. The defined colors are used until
set_color, set_pixel, set_rgb or set_gray is called again. The values are used by procedures like
disp_region, disp_line, disp_rectangle1, disp_rectangle2, disp_arrow, etc.
Attention
If a passed color is not available, an exception handling is raised. If set_check(’~color’) was called before,
HALCON uses a similar color and suppresses the error.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. Red (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Red component of the color.
Default Value : 255
Typical range of values : 0 ≤ Red ≤ 255
Restriction : (0 ≤ Red) ∧ (Red ≤ 255)
. Green (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Green component of the color.
Default Value : 0
Typical range of values : 0 ≤ Green ≤ 255
Restriction : (0 ≤ Green) ∧ (Green ≤ 255)
. Blue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Blue component of the color.
Default Value : 0
Typical range of values : 0 ≤ Blue ≤ 255
Restriction : (0 ≤ Blue) ∧ (Blue ≤ 255)
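Example

A minimal sketch, assuming an already opened window WindowHandle and an existing region Region: the
region is displayed first in pure red and then in yellow.

set_rgb(WindowHandle,255,0,0) ;      /* red                 */
disp_region(Region,WindowHandle) ;
set_rgb(WindowHandle,255,255,0) ;    /* yellow (red+green)  */
disp_region(Region,WindowHandle) ;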
Result
set_rgb returns H_MSG_TRUE if the window is valid and all passed colors are available and displayable.
Otherwise an exception handling is raised.
Parallelization Information
set_rgb is reentrant, local, and processed without parallelization.
Possible Successors
disp_image, disp_region
Alternatives
set_hsi, set_color, set_gray
See also
set_fix, disp_region
Module
Foundation


set_shape ( Hlong WindowHandle, const char *Shape )


T_set_shape ( const Htuple WindowHandle, const Htuple Shape )

Define the region output shape.


set_shape defines the shape for region output. It is only valid for the window with the logical window number
WindowHandle. The output shape is used by disp_region. The available shapes can be queried with
query_shape.
Available modes:

’original’: The shape is displayed unchanged. Nevertheless modifications via parameters like set_line_width or
set_line_approx can take place. This is also true for all other modes.
’outer_circle’: Each region is displayed by the smallest surrounding circle. (See smallest_circle.)
’inner_circle’: Each region is displayed by the largest included circle. (See inner_circle.)
’ellipse’: Each region is displayed by an ellipse with the same moments and orientation (See elliptic_axis.)
’rectangle1’: Each region is displayed by the smallest surrounding rectangle parallel to the coordinate axes. (See
smallest_rectangle1.)
’rectangle2’: Each region is displayed by the smallest surrounding rectangle. (See smallest_rectangle2.)
’convex’: Each region is displayed by its convex hull (See convexity.)
’icon’: Each region is displayed by the icon set with set_icon at the center of gravity.

Attention
Caution is advised for grayvalue output procedures with output parameter settings that use region out-
put, e.g. disp_image with set_paint(WindowHandle,’histogram’) and set_shape
(WindowHandle,’convex’). In that case the convex hull of the grayvalue histogram is displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Region output mode.
Default Value : "original"
List of values : Shape ∈ {"original", "convex", "outer_circle", "inner_circle", "rectangle1", "rectangle2",
"ellipse", "icon"}
Example

read_image(&Image,"fabrik");
regiongrowing(Image,&Seg,5,5,6.0,100);
set_colored(WindowHandle,12);
set_shape(WindowHandle,"rectangle2");
disp_region(Seg,WindowHandle);

Result
set_shape returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an exception
handling is raised.
Parallelization Information
set_shape is reentrant and processed without parallelization.
Possible Predecessors
set_icon, query_shape, get_shape
Possible Successors
disp_region
See also
get_shape, query_shape, disp_region
Module
Foundation


4.7 Text
get_font ( Hlong WindowHandle, char *Font )
T_get_font ( const Htuple WindowHandle, Htuple *Font )

Get the current font.


get_font queries the name of the font used in the output window. The font is used by the operators
write_string, read_string etc. The font is set by the operator set_font. Text windows as well
as windows for image display use fonts. Both types of windows have a default font that can be modified with
set_system(’default_font’,Fontname) prior to opening the window. A list of all available fonts can
be obtained using query_font.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Font (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Name of the current font.
Example

get_font(WindowHandle,&CurrentFont) ;
set_font(WindowHandle,MyFont) ;
create_tuple(&String,1) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
sprintf(buf,"The name of my Font is: %s",MyFont) ;
set_s(String,buf,0) ;
T_write_string(WindowHandleTuple,String) ;
new_line(WindowHandle) ;
set_font(WindowHandle,CurrentFont) ;

Result
get_font returns H_MSG_TRUE.
Parallelization Information
get_font is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_font
Possible Successors
set_font
See also
set_font, query_font, open_window, open_textwindow, set_system
Module
Foundation

get_string_extents ( Hlong WindowHandle, const char *Values,
Hlong *Ascent, Hlong *Descent, Hlong *Width, Hlong *Height )

T_get_string_extents ( const Htuple WindowHandle, const Htuple Values,
Htuple *Ascent, Htuple *Descent, Htuple *Width, Htuple *Height )

Get the spatial size of a string.


get_string_extents queries the width and height of the output size of a string using the font of the window.
In addition, the extent above and below the current baseline for writing is returned (Ascent and Descent,
respectively). The sizes are measured in the coordinate system of the window (for text windows in pixels). Using
get_string_extents it is possible to position text output and input independently of the font used. The
conversion of integer and floating point numbers to text strings is the same as in write_string.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Values (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / double / Hlong
Values to consider.
Default Value : "test_string"
. Ascent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong *
Maximum height above baseline for writing.
. Descent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong *
Maximum extension below baseline for writing.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong *
Text width.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong *
Text height.
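Example

A minimal sketch, assuming an already opened window WindowHandle with a font already set: the string is
measured first so that it can be placed ending at column 500.

Hlong Ascent,Descent,Width,Height ;
get_string_extents(WindowHandle,"Hello",&Ascent,&Descent,&Width,&Height) ;
set_tposition(WindowHandle,24,500-Width) ;
write_string(WindowHandle,"Hello") ;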
Result
get_string_extents returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is
raised.
Parallelization Information
get_string_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tposition, write_string, read_string, read_char
See also
set_tposition, set_font
Module
Foundation

get_tposition ( Hlong WindowHandle, Hlong *Row, Hlong *Column )


T_get_tposition ( const Htuple WindowHandle, Htuple *Row,
Htuple *Column )

Get cursor position.


get_tposition queries the current position of the text cursor in the output window. The position is measured
in the coordinate system of the window (in pixels for text windows). The next output of text in this window starts
at the cursor position. The left end of the baseline for writing the next string (not considering descenders) is placed
at this position. The position is changed by the output or input of text ( write_string, read_string) or
by an explicit change of position ( set_tposition, new_line).
Attention
If the output text does not fit completely into the window, an exception handling is raised. This can be avoided by
calling set_check(’~text’).
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong *
Row index of text cursor position.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong *
Column index of text cursor position.
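Example

A minimal sketch, assuming an already opened text window WindowHandle with a font already set: the cursor
position is saved, a note is written, and the position is restored afterwards.

Hlong Row,Column ;
get_tposition(WindowHandle,&Row,&Column) ;
write_string(WindowHandle,"temporary note") ;
set_tposition(WindowHandle,Row,Column) ;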
Result
get_tposition returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.


Parallelization Information
get_tposition is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tposition, write_string, read_string, read_char
See also
new_line, read_string, set_tposition, write_string, set_check
Module
Foundation

get_tshape ( Hlong WindowHandle, char *TextCursor )


T_get_tshape ( const Htuple WindowHandle, Htuple *TextCursor )

Get the shape of the text cursor.


get_tshape queries the shape of the text cursor for the output window. A new cursor shape is set by the operator
set_tshape.
A text cursor marks the current position for text output (it can also be invisible). It is different from the mouse
cursor (although both will be called ’cursor’ if the context rules out confusion). The available
shapes for the text cursor can be queried with query_tshape.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. TextCursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Name of the current text cursor.
Result
get_tshape returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
get_tshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tshape, set_tposition, write_string, read_string, read_char
See also
set_tshape, query_tshape, write_string, read_string
Module
Foundation

new_line ( Hlong WindowHandle )


T_new_line ( const Htuple WindowHandle )

Set the position of the text cursor to the beginning of the next line.
new_line sets the position of the text cursor to the beginning of the next line. The new position depends on the
current font. The left end of the baseline for writing the following text string (not considering descenders) is placed
on this position.
If the next line does not fit into the window, the content of the window is scrolled up by the height of one line.
In order to reach the correct new cursor position, the font used in the next line must be set before new_line is
called. The position is changed by the output or input of text ( write_string, read_string) or by an
explicit change of position ( set_tposition).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
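Example

A minimal sketch, assuming an already opened text window WindowHandle with a font already set: two lines
of text are written.

set_tposition(WindowHandle,24,12) ;
write_string(WindowHandle,"first line") ;
new_line(WindowHandle) ;
write_string(WindowHandle,"second line") ;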
Result
new_line returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
new_line is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font, write_string
Alternatives
get_tposition, get_string_extents, set_tposition, move_rectangle
See also
write_string, set_font
Module
Foundation

T_query_font ( const Htuple WindowHandle, Htuple *Font )

Query the available fonts.


query_font queries the fonts available for text output in the output window. They can be set with the operator
set_font. Fonts are used by the operators write_string, read_char, read_string and new_line.
Attention
For different machines the available fonts may differ a lot. Therefore query_font will return different fonts on
different machines.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Font (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Tuple with available font names.
Example

open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_check("~text") ;
create_tuple(&Fontlist,1) ;
create_tuple(&String,1) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_query_font(WindowHandleTuple,&Fontlist) ;
set_color(WindowHandle,"white") ;
for(i=0; i<length_tuple(Fontlist); i++)
{
charstring = get_s(Fontlist,i) ;
set_font(WindowHandle,charstring) ;
set_s(String,charstring,0) ;
T_write_string(WindowHandleTuple,String) ;
new_line(WindowHandle) ;
}

Result
query_font returns H_MSG_TRUE.


Parallelization Information
query_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
set_font, write_string, read_string, read_char
See also
set_font, write_string, read_string, read_char, new_line
Module
Foundation

T_query_tshape ( const Htuple WindowHandle, Htuple *TextCursor )

Query all shapes available for text cursors.


query_tshape queries the available shapes of text cursors for the output window. The retrieved shapes can be
used by the operator set_tshape.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. TextCursor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of the available text cursors.
Result
query_tshape returns H_MSG_TRUE.
Parallelization Information
query_tshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
set_tshape, write_string, read_string
See also
set_tshape, get_shape, set_tposition, write_string, read_string
Module
Foundation

read_char ( Hlong WindowHandle, char *Char, char *Code )


T_read_char ( const Htuple WindowHandle, Htuple *Char, Htuple *Code )

Read a character from a text window.


read_char reads a character from the keyboard in the input window (= output window). If the character is
printable it is returned in Char. If a control key has been pressed, this will be indicated by the value of Code.
Some important keys are recognizable by this value. Possible values are:

’character’: printable character


’left’: cursor left
’right’: cursor right
’up’: cursor up
’down’: cursor down
’insert’: insert


’none’: none of these keys

Attention
The window has to be a text window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Char (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Input character (if it is not a control character).
. Code (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Code for input character.
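Example

A minimal sketch, assuming an already opened text window WindowHandle with a font already set (strcmp
from <string.h> is used): characters are read until ’q’ is typed.

char Char[128],Code[128] ;
do
{
  read_char(WindowHandle,Char,Code) ;
} while (strcmp(Code,"character") != 0 || Char[0] != 'q') ;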
Result
read_char returns H_MSG_TRUE if the text window is valid. Otherwise an exception handling is raised.
Parallelization Information
read_char is reentrant, local, and processed without parallelization.
Possible Predecessors
open_textwindow, set_font
Alternatives
read_string, fread_char, fread_string
See also
write_string, set_font
Module
Foundation

read_string ( Hlong WindowHandle, const char *InString, Hlong Length,
char *OutString )

T_read_string ( const Htuple WindowHandle, const Htuple InString,
const Htuple Length, Htuple *OutString )

Read a string in a text window.


read_string reads a string with a predetermined maximum size (Length) from the keyboard in the input
window (= output window). The string is read from the current position of the text cursor using the current font.
The maximum size has to be small enough to keep the string within the right window boundary. A default string
which can be edited or simply accepted by the user may be provided. After text input the text cursor is positioned
at the end of the edited string. Commands for editing:

RETURN finish input


BACKSPACE delete the character on the left side of the cursor and move the cursor to this position.

Attention
The window has to be a text window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. InString (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Default string (visible before input).
Default Value : ""
. Length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum number of characters.
Default Value : 32
Restriction : Length > 0
. OutString (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Read string.
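Example

A minimal sketch, assuming an already opened text window WindowHandle with a font already set: the user is
prompted for a name, with "user" offered as the editable default.

char Name[64] ;
set_tposition(WindowHandle,24,12) ;
write_string(WindowHandle,"Your name: ") ;
read_string(WindowHandle,"user",32,Name) ;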


Result
read_string returns H_MSG_TRUE if the text window is valid and a string of maximal length fits within the
right window boundary. Otherwise an exception handling is raised.
Parallelization Information
read_string is reentrant, local, and processed without parallelization.
Possible Predecessors
open_textwindow, set_font
Alternatives
read_char, fread_string, fread_char
See also
set_tposition, new_line, open_textwindow, set_font, set_color
Module
Foundation

set_font ( Hlong WindowHandle, const char *Font )


T_set_font ( const Htuple WindowHandle, const Htuple Font )

Set the font used for text output.


set_font sets the font for the output window. The font is used by the operators write_string,
read_string etc. A default font (which can be set via set_system(’default_font’,Fontname)) is
assigned when a window is opened. The assigned font can be changed with set_font. All available fonts can
be queried with query_font. Fonts are not used for file operations.
The syntax for the specification of a font (in Font) differs for UNIX and Windows environments: In Windows a
string with the following components is used:

-FontName-Height-Width-Italic-Underlined-Strikeout-Bold-CharSet-

where “Italic”, “Underlined”, “Strikeout” and “Bold” can take the values 1 and 0 to activate or de-
activate the corresponding feature. “Charset” can be used to select the character set, if it differs
from the default one. You can use the names of the defines (ANSI_CHARSET, BALTIC_CHARSET,
CHINESEBIG5_CHARSET, DEFAULT_CHARSET, EASTEUROPE_CHARSET, GB2312_CHARSET,
GREEK_CHARSET, HANGUL_CHARSET, MAC_CHARSET, OEM_CHARSET, RUSSIAN_CHARSET,
SHIFTJIS_CHARSET, SYMBOL_CHARSET, JOHAB_CHARSET, HEBREW_CHARSET, ARA-
BIC_CHARSET) or the integer value.
All parameters besides “FontName” and “Height” are optional; however, it is only possible to omit parameters from
the end of the string. At the beginning and the end of the string a minus sign is required. To use the default setting,
a * can be used for the corresponding feature. Examples:

• -Arial-10-*-1-*-*-1-ANSI_CHARSET-
• -Arial-10-*-1-*-*-1-
• -Arial-10-

Please refer to the Windows documentation (Fonts and Text in the MSDN) for a detailed discussion.
On UNIX environments the Font is specified by a string with the following components:
-FOUNDRY-FAMILY_NAME-WEIGHT_NAME-SLANT-SETWIDTH_NAME-ADD_STYLE_NAME-PIXEL_SIZE
-POINT_SIZE-RESOLUTION_X-RESOLUTION_Y-SPACING-AVERAGE_WIDTH-CHARSET_REGISTRY
-CHARSET_ENCODING,
where FOUNDRY identifies the organisation that supplied the Font. The actual name of Font is given in FAM-
ILY_NAME (e.g. ’courier’). WEIGHT_NAME describes the typographic weight of the Font in human readable
form (e.g. ’medium’, ’semibold’, ’demibold’, or ’bold’). SLANT is one of the following codes:

• r for Roman


• i for Italic
• o for Oblique
• ri for Reverse Italic
• ro for Reverse Oblique
• ot for Other

SET_WIDTH_NAME describes the proportionate width of the font (e.g. ’normal’). ADD_STYLE_NAME iden-
tifies additional typographic style information (e.g. ’serif’ or ’sans serif’) and is empty in most cases.
The PIXEL_SIZE is the height of the Font on the screen in pixels, while POINT_SIZE is the print size the Font
was designed for. RESOLUTION_Y and RESOLUTION_X contain the vertical and horizontal resolution of the
Font. SPACING may be one of the following three codes:

• p for Proportional,
• m for Monospaced, or
• c for CharCell.

The AVERAGE_WIDTH is the mean of the width of each character in Font. The character set encoded in Font
is described in CHARSET_REGISTRY and CHARSET_ENCODING (e.g. ISO8859-1).
An example of a valid string for Font would be
’-adobe-courier-medium-r-normal--12-120-75-75-m-70-iso8859-1’,
which is a 12px medium weighted courier font. As on Windows systems not all fields have to be specified and a *
can be used instead:
’-adobe-courier-medium-r-*--12-*-*-*-*-*-*-*’.
Please refer to "X Logical Font Description Conventions" for detailed information on individual parameters.
Attention
For different machines the available fonts may differ a lot. Therefore it is suggested to use wildcards, tables of
fonts and/or the operator query_font.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Font (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of new font.
Example (Syntax: HDevelop)

get_system (’operating_system’, OS)


if (OS{0:2} = ’Win’)
set_font (WindowHandle, ’-Courier New-18-*-*-*-*-1-’)
else
set_font (WindowHandle, ’-*-courier-bold-r-normal--22-*-*-*-*-*-iso8859-1’)
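
A corresponding C sketch (the font strings are only examples and may not be available on every system; the
buffer CurrentFont is assumed to be large enough):

char CurrentFont[512] ;
get_font(WindowHandle,CurrentFont) ;
set_font(WindowHandle,"-Courier New-18-*-*-*-*-1-") ;   /* Windows syntax */
/* set_font(WindowHandle,"-*-courier-bold-r-normal--22-*-*-*-*-*-iso8859-1") ;  UNIX syntax */
write_string(WindowHandle,"Hello") ;
set_font(WindowHandle,CurrentFont) ;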

Result
set_font returns H_MSG_TRUE if the font name is correct. Otherwise an exception handling is raised.
Parallelization Information
set_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
query_font
See also
get_font, query_font, open_textwindow, open_window
Module
Foundation


set_tposition ( Hlong WindowHandle, Hlong Row, Hlong Column )


T_set_tposition ( const Htuple WindowHandle, const Htuple Row,
const Htuple Column )

Set the position of the text cursor.


set_tposition sets the position of the text cursor in the output window. The reference position is the upper
left corner of an upper case character.
The position is measured in the image coordinate system. The position of the text cursor can be marked, e.g., by
an underscore. The next text output in this window starts at the cursor position. The left end of the baseline for
writing the following text string (not considering descenders) is placed on this position.
The position is changed by the output or input of text ( write_string, read_string) or by an explicit
change of position ( set_tposition, new_line). In order to stop the display of the cursor, the operator
set_tshape with the parameter ’invisible’ can be used.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row index of text cursor position.
Default Value : 24
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of text cursor position.
Default Value : 12
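Example

A minimal sketch, assuming an already opened text window WindowHandle with a font already set: a label is
written at a fixed position.

set_tposition(WindowHandle,100,50) ;
write_string(WindowHandle,"label at (100,50)") ;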
Result
set_tposition returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
set_tposition is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
set_tshape, write_string, read_string
Alternatives
new_line
See also
read_string, set_tshape, write_string
Module
Foundation

set_tshape ( Hlong WindowHandle, const char *TextCursor )


T_set_tshape ( const Htuple WindowHandle, const Htuple TextCursor )

Set the shape of the text cursor.


set_tshape sets the shape and the display mode of the text cursor of the output window.
A text cursor marks the current position for text output. It is different from the mouse cursor (although both will
be called ’cursor’ if the context rules out confusion). The available shapes for the text cursor can
be queried with query_tshape.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.


. TextCursor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Name of cursor shape.
Default Value : "invisible"
Result
set_tshape returns H_MSG_TRUE if the window is valid and the given cursor shape is defined for this window.
Otherwise an exception handling is raised.
Parallelization Information
set_tshape is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_tshape, get_tshape
Possible Successors
write_string, read_string
See also
get_tshape, query_tshape, write_string, read_string
Module
Foundation

write_string ( Hlong WindowHandle, const char *String )


T_write_string ( const Htuple WindowHandle, const Htuple String )

Print text in a window.


write_string prints String in the output window starting at the current cursor position. The output text has
to fit within the right window boundary (the width of the string can be queried by get_string_extents).
The font currently assigned to the window will be used. The text cursor is positioned at the end of the text.
write_string can output all three types of data used in HALCON. The conversion to a string is guided by the
following rules:

• strings are not converted.


• integer numbers are converted without any spaces before or after the number.
• floating point numbers are printed (if possible) with a decimal point and without an exponent.
• the resulting strings are concatenated without spaces.

For buffering of texts see set_system with the flag ’flush_graphic’.


Attention
If clipping at the window boundary is desired, exceptions can be switched off by set_check(’~text’).
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. String (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong / double
Tuple of output values (all types).
Default Value : "hello"
Result
write_string returns H_MSG_TRUE if the window is valid and the output text fits within the current line (see
set_check). Otherwise an exception handling is raised.
Parallelization Information
write_string is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font, get_string_extents
Alternatives
fwrite_string


See also
set_tposition, get_string_extents, open_textwindow, set_font, set_system,
set_check
Module
Foundation

4.8 Window
clear_rectangle ( Hlong WindowHandle, Hlong Row1, Hlong Column1,
Hlong Row2, Hlong Column2 )

T_clear_rectangle ( const Htuple WindowHandle, const Htuple Row1,
const Htuple Column1, const Htuple Row2, const Htuple Column2 )

Delete a rectangle on the output window.


clear_rectangle deletes all entries in the rectangle which is defined by the upper left corner
(Row1,Column1) and the lower right corner (Row2,Column2). Deletion means that the specified rectangle
is set to the background color (see open_window, open_textwindow).
If you want to delete more than one rectangle, you may pass several rectangles, i.e., the parameters Row1,
Column1, Row2 and Column2 are tuples.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong
Line index of upper left corner.
Default Value : 10
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong
Column index of upper left corner.
Default Value : 10
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong
Row index of lower right corner.
Default Value : 118
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row2 > Row1
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong
Column index of lower right corner.
Default Value : 118
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column2 ≥ Column1
Example

/* "Interactiv" erase of a rectangle in output window */


draw_rectangle1(WindowHandle,&L1,&C1,&L2,&C2) ;
clear_rectangle(WindowHandle,L1,C1,L2,C2) ;


Result
If an output window exists and the specified parameters are correct clear_rectangle returns H_MSG_TRUE.
If necessary an exception handling is raised.
Parallelization Information
clear_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
draw_rectangle1
Alternatives
clear_window, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation

clear_window ( Hlong WindowHandle )


T_clear_window ( const Htuple WindowHandle )

Delete an output window.


clear_window deletes all entries in the output window. The window (background and border) is reset to its
original state. Parameters assigned to this window (e.g., with set_color, set_paint, etc.) remain unmodified.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

clear_window(WindowHandle) ;

Result
If the output window is valid clear_window returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
clear_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
clear_rectangle, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation

close_window ( Hlong WindowHandle )


T_close_window ( const Htuple WindowHandle )

Close an output window.


close_window closes a window that has been opened by open_window or by open_textwindow.
Afterwards the output device or the window area, respectively, is ready to accept new calls of open_window or
open_textwindow.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
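Example

A minimal usage sketch (window size and the displayed rectangle are merely illustrative): open a window,
display something, and close the window again.

open_window(0,0,256,256,0,"visible","",&WindowHandle) ;
disp_rectangle1(WindowHandle,20.0,20.0,100.0,100.0) ;
close_window(WindowHandle) ;
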
Result
If the output window is valid close_window returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
close_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
open_window, open_textwindow
Module
Foundation

copy_rectangle ( Hlong WindowHandleSource,


Hlong WindowHandleDestination, Hlong Row1, Hlong Column1, Hlong Row2,
Hlong Column2, Hlong DestRow, Hlong DestColumn )

T_copy_rectangle ( const Htuple WindowHandleSource,


const Htuple WindowHandleDestination, const Htuple Row1,
const Htuple Column1, const Htuple Row2, const Htuple Column2,
const Htuple DestRow, const Htuple DestColumn )

Copy all pixels within rectangles between output windows.


copy_rectangle copies all pixels from the window with the logical window number
WindowHandleSource to the window with the logical window number WindowHandleDestination. It
copies the pixels that reside inside the rectangle specified by the parameters Row1, Column1, Row2 and
Column2. The target position is specified by the upper left corner of the rectangle (DestRow, DestColumn).
If you want to copy more than one rectangle, you may pass them at once (in tuple mode).
You may use copy_rectangle to copy edited graphics from an ’invisible’ window into a visible window.
To do so, a window with the option ’buffer’ is opened. The graphics is then displayed in this window and is
copied into a visible window afterwards. The advantage of this strategy is that copy_rectangle is much
faster than output procedures such as disp_channel. This is a particular advantage for demo programs. You
can even realize short ’clips’: for every image of a sequence a window of type ’buffer’ is created and the data is
passed into it. The output is the image sequence, in which all buffers are copied one after another into a visible
window.
Attention
Both windows have to reside on the same computer.
Parameter

. WindowHandleSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Number of the source window.
. WindowHandleDestination (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Number of the destination window.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong
Row index of upper left corner in the source window.
Default Value : 0
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1


. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong


Column index of upper left corner in the source window.
Default Value : 0
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong
Row index of lower right corner in the source window.
Default Value : 128
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row2 ≥ Row1
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong
Column index of lower right corner in the source window.
Default Value : 128
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column2 ≥ Column1
. DestRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row index of upper left corner in the target window.
Default Value : 0
Typical range of values : 0 ≤ DestRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. DestColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column index of upper left corner in the target window.
Default Value : 0
Typical range of values : 0 ≤ DestColumn ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Example

read_image(Image,"affe") ;
open_window(0,0,-1,-1,"root","buffer","",&WindowHandle) ;
disp_image(Image,WindowHandle) ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandleDestination) ;
do{
get_mbutton(WindowHandleDestination,&Row,&Column,&Button) ;
copy_rectangle(WindowHandle,WindowHandleDestination,90,120,390,390,Row,Column) ;
}
while(Button > 1) ;
close_window(WindowHandleDestination) ;
close_window(WindowHandle) ;
clear_obj(Image) ;

Result
If the output window is valid and if the specified parameters are correct copy_rectangle returns H_MSG_TRUE.
If necessary an exception handling is raised.
Parallelization Information
copy_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
close_window


Alternatives
move_rectangle, slide_image
See also
open_window, open_textwindow
Module
Foundation

dump_window ( Hlong WindowHandle, const char *Device,


const char *FileName )

T_dump_window ( const Htuple WindowHandle, const Htuple Device,


const Htuple FileName )

Write the window content to a file.


dump_window writes the content of the window to a file. This file can then be processed further by printers or
other programs. The content of the window is prepared for the specified device (Device), i.e., it is formatted in
such a manner that the file can either be printed directly or be processed further by a graphics program.
To transform gray values the current color table of the window is used, i.e., the values of set_lut_style
remain unconsidered.
Possible values for Device

’postscript’: PostScript file.
File extension: ’.ps’
’postscript’,Width,Height: PostScript file with specification of the output size. Width and Height refer to this
size. In this case a tuple with three values is passed as Device.
File extension: ’.ps’
’tiff’: TIFF file, 1 byte per pixel incl. current color table or 3 bytes per pixel (depending on the VGA card),
uncompressed.
File extension: ’.tiff’
’bmp’: Windows-BMP format, RGB image, 3 bytes per pixel. The color resolution depends on the VGA card.
File extension: ’.bmp’
’jpeg’: JPEG format (lossy compression); together with the format string the quality value determining the
compression rate can be provided, e.g., ’jpeg 30’ (see the sketch after this list).
File extension: ’.jpg’
’jp2’: JPEG2000 format (lossless and lossy compression); together with the format string the quality value
determining the compression rate can be provided (e.g., ’jp2 40’). This value corresponds to the ratio of the size
of the compressed image and the size of the uncompressed image (in percent). As lossless JPEG2000 compression
already reduces the file size significantly, only smaller values (typically smaller than 50) influence
the file size. If no value is provided (and only then), the image is compressed losslessly.
File extension: ’.jp2’
’png’: PNG format (lossless compression); together with the format string, a compression level between 0 and
9 can be specified, where 0 corresponds to no compression and 9 to the best possible compression. Alternatively,
the compression can be selected with the following strings: ’best’, ’fastest’, and ’none’. Hence,
examples for correct parameters are ’png’, ’png 7’, and ’png none’.
File extension: ’.png’
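For instance, a lossy JPEG dump with an explicit quality value might be written as follows (quality value and
file name are merely illustrative):

dump_window(WindowHandle,"jpeg 30","/tmp/window_snapshot") ;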

Attention
Under UNIX, the graphics window must be completely visible on the root window, because otherwise the contents
of the window cannot be read due to limitations in X Windows. If larger graphical displays are to be written to a
file, the window type ’pixmap’ can be used.


Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. Device (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong
Name of the target device or of the graphic format.
Default Value : "postscript"
List of values : Device ∈ {"postscript", "tiff", "bmp", "jpeg", "jp2", "png", "jpeg 100", "jpeg 80", "jpeg 60",
"jpeg 40", "jpeg 20", "jp2 50", "jp2 40", "jp2 30", "jp2 20", "png best", "png fastest", "png none"}
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; (Htuple .) const char *
File name (without extension).
Default Value : "halcon_dump"
Example

/* PostScript dump of image and regions */


disp_image(Image,WindowHandle) ;
set_colored(WindowHandle,12) ;
disp_region(Regions,WindowHandle) ;
dump_window(WindowHandle,"postscript","/tmp/halcon_dump") ;
system_call("lp -d ps /tmp/halcon_dump.ps") ;

Result
If the appropriate window is valid and the specified parameters are correct dump_window returns
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
dump_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow,
disp_region
Possible Successors
system_call
See also
open_window, open_textwindow, set_system, dump_window_image
Module
Foundation

dump_window_image ( Hobject *Image, Hlong WindowHandle )


T_dump_window_image ( Hobject *Image, const Htuple WindowHandle )

Write the window content to an image object.


dump_window_image writes the content of the graphics window (WindowHandle) to an image (Image). To
transform gray values the current color table of the window is used, i.e., the values of set_lut_style remain
unconsidered.
Attention
Under UNIX, the graphics window must be completely visible on the root window, because otherwise the contents
of the window cannot be read due to limitations in X Windows.
Parameter

. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte


Saved image.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
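Example

A minimal usage sketch (output format and file name are merely illustrative): grab the window content into an
image object and save it to disk.

dump_window_image(&Image,WindowHandle) ;
write_image(Image,"tiff",0,"/tmp/window_content") ;
clear_obj(Image) ;
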


Result
If the window is valid dump_window_image returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
dump_window_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow,
disp_region
See also
open_window, open_textwindow, set_system, dump_window
Module
Foundation

get_os_window_handle ( Hlong WindowHandle, Hlong *OSWindowHandle,


Hlong *OSDisplayHandle )

T_get_os_window_handle ( const Htuple WindowHandle,


Htuple *OSWindowHandle, Htuple *OSDisplayHandle )

Get the operating system window handle.


get_os_window_handle returns the operating system window handle of the HALCON window
WindowHandle in OSWindowHandle. Under UNIX, additionally the operating system display handle is re-
turned in OSDisplayHandle. The operating system window handle can be used to access the window using
functions from the operating system, e.g., to draw in a user-defined manner into the window. Under Windows,
OSWindowHandle can be cast to a variable of type HWND. Under UNIX systems, OSWindowHandle can be
cast into a variable of type Window, while OSDisplayHandle can be cast into a variable of type Display.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Window identifier.
. OSWindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Operating system window handle.
. OSDisplayHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Operating system display handle (under UNIX only).
Example

/* Draw a line into a HALCON window under UNIX using X11 calls. */
#include "HalconC.h"
#include <X11/X.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)


{
long hwin, win, disp;
Display *display;
Window window;
GC gc;
XGCValues values;
static char dashes[] = { 20, 20 };

open_window(0, 0, 500, 500, 0, "visible", "", &hwin);


get_os_window_handle(hwin, &win, &disp);
display = (Display *)disp;
window = (Window)win;
gc = XCreateGC(display, window, 0, &values);


XSetFunction(display, gc, GXset);


XSetLineAttributes(display, gc, 10, LineOnOffDash, CapRound, JoinRound);
XSetDashes(display, gc, 0, dashes, 2);
XSetForeground(display, gc, WhitePixel(display, DefaultScreen(display)));
XSetBackground(display, gc, BlackPixel(display, DefaultScreen(display)));
XDrawLine(display, win, gc, 20, 20, 480, 480);
XFlush(display);
XFreeGC(display, gc);
wait_seconds(5);
return 0;
}

/* Draw a line into a HALCON window under Windows using GDI calls. */
#include "HalconC.h"
#include "windows.h"

int main(int argc, char **argv)


{
long hwin, win, disp;
HDC hdc;
HPEN hpen;
HPEN hpen_old;
LOGBRUSH logbrush;
POINT point;
static DWORD dashes[] = { 20, 20 };

open_window(0, 0, 500, 500, 0, "visible", "", &hwin);


get_os_window_handle(hwin, &win, &disp);
logbrush.lbColor = RGB(255,255,255);
logbrush.lbStyle = BS_SOLID;
hpen = ExtCreatePen(PS_USERSTYLE|PS_GEOMETRIC, 10, &logbrush, 2, dashes);
hdc = GetDC((HWND)win);
hpen_old = (HPEN)SelectObject(hdc, hpen);
MoveToEx(hdc, 20, 20, &point);
LineTo(hdc, 480, 480);
DeleteObject(SelectObject(hdc, hpen_old));
ReleaseDC((HWND)win, hdc);
wait_seconds(5);
return 0;
}

Result
If the window is valid get_os_window_handle returns H_MSG_TRUE. Otherwise, an exception handling
is raised.
Parallelization Information
get_os_window_handle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Module
Foundation


get_window_attr ( const char *AttributeName, char *AttributeValue )


T_get_window_attr ( const Htuple AttributeName,
Htuple *AttributeValue )

Get window characteristics.


The operator get_window_attr can be used to read characteristics of graphics windows that were set using
set_window_attr. The following parameters of a window may be queried:
’border_width’ Width of the window border in pixels.
’border_color’ Color of the window border.
’background_color’ Background color of the window.
’window_title’ Name of the window in the titlebar.
Parameter
. AttributeName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the attribute that should be returned.
List of values : AttributeName ∈ {"border_width", "border_color", "background_color", "window_title"}
. AttributeValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char * / Hlong *
Attribute value.
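Example

A minimal usage sketch (title string and buffer size are merely illustrative; set_window_attr is assumed to
take the attribute name and value as strings):

char Title[1024];

set_window_attr("window_title","inspection window") ;
open_window(0,0,256,256,0,"visible","",&WindowHandle) ;
get_window_attr("window_title",Title) ;
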
Result
If the parameters are correct get_window_attr returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
get_window_attr is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
open_window, set_window_attr
Module
Foundation

get_window_extents ( Hlong WindowHandle, Hlong *Row, Hlong *Column,


Hlong *Width, Hlong *Height )

T_get_window_extents ( const Htuple WindowHandle, Htuple *Row,


Htuple *Column, Htuple *Width, Htuple *Height )

Information about a window’s size and position.


get_window_extents returns the position of the upper left corner, as well as width and height of the output
window.
Attention
Size and position of a window may be modified by the window manager without an explicit instruction in the
program. Therefore the values that are returned by get_window_extents may change due to such side effects.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong *
Row index of upper left corner of the window.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong *
Column index of upper left corner of the window.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong *
Window width.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong *
Window height.


Example

open_window(100,100,200,200,"root","visible","",&WindowHandle) ;
fwrite_string("Move the window with the mouse!") ;
fnew_line() ;
create_tuple(&String,1) ;
do
{
get_mbutton(WindowHandle,_,_,&Button) ;
get_window_extents(WindowHandle,&Row,&Column,&Width,&Height) ;
sprintf(buf,"Row %d Col %d ",Row,Column) ;
set_s(String,buf,0) ;
T_fwrite_string(String) ;
fnew_line() ;
}
while(Button < 4) ;

Result
If the window is valid get_window_extents returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
get_window_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
set_window_extents, open_window, open_textwindow
Module
Foundation

get_window_pointer3 ( Hlong WindowHandle, Hlong *ImageRed,


Hlong *ImageGreen, Hlong *ImageBlue, Hlong *Width, Hlong *Height )

T_get_window_pointer3 ( const Htuple WindowHandle, Htuple *ImageRed,


Htuple *ImageGreen, Htuple *ImageBlue, Htuple *Width, Htuple *Height )

Access to a window’s pixel data.


get_window_pointer3 enables (in some window systems) direct access to the window’s bitmap. The result
values are the three pointers to the color channels of a 24-bit window (ImageRed, ImageGreen, ImageBlue),
as well as the window size (Width, Height). In C the type of the pixel data is unsigned char.
Attention
get_window_pointer3 is usable only for window type ’pixmap’.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. ImageRed (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Pointer to the red channel of the pixel data.
. ImageGreen (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Pointer to the green channel of the pixel data.
. ImageBlue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Pointer to the blue channel of the pixel data.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Length of an image line.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong *
Number of image lines.
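Example

A minimal usage sketch (assuming a window of type ’pixmap’ and the example image "affe"; the inspected pixel
is merely illustrative):

Hlong RedPtr,GreenPtr,BluePtr,Width,Height;
unsigned char *red;

read_image(&Image,"affe") ;
set_window_type("pixmap") ;
open_window(0,0,-1,-1,0,"visible","",&WindowHandle) ;
disp_image(Image,WindowHandle) ;
get_window_pointer3(WindowHandle,&RedPtr,&GreenPtr,&BluePtr,&Width,&Height) ;
red = (unsigned char*)RedPtr;
printf("red value at (0,0): %d\n",(int)red[0]) ;
close_window(WindowHandle) ;
clear_obj(Image) ;
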


Result
If a window of type ’pixmap’ exists and it is valid get_window_pointer3 returns H_MSG_TRUE. If neces-
sary an exception handling is raised.
Parallelization Information
get_window_pointer3 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
open_window, set_window_type
Module
Foundation

get_window_type ( Hlong WindowHandle, char *WindowType )


T_get_window_type ( const Htuple WindowHandle, Htuple *WindowType )

Get the window type.


get_window_type determines the type of the output device (or of the graphics software, respectively) for the
window. You may query the available types of output devices with the procedure query_window_type. A
typical use of get_window_type is the development of machine-independent software. Possible
values are:

’X-Window’ X-Window Version 11.


’WIN32-Window’ Microsoft Windows.
’pixmap’ Windows are not shown, but managed in memory. By this means HALCON programs can be ported to
computers without a graphical display.
’PostScript’ Objects are output to a PostScript File.
’default’ Current window type.
’system_default’ Default window type for current platform.

Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong / const char *


Window identifier.
Suggested values : WindowHandle ∈ {"default", "system_default"}
. WindowType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Window type
Example

open_window(100,100,200,200,"root","visible","",&WindowHandle) ;
get_window_type(WindowHandle,WindowType) ;
fwrite_string("Window type:") ;
sprintf(buf,"%s",WindowType) ;
fwrite_string(buf) ;
fnew_line() ;

Result
If the window is valid get_window_type returns H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
get_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow


See also
query_window_type, set_window_type, get_window_pointer3, open_window,
open_textwindow
Module
Foundation

move_rectangle ( Hlong WindowHandle, Hlong Row1, Hlong Column1,


Hlong Row2, Hlong Column2, Hlong DestRow, Hlong DestColumn )

T_move_rectangle ( const Htuple WindowHandle, const Htuple Row1,


const Htuple Column1, const Htuple Row2, const Htuple Column2,
const Htuple DestRow, const Htuple DestColumn )

Copy inside an output window.


move_rectangle copies all entries in the rectangle (Row1,Column1), (Row2,Column2) of the output window
to a new position inside the same window. This position is determined by the upper left corner (DestRow,
DestColumn). Regions of the window that are ’uncovered’ by moving the rectangle are set to the
background color.
If you want to move several rectangles at once, you may pass the parameters in the form of tuples.
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong


Window identifier.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong
Row index of upper left corner of the source rectangle.
Default Value : 0
Typical range of values : 0 ≤ Row1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong
Column index of upper left corner of the source rectangle.
Default Value : 0
Typical range of values : 0 ≤ Column1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong
Row index of lower right corner of the source rectangle.
Default Value : 64
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong
Column index of lower right corner of the source rectangle.
Default Value : 64
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. DestRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row index of upper left corner of the target position.
Default Value : 64
Typical range of values : 0 ≤ DestRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1


. DestColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong


Column index of upper left corner of the target position.
Default Value : 64
Typical range of values : 0 ≤ DestColumn ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Example

/* "Interactive" copy of a rectangle within the same window */


draw_rectangle1(WindowHandle,&L1,&C1,&L2,&C2) ;
get_mbutton(WindowHandle,&LN,&CN,_) ;
move_rectangle(WindowHandle,L1,C1,L2,C2,LN,CN) ;

Result
If the window is valid and the specified parameters are correct move_rectangle returns H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
move_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
copy_rectangle
See also
open_window, open_textwindow
Module
Foundation

new_extern_window ( Hlong WINHWnd, Hlong WINHDC, Hlong Row,


Hlong Column, Hlong Width, Hlong Height, Hlong *WindowHandle )

T_new_extern_window ( const Htuple WINHWnd, const Htuple WINHDC,


const Htuple Row, const Htuple Column, const Htuple Width,
const Htuple Height, Htuple *WindowHandle )

Create a virtual graphics window under Windows NT.


new_extern_window opens a new virtual window. Virtual means that a new window will not be created, but
the window whose Windows NT handle is given in the parameter WINHWnd is used to perform output of gray
value data, regions, graphics as well as to perform textual output. Visualization parameters for the output of data
can be set either using HALCON commands or by using the appropriate Windows NT functions.
Example: setting of the drawing color:

HALCON:
set_color(WindowHandle,"green");
disp_region(region,WindowHandle);

Windows NT:
HPEN penold;
HPEN penGreen = CreatePen(PS_SOLID,1,RGB(0,255,0));
penold = (HPEN)SelectObject((HDC)WINHDC,penGreen);
disp_region(region,WindowHandle);

Interactive operators, for example draw_region, draw_circle or get_mbutton cannot be used in this
window. The following operators can be used:

• Output of gray values: set_paint, set_comprise, ( set_lut and set_lut_style after output)


• Regions: set_color, set_rgb, set_hsi, set_gray, set_pixel, set_shape,
set_line_width, set_insert, set_line_style, set_draw
• Image part: set_part
• Text: set_font

You may query the currently set values by calling procedures like get_shape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling operators like
query_color.
The parameter WINHWnd is used to pass the window handle of the Windows NT window, in which output should
be done. The parameter WINHDC is used to pass the device context of the window WINHWnd. This device context
is used in the output routines of HALCON.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: Height-1), the column index grows to the right (maximal: Width-1).
You may use the value -1 for the parameters Width and Height. This means that the corresponding value is chosen
automatically. In particular, this is important if the aspect ratio of the pixels is not 1.0 (see set_system). If
one of the two parameters is set to -1, it will be chosen according to the size that results from the aspect ratio of the
pixels. If both parameters are set to -1, they will be set to the current image format.
The position and size of a window may change during the runtime of a program. This may be caused by calling
set_window_extents, but also by external influences (window manager). For the latter case the procedure
get_window_extents is provided.
Opening a window causes the assignment of a default font. It is used in connection with procedures
like write_string and you may change it by performing set_font after calling open_window.
On the other hand, you have the possibility to specify a default font by calling set_system
(’default_font’,<Fontname>) before opening a window (and all following windows; see also
query_font).
You may set the color of graphics and font, which is used for output procedures like disp_region or
disp_circle, by calling set_rgb, set_hsi, set_gray or set_pixel. Calling set_insert
specifies how graphics is combined with the content of the image repeat memory. For example, by calling
set_insert(::’not’:) you can achieve that text is erased when it is written a second time at the same position.
The content of the window is not saved, if other windows overlap the window. This must be done in the program
code that handles the Windows NT window in the calling program.
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling procedure
set_part in order to represent a logical clipping of the image format. In particular this implies that only this
part (appropriately scaled) of images and regions is displayed. Before you close the external Windows NT window,
you have to close the HALCON window.
Steps to use new_extern_window:

Creation: • Create a Windows-window.


• Call new_extern_window with the WINHWnd of the above created window.
Use: • Before drawing in the window you have to call the operator set_window_dc. This ensures that the HALCON
drawing routines use the right DC. After drawing, call set_window_dc again, but this time with the
address of a long set to zero; this ensures that HALCON can delete the created graphics objects.
Destroy: • Call close_window.

Attention
Note that parameters such as Row, Column, Width and Height are constrained by the output device, i.e., the
size of the Windows NT desktop.
Parameter

. WINHWnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong


Windows window handle of a previously created window.
Restriction : WINHWnd ≠ 0
. WINHDC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Device context of WINHWnd.
Restriction : WINHDC ≠ 0


. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong


Row coordinate of upper left corner.
Default Value : 0
Restriction : Row ≥ 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column coordinate of upper left corner.
Default Value : 0
Restriction : Column ≥ 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong
Width of the window.
Default Value : 512
Restriction : (Width > 0) ∨ (Width = -1)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong
Height of the window.
Default Value : 512
Restriction : (Height > 0) ∨ (Height = -1)
. WindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong *
Window identifier.
Example (Syntax: C++)

HTuple m_tHalconWindow ;
Hobject m_objImage ;

WM_CREATE:
/* here you should create your extern halcon window*/
HTuple tWnd, tDC ;
::set_check("~father") ;
tWnd = (INT)((INT*)&m_hWnd) ;
tDC = (INT)(INT*)GetWindowDC() ;
::new_extern_window(tWnd, tDC, 0, 0, sizeTotal.cx, sizeTotal.cy, &m_tHalconWindow) ;
::set_check("father") ;

WM_PAINT:
/* here you can draw halcon objects */
long l = 0 ;
if (m_tHalconWindow != -1) {
/* don't forget to set the dc !! */
HTuple tDC((INT)(INT*)&pDC->m_hDC) ;
HTuple tDCNull((INT)(INT*)&l) ;
::set_window_dc(m_tHalconWindow,tDC) ;
::disp_obj(pDoc->m_objImage, m_tHalconWindow) ;
/* release the graphic objects */
::set_window_dc(m_tHalconWindow, tDCNull) ;
}

WM_CLOSE:
/* close the halcon window */
if (m_tHalconWindow != -1) {
::close_window(m_tHalconWindow) ;
}

Result
If the values of the specified parameters are correct new_extern_window returns H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
new_extern_window is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db


Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window, open_textwindow
See also
open_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation

open_textwindow ( Hlong Row, Hlong Column, Hlong Width, Hlong Height,


Hlong BorderWidth, const char *BorderColor,
const char *BackgroundColor, Hlong FatherWindow, const char *Mode,
const char *Machine, Hlong *WindowHandle )

T_open_textwindow ( const Htuple Row, const Htuple Column,


const Htuple Width, const Htuple Height, const Htuple BorderWidth,
const Htuple BorderColor, const Htuple BackgroundColor,
const Htuple FatherWindow, const Htuple Mode, const Htuple Machine,
Htuple *WindowHandle )

Open a textual window.


open_textwindow opens a new textual window, which can be used to perform textual input and output, as
well as to perform output of images. All output ( write_string, read_string, disp_region, etc.) is
redirected to this window, if the same logical window number WindowHandle is used.
Besides the mouse cursor textual windows possess also a textual cursor which indicates the current writing position
(more exactly: the lower left corner of the output string, without consideration of descenders). Its position is
indicated by an underscore or another shape (the indication of this position may also be disabled (= default
setting); cf. set_tshape). You may set or query the position by calling the procedures set_tposition or
get_tposition.
After a textual window has been opened, the position of the cursor is set to (H,0), where H denotes the height of
the default font without descenders. The cursor itself is not shown. Hence, output starts in the upper
left corner of the window.
You may query the colors of the background and the image edges by calling query_color. In the same way
you may use query_color in a window of type ’invisible’. During output ( write_string) you may allow
text to be clipped at the window border by calling set_check(::’~text’:). This suppresses error
messages if text extends beyond the edge of the window.
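For instance (a minimal sketch; the text is merely illustrative):

set_check("~text") ;
write_string(WindowHandle,"a long message that may extend beyond the window border") ;
set_check("text") ;
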
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximal: Height-1), the column index grows to the right (maximal: Width-1).
The parameter Machine indicates the name of the computer which has to open the window. In the case of an
X window, for TCP/IP only the host name is given, whereas for DECnet a colon has to be appended to the name. The
’server’ or the ’screen’, respectively, are not specified. If the empty string is passed, the environment variable
DISPLAY is used. It indicates the target computer, where the name is given in the common syntax

<Host>:0.0
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to
determine the father window for the window to be opened. In case the control ’father’ is set via set_check,
FatherWindow relates to the ID of a HALCON window, otherwise ( set_check(’~father’)) it relates to the
ID of an operating system window. If FatherWindow is passed the value 0 or ’root’, then under Windows and
Unix the desktop and the root window become the father window, respectively. In this case, the value of the control
’father’ (set via set_check) is irrelevant.
Position and size of a window may change during the runtime of a program. This may be caused by calling
set_window_extents, but also by external influences (window manager). For the latter case the procedure
get_window_extents is provided.
Opening a window causes the assignment of a so-called default font. It is used in connection with
procedures like write_string and you may overwrite it by performing set_font after calling
open_textwindow. On the other hand you have the possibility to specify a default font by calling
set_system(’default_font’,<Fontname>) before opening a window (and all following windows; see
also query_font).
You may set the color of the font ( write_string, read_string) by calling set_color, set_rgb,
set_hsi, set_gray or set_pixel. Calling set_insert specifies how the text or the graphics,
respectively, is combined with the content of the image repeat memory. For example, by calling
set_insert(::’not’:) you can achieve that text is erased when it is written a second time at the same position.
Normally every output (e.g., write_string, disp_region, disp_circle, etc.) in a window is
terminated by a ’flush’. This causes the data to be fully visible on the display after termination of the output procedure.
But this is not necessary in all cases, in particular if output is performed permanently or if a mouse procedure is
active. In such cases it is more favorable (i.e., faster) to store the data until sufficient data is available. You
may disable this behavior by calling set_system(’flush_graphic’,’false’).
The content of windows is saved (if this is supported by the driver software); i.e., it is preserved even
if the window is hidden by other windows. But this is not necessary in all cases: If you use a textual window,
e.g., only as a parent window for other windows, you may suppress this backing mechanism and thereby save the
necessary memory. You achieve this by calling set_system
(’backing_store’,’false’) before opening the window.
Difference: graphical window - textual window
• In contrast to graphical windows ( open_window), more parameters (color, border) can be specified for a
textual window when opening it.
• Only textual windows may be used for the input of user data ( read_string).
• In textual windows, the output of images, regions and graphics is ’clipped’ at the window border, whereas
in graphical windows the output is ’zoomed’ to fit the window.
• The coordinate system (e.g., with get_mbutton or get_mposition) consists of display coordinates,
independent of the image size. The maximum coordinates are equal to the size of the window minus 1. In
contrast to this, graphical windows ( open_window) always use a coordinate system that corresponds to
the image format.

The parameter Mode specifies the mode of the window. It can have the following values:

’visible’: Normal mode for textual windows: The window is created according to the parameters and all inputs
and outputs are possible.
’invisible’: Invisible windows are not displayed on the screen. Parameters like Row, Column, BorderWidth,
BorderColor, BackgroundColor and FatherWindow do not have any meaning. Output to these
windows has no effect. Input ( read_string, mouse, etc.) is not possible. You may use these windows
to query display parameters of an output device without opening a (visible) window. General queries
are, e.g., query_color and get_string_extents.
’transparent’: These windows are transparent: the window itself is not visible (edge and background), but
all the other operations are possible and all output is displayed. Parameters like BorderColor and
BackgroundColor do not have any meaning. A common use for this mode is the creation of mouse
sensitive regions.
’buffer’: These are also not visible windows. The output of images, regions and graphics is not visible on
the display, but is stored in memory. Parameters like Row, Column, BorderWidth, BorderColor,
BackgroundColor and FatherWindow do not have any meaning. You may use buffer windows, if you
prepare output (in the background) and copy it finally with copy_rectangle in a visible window. An-
other usage might be the rapid processing of image regions during interactive manipulations. Textual input
and mouse interaction are not possible in this mode.


Attention
You have to keep in mind that parameters like Row, Column, Width and Height are restricted by the output
device. If a father window (FatherWindow <> ’root’) is specified, the coordinates are relative to this window.
Parameter

. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong


Row index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Row (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row ≥ 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Column (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column ≥ 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong
Window’s width.
Default Value : 256
Typical range of values : 0 ≤ Width (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong
Window’s height.
Default Value : 256
Typical range of values : 0 ≤ Height (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Height > 0
. BorderWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Window border’s width.
Default Value : 2
Typical range of values : 0 ≤ BorderWidth (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : BorderWidth ≥ 0
. BorderColor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Window border’s color.
Default Value : "white"
. BackgroundColor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Background color.
Default Value : "black"
. FatherWindow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Logical number of the father window. For the display as father you may specify ’root’ or 0.
Default Value : 0
Restriction : FatherWindow ≥ 0
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Window mode.
Default Value : "visible"
List of values : Mode ∈ {"visible", "invisible", "transparent", "buffer"}
. Machine (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the computer on which the window is to be opened, or the empty string.
Default Value : ""


. WindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong *


Window identifier.
Example

open_textwindow(0,0,900,600,1,"black","slate blue","root","visible",
"",&WindowHandle1) ;
open_textwindow(10,10,300,580,3,"red","blue",WindowHandle1,"visible",
"",&WindowHandle2) ;
open_window(10,320,570,580,WindowHandle1,"visible","",&WindowHandle) ;
set_color(WindowHandle,"red") ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
create_tuple(&String,1) ;
do {
get_mposition(WindowHandle,&Row,&Column,&Button) ;
get_grayval(Image,Row,Column,1,&Gray) ;
sprintf(buf,"Position( %d,%d ) ",Row,Column) ;
set_s(String,buf,0) ;
T_fwrite_string(String) ;
new_line(WindowHandle) ;
}
while(Button < 4) ;
close_window(WindowHandle) ;
clear_obj(Image) ;

Result
If the values of the specified parameters are correct open_textwindow returns H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
open_textwindow is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window
See also
write_string, read_string, new_line, get_string_extents, get_tposition,
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Module
Foundation

open_window ( Hlong Row, Hlong Column, Hlong Width, Hlong Height,


Hlong FatherWindow, const char *Mode, const char *Machine,
Hlong *WindowHandle )

T_open_window ( const Htuple Row, const Htuple Column,


const Htuple Width, const Htuple Height, const Htuple FatherWindow,
const Htuple Mode, const Htuple Machine, Htuple *WindowHandle )

Open a graphics window.


open_window opens a new window, which can be used to perform output of gray value data, regions, graphics as
well as to perform textual output. All output ( disp_region, disp_image, etc.) is redirected to this window,
if the same logical window number WindowHandle is used.
The background of the created window is set to black in advance and it has a white border, which is 2 pixels wide
(see also set_window_attr(’border_width’,<Width>)).
Certain parameters used for the editing of output data are assigned to a window. These parameters are considered
during the output itself (e.g., with disp_image or disp_region). They are not specified by an output
procedure, but by ’configuration procedures’. If you want to set, e.g., the color red for the output of regions, you
have to call set_color(WindowHandle,’red’) before calling disp_region. These parameters are
always set for the window with the logical window number WindowHandle and remain assigned to the window
until they are overwritten. You may use the following configuration procedures:

• Output of gray values: set_paint, set_comprise, ( set_lut and set_lut_style after output)
• Regions: set_color, set_rgb, set_hsi, set_gray, set_pixel, set_shape,
set_line_width, set_insert, set_line_style, set_draw
• Image clipping: set_part
• Text: set_font

You may query the currently set values by calling procedures like get_shape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling query_color.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximal: Height-1), the column index grows to the right (maximal: Width-1). You
have to keep in mind, that the range of the coordinate system is independent of the window size. It is specified
only through the image format (see reset_obj_db).
The parameter Machine indicates the name of the computer which has to open the window. In the case of an
X window, for TCP/IP only the host name is given, whereas for DECnet a colon has to be appended to the name. The
’server’ or the ’screen’, respectively, are not specified. If the empty string is passed, the environment variable
DISPLAY is used. It indicates the target computer, where the name is given in the common syntax

<Host>:0.0
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to
determine the father window for the window to be opened. In case the control ’father’ is set via set_check,
FatherWindow relates to the ID of a HALCON window, otherwise ( set_check(’~father’)) it relates to the
ID of an operating system window. If FatherWindow is passed the value 0 or ’root’, then under Windows and
Unix the desktop and the root window become the father window, respectively. In this case, the value of the control
’father’ (set via set_check) is irrelevant.
You may use the value -1 for the parameters Width and Height. This means that the corresponding value is
chosen automatically. In particular this is important if the aspect ratio of the pixels is not 1.0 (see
set_system): If one of the two parameters is set to -1, it will be chosen according to the size that results
from the aspect ratio of the pixels. If both parameters are set to -1, they will be set to the maximum image format
that is currently used (further information about the currently used maximum image format can be found in the
description of get_system under ’width’ or ’height’).
Position and size of a window may change during the runtime of a program. This may be caused by calling
set_window_extents, but also by external influences (window manager). For the latter case the procedure
get_window_extents is provided.
Opening a window causes the assignment of a so-called default font. It is used in connection with
procedures like write_string and you may overwrite it by performing set_font after calling
open_window. On the other hand you have the possibility to specify a default font by calling set_system
(’default_font’,<Fontname>) before opening a window (and all following windows; see also
query_font).
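For instance (a minimal sketch; the font name is merely illustrative and depends on the system):

set_system("default_font","fixed") ;
open_window(0,0,256,256,0,"visible","",&WindowHandle) ;
write_string(WindowHandle,"written with the default font") ;
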
You may set the color of graphics and font, which is used for output procedures like disp_region or
disp_circle, by calling set_rgb, set_hsi, set_gray or set_pixel. Calling set_insert
specifies how graphics is combined with the content of the image repeat memory. For example, by calling
set_insert(::’not’:) you can achieve that text is erased when it is written a second time at the same position.


Normally every output (e.g., disp_image, disp_region, disp_circle, etc.) in a window is terminated
by a so-called ’flush’. This causes the data to be fully visible on the display after termination of the output procedure.
But this is not necessary in all cases, in particular if output is performed permanently or if a mouse
procedure is active. In such cases it is more favorable (i.e., faster) to store the data until sufficient data is available.
You may disable this behavior by calling set_system(’flush_graphic’,’false’).
The content of windows is saved (if this is supported by the driver software); i.e., it is preserved even if the
window is hidden by other windows. But this is not necessary in all cases: If the content of a window is rebuilt
permanently ( copy_rectangle), you may suppress this backing mechanism and hence you can
save the necessary memory. This is done by calling set_system(’backing_store’,’false’) before
opening a window. In doing so you save not only memory but also computation time. This is significant for the
output of video clips (see copy_rectangle).
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling procedure
set_part in order to display a logical clipping of the image format. In particular this implies that only this
clipping (appropriately scaled) of images and regions is displayed.
Difference: graphical window - textual window

• For graphical windows the layout is not as flexible as for textual windows.
• Only textual windows may be used for the input of user data ( read_string).
• During the output of images, regions and graphics a ’zooming’ is performed in graphical windows:
independent of the size and aspect ratio of the window, images are transformed such that they fill the
window completely. In contrast, in textual windows the output does not take the size of the window into
account (apart from clipping where necessary).
• In graphical windows the coordinate system of the window corresponds to the coordinate system of
the image format. In textual windows, the coordinate system is always equal to the display coordinates,
independent of the image size.

The parameter Mode determines the mode of the window. It may have the following values:

’visible’: Normal mode for graphical windows: The window is created according to the parameters and all input
and output are possible.
’invisible’: Invisible windows are not displayed on the screen. Parameters like Row, Column and
FatherWindow do not have any meaning. Output to these windows has no effect. Input ( read_string,
mouse, etc.) is not possible. You may use these windows to query display parameters of an
output device without opening a (visible) window. Common queries are, e.g., query_color and
get_string_extents.
’transparent’: These windows are transparent: the window itself is not visible (edge and background), but all
the other operations are possible and all output is displayed. A common use for this mode is the creation of
mouse sensitive regions.
’buffer’: These are also not visible windows. The output of images, regions and graphics is not visible on the
display, but is stored in memory. Parameters like Row, Column and FatherWindow do not have any
meaning. You may use buffer windows, if you prepare output (in the background) and copy it finally with
copy_rectangle in a visible window. Another usage might be the rapid processing of image regions
during interactive manipulations. Textual input and mouse interaction are not possible in this mode.

Attention
You have to keep in mind that parameters like Row, Column, Width and Height are constrained by the output
device. If you specify a father window (FatherWindow <> ’root’) the coordinates are relative to this window.
Parameter

. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong


Row index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Row (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Row ≥ 0


. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong


Column index of upper left corner.
Default Value : 0
Typical range of values : 0 ≤ Column (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Column ≥ 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong
Width of the window.
Default Value : 256
Typical range of values : 0 ≤ Width (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Width > 0) ∨ (Width = -1)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong
Height of the window.
Default Value : 256
Typical range of values : 0 ≤ Height (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Height > 0) ∨ (Height = -1)
. FatherWindow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Logical number of the father window. To specify the display as father you may enter ’root’ or 0.
Default Value : 0
Restriction : FatherWindow ≥ 0
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Window mode.
Default Value : "visible"
List of values : Mode ∈ {"visible", "invisible", "transparent", "buffer"}
. Machine (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the computer on which you want to open the window. Otherwise the empty string.
Default Value : ""
. WindowHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong *
Window identifier.
Example

open_window(0,0,400,-1,"root","visible","",&WindowHandle) ;
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
write_string(WindowHandle,"File: fabrik.ima") ;
new_line(WindowHandle) ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
set_lut(WindowHandle,"temperature") ;
set_color(WindowHandle,"blue") ;
write_string(WindowHandle,"temperature") ;
new_line(WindowHandle) ;
write_string(WindowHandle,"Draw Rectangle") ;
new_line(WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
new_line(WindowHandle) ;

Result
If the values of the specified parameters are correct open_window returns H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
open_window is reentrant, local, and processed without parallelization.


Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_textwindow
See also
disp_region, disp_image, disp_color, set_lut, query_color, set_color, set_rgb,
set_hsi, set_pixel, set_gray, set_part, set_part_style, query_window_type,
get_window_type, set_window_type, get_mposition, set_tposition,
set_window_extents, get_window_extents, set_window_attr, set_check, set_system
Module
Foundation

T_query_window_type ( Htuple *WindowTypes )

Query all available window types.


query_window_type returns a tuple that contains all devices or software systems, respectively, which can be
used to display image objects. query_window_type is useful when developing machine-independent
programs. Possible values are:

’X-Window’ X-Window Version 11.


’pixmap’ Windows are not displayed, but managed in memory. In this manner it is possible to port HALCON
programs to computers without graphical display.
’PostScript’ Objects are output to a PostScript File.

Parameter
. WindowTypes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of available window types.
Result
query_window_type always returns H_MSG_TRUE.
Parallelization Information
query_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Module
Foundation

set_window_attr ( const char *AttributeName,


const char *AttributeValue )

T_set_window_attr ( const Htuple AttributeName,


const Htuple AttributeValue )

Set window characteristics.


You may use set_window_attr to set specific characteristics of graphics windows. With it you may modify
the following default parameters of a window:

’border_width’ Width of the window border in pixels. Not implemented under Windows.
’border_color’ Color of the window border. Not implemented under Windows.


’background_color’ Background color of the window.


’window_title’ Name of the window in the titlebar.

Attention
You have to call set_window_attr before calling open_window.
Parameter
. AttributeName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the attribute that should be modified.
List of values : AttributeName ∈ {"border_width", "border_color", "background_color", "window_title"}
. AttributeValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong
Value of the attribute that should be set.
List of values : AttributeValue ∈ {0, 1, 2, "white", "black", "MyName", "default"}
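A minimal usage sketch, setting two attributes before opening a window (the attribute values are chosen for
illustration only):

set_window_attr("window_title","MyName") ;
set_window_attr("background_color","black") ;
open_window(0,0,400,400,"root","visible","",&WindowHandle) ;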
Result
If the parameters are correct set_window_attr returns H_MSG_TRUE. If necessary an exception handling is
raised.
Parallelization Information
set_window_attr is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
open_window, get_window_attr
Module
Foundation

set_window_dc ( Hlong WindowHandle, Hlong WINHDC )


T_set_window_dc ( const Htuple WindowHandle, const Htuple WINHDC )

Set the device context of a virtual graphics window (Windows NT).


set_window_dc sets the device context of a window previously opened with new_extern_window. All
output ( disp_region, disp_image, etc.) is done in the window with this device context.
The parameter WINHDC contains the device context of the window in which HALCON should output its data. This
device context is used in all output routines of HALCON.
Attention
The window WindowHandle has to be created with new_extern_window beforehand.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. WINHDC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Device context of WINHWnd.
Restriction : WINHDC ≠ 0
Example

hWnd = createWINDOW(...) ;
new_extern_window(hWnd,0,0,400,-1,&WindowHandle) ;
set_window_dc(WindowHandle,hdc) ;
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
write_string(WindowHandle,"File: fabrik.ima") ;
new_line(WindowHandle) ;
get_mbutton(WindowHandle,_,_,_) ;
set_lut(WindowHandle,"temperature") ;


set_color(WindowHandle,"blue") ;
write_string(WindowHandle,"temperature") ;
new_line(WindowHandle) ;
write_string(WindowHandle,"Draw Rectangle") ;
new_line(WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
new_line(WindowHandle) ;

Result
If the values of the specified parameters are correct, set_window_dc returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
set_window_dc is reentrant, local, and processed without parallelization.
Possible Predecessors
new_extern_window
Possible Successors
disp_image, disp_region
See also
new_extern_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation

set_window_extents ( Hlong WindowHandle, Hlong Row, Hlong Column,


Hlong Width, Hlong Height )

T_set_window_extents ( const Htuple WindowHandle, const Htuple Row,


const Htuple Column, const Htuple Width, const Htuple Height )

Modify position and size of a window.


set_window_extents positions the upper left corner of the output window at (Row,Column) and changes
the size of the window to Width and Height at the same time.
Attention
If you modify the size of a window, the displayed data is not adapted to the new format automatically. This has
to be done by the program by outputting the data again.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Row index of upper left corner in target position.
Default Value : 0
Typical range of values : 0 ≤ Row (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of upper left corner in target position.
Default Value : 0
Typical range of values : 0 ≤ Column (lin)
Minimum Increment : 1
Recommended Increment : 1


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong


Width of the window.
Default Value : 512
Typical range of values : 0 ≤ Width (lin)
Minimum Increment : 1
Recommended Increment : 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong
Height of the window.
Default Value : 512
Typical range of values : 0 ≤ Height (lin)
Minimum Increment : 1
Recommended Increment : 1
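A minimal sketch; since the displayed data is not adapted automatically, the image is displayed again after
resizing (position and size values are chosen for illustration only):

open_window(0,0,256,256,"root","visible","",&WindowHandle) ;
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
/* move the window and enlarge it to 512 x 512 pixels */
set_window_extents(WindowHandle,100,100,512,512) ;
/* the displayed content is not adapted automatically, so display it again */
disp_image(Image,WindowHandle) ;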
Result
If the window is valid and the parameters are correct set_window_extents returns H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
set_window_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
get_window_extents, open_window, open_textwindow
Module
Foundation

set_window_type ( const char *WindowType )


T_set_window_type ( const Htuple WindowType )

Specify a window type.


set_window_type determines on which type of output device the output is displayed. This specification is
used by the operator open_window when opening windows. You may open different windows on different
types of output devices; therefore, you have to specify the desired type before opening. You may query the
available types of output devices by calling query_window_type. Possible values
are:

’X-Window’ X-Window Version 11.


’WIN32-Window’ Microsoft Windows.
’pixmap’ Windows are not displayed, but managed in memory only. In this manner you may port HALCON
programs to computers without graphical display.
’PostScript’ Objects are output to a PostScript File.
’system_default’ Default for the current platform.

set_window_type is useful for developing machine-independent programs.


Parameter

. WindowType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Name of the window type which has to be set.
Default Value : "X-Window"
List of values : WindowType ∈ {"X-Window", "WIN32-Window", "pixmap", "PostScript",
"system_default"}
Result
If the type of the output device is available, then set_window_type returns H_MSG_TRUE. If necessary an
exception handling is raised.


Parallelization Information
set_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
open_window, open_textwindow, query_window_type, get_window_type
Module
Foundation

slide_image ( Hlong WindowHandleSource1, Hlong WindowHandleSource2,


Hlong WindowHandle )

T_slide_image ( const Htuple WindowHandleSource1,


const Htuple WindowHandleSource2, const Htuple WindowHandle )

Interactive output from two window buffers.


slide_image divides the window horizontally into two logical areas depending on the mouse position. The content
of the first specified window is copied into the upper area, the content of the second window into the lower
area. While the left mouse button is pressed you may move the borderline between the two areas (you may also
move the mouse outside the window; in this case the position of the mouse relative to the window determines the
borderline).
Pressing the right mouse button in the window terminates slide_image.
A useful application of slide_image is the visualization of the effect of a filtering operation
on an image. The output is directed to the currently set window (WindowHandle).
Attention
The three windows must have the same size and have to reside on the same computer.
Parameter

. WindowHandleSource1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong


Logical window number of the "upper window".
. WindowHandleSource2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Logical window number of the "lower window".
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example

read_image(&Image,"fabrik") ;
sobel_amp(Image,&Amp,"sum_abs",3) ;
open_window(0,0,-1,-1,"root","buffer","",&WindowHandle) ;
disp_image(Amp,WindowHandle) ;
sobel_dir(Image,&Dir,"sum_abs",3) ;
open_window(0,0,-1,-1,"root","buffer","",&WindowHandle) ;
disp_image(Dir,WindowHandle) ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
slide_image(Puffer1,Puffer2,WindowHandle) ;

Result
If both windows exist and are valid slide_image returns H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
slide_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow


Alternatives
copy_rectangle, get_mposition
See also
open_window, open_textwindow, move_rectangle
Module
Foundation



Chapter 5

Image

5.1 Access
get_grayval ( const Hobject Image, Hlong Row, Hlong Column,
double *Grayval )

T_get_grayval ( const Hobject Image, const Htuple Row,


const Htuple Column, Htuple *Grayval )

Access the gray values of an image object.


The parameter Grayval is a tuple of floating point numbers, or integer numbers, respectively, which returns the
gray values of several pixels of Image. The line coordinates of the pixels are in the tuple Row, the columns in
Column.
Attention
The type of the values of Grayval depends on the pixel type of the image.
Gray values of pixels that do not belong to the image can also be accessed; their values are undefined.
The operator get_grayval incurs considerable overhead. Typically, it is used to get single gray values of
an image (e.g., get_mposition followed by get_grayval). It is not suitable for programming image
processing operations such as filters. In that case it is more efficient to use get_image_pointer1
or to directly use the C interface for integrating your own procedures.
Parameter

. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image whose gray value is to be accessed.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Line numbers of pixels to be viewed.
Default Value : 0
Suggested values : Row ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Row ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : (0 ≤ Row) ∧ (Row < height(Image))
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column numbers of pixels to be viewed.
Default Value : 0
Suggested values : Column ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Column ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Column = Row
Restriction : (0 ≤ Column) ∧ (Column < width(Image))


. Grayval (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . grayval(-array) ; (Htuple .) double * / Hlong *


Gray values of indicated pixels.
Number of elements : Grayval = Row
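A minimal sketch for reading a single gray value (the coordinates are chosen for illustration only):

Hobject Image;
double Gray;

read_image(&Image,"fabrik") ;
/* gray value of the pixel in row 100, column 200 */
get_grayval(Image,100,200,&Gray) ;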
Result
If the state of the parameters is correct the operator get_grayval returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_grayval is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
get_image_pointer1
See also
set_grayval
Module
Foundation

get_image_pointer1 ( const Hobject Image, Hlong *Pointer, char *Type,


Hlong *Width, Hlong *Height )

T_get_image_pointer1 ( const Hobject Image, Htuple *Pointer,


Htuple *Type, Htuple *Width, Htuple *Height )

Access the pointer of a channel.


The operator get_image_pointer1 returns a pointer to the first channel of the image Image. Addition-
ally, the image type (Type = ’byte’, ’int2’, ’uint2’, etc.) and the image size (width and height) are returned.
Consequently, a direct access to the image data in the HALCON database via the pointer is possible from the pro-
gramming language in which HALCON is used. An image is stored in HALCON linearized in row major order,
i.e., line by line.
Attention
The pointer returned by get_image_pointer1 may only be used as long as the corresponding image object
exists in the HALCON database. This is the case as long as the corresponding variable in the programming
language in which HALCON is used is valid. If this is not observed, unexpected behavior or program crashes may
result.
If data is written to an existing image via the pointer, all image objects that reference the image are modified. If, for
example, the domain of an image is restricted via reduce_domain, the original image object with the full do-
main and the image object with the reduced domain share the same image matrix (i.e., get_image_pointer1
returns the same pointer for both images). Consequently, if one of the two images in this example is modified, both
image objects are affected. Therefore, if the pointer is used to write image data in the programming language
in which HALCON is used, the image data should be written into an image object that has been created solely for
this purpose, e.g., using gen_image1.
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. Pointer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .pointer ; Hlong *
Pointer to the image data in the HALCON database.
. Type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of image.
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic", "complex",
"vector_field"}
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Width of image.


. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong *


Height of image.
Example

Hobject Bild;
char typ[128];
Hlong width,height;
unsigned char *ptr;

read_image(&Bild,"fabrik");
get_image_pointer1(Bild,(Hlong*)&ptr,typ,&width,&height);
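
Because the image data is linearized in row major order, a pixel of a single-channel ’byte’ image can be
addressed directly via the returned pointer; continuing the fragment above (coordinates chosen for illustration):

/* gray value of the pixel in row 100, column 200 of the ’byte’ image */
unsigned char gray = ptr[100*width+200];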

Result
The operator get_image_pointer1 returns the value H_MSG_TRUE if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_image_pointer1 is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
set_grayval, get_grayval, get_image_pointer3
See also
paint_region, paint_gray
Module
Foundation

get_image_pointer1_rect ( const Hobject Image, Hlong *PixelPointer,


Hlong *Width, Hlong *Height, Hlong *VerticalPitch,
Hlong *HorizontalBitPitch, Hlong *BitsPerPixel )

T_get_image_pointer1_rect ( const Hobject Image,


Htuple *PixelPointer, Htuple *Width, Htuple *Height,
Htuple *VerticalPitch, Htuple *HorizontalBitPitch,
Htuple *BitsPerPixel )

Access to the image data pointer and the image data inside the smallest rectangle of the domain of the input image.
The operator get_image_pointer1_rect returns the pointer PixelPointer which points to the
beginning of the image data inside the smallest rectangle of the domain of Image. VerticalPitch
corresponds to the width of the input image Image multiplied by the number of bytes per pixel
(HorizontalBitPitch / 8). Width and Height correspond to the size of the smallest rectangle of the
input region. HorizontalBitPitch is the horizontal distance (in bits) between two neighboring pixels.
BitsPerPixel is the number of used bits per pixel. get_image_pointer1_rect is symmetrical to
gen_image1_rect.
Attention
The operator get_image_pointer1_rect should only be used for writing into newly created images, since
otherwise the gray values of other images might be overwritten (see relational structure).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / int4
Input image (Himage).
. PixelPointer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the image data.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Width of the output image.


. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong *


Height of the output image.
. VerticalPitch (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Width(input image)*(HorizontalBitPitch/8).
. HorizontalBitPitch (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Distance between two neighboring pixels in bits.
Default Value : 8
List of values : HorizontalBitPitch ∈ {8, 16, 32}
. BitsPerPixel (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of used bits per pixel.
Default Value : 8
List of values : BitsPerPixel ∈ {8, 16, 32}
Example

Hobject image,reg,imagereduced;
char typ[128];
Hlong width,height,vert_pitch,hori_bit_pitch,bits_per_pix,winID;
unsigned char *ptr;

open_window(0,0,512,512,"root","visible","",&winID);
read_image(&image,"monkey");
draw_region(&reg,winID);
reduce_domain(image,reg,&imagereduced);
get_image_pointer1_rect(imagereduced,(Hlong*)&ptr,&width,&height,
&vert_pitch,&hori_bit_pitch,&bits_per_pix);

Result
The operator get_image_pointer1_rect returns the value H_MSG_TRUE if exactly one image was
passed. The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_image_pointer1_rect is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image1_rect
Alternatives
set_grayval, get_grayval, get_image_pointer3, get_image_pointer1
See also
paint_region, paint_gray, gen_image1_rect
Module
Foundation

get_image_pointer3 ( const Hobject ImageRGB, Hlong *PointerRed,


Hlong *PointerGreen, Hlong *PointerBlue, char *Type, Hlong *Width,
Hlong *Height )

T_get_image_pointer3 ( const Hobject ImageRGB, Htuple *PointerRed,


Htuple *PointerGreen, Htuple *PointerBlue, Htuple *Type,
Htuple *Width, Htuple *Height )

Access the pointers of a colored image.


The operator get_image_pointer3 returns a C pointer to the three channels of a colored image (ImageRGB).
Additionally the image type (Type = ’byte’, ’int2’,’float’ etc.) and the image size (Width and Height) are
returned. Consequently a direct access to the image data in the HALCON database from the HALCON host
language via the pointer is possible. An image is stored in HALCON as a vector of image lines. The three
channels must have the same pixel type and the same size.


Attention
Only one image can be passed. The operator get_image_pointer3 should only be used for writing into newly
created images, since otherwise the gray values of other images might be overwritten (see relational structure).
Parameter

. ImageRGB (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. PointerRed (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the first channel.
. PointerGreen (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the second channel.
. PointerBlue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the third channel.
. Type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of image.
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic", "complex",
"vector_field"}
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Width of image.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong *
Height of image.
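A minimal sketch (the image name is a placeholder for any three-channel image; the channel pointers are
returned via Hlong values, analogous to get_image_pointer1):

Hobject ImageRGB;
char type[128];
Hlong width,height;
unsigned char *r,*g,*b;

read_image(&ImageRGB,"my_rgb_image"); /* placeholder for a three-channel image */
get_image_pointer3(ImageRGB,(Hlong*)&r,(Hlong*)&g,(Hlong*)&b,type,&width,&height);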
Result
The operator get_image_pointer3 returns the value H_MSG_TRUE if exactly one image is passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_image_pointer3 is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
set_grayval, get_grayval, get_image_pointer1
See also
paint_region, paint_gray
Module
Foundation

get_image_time ( const Hobject Image, Hlong *MSecond, Hlong *Second,


Hlong *Minute, Hlong *Hour, Hlong *Day, Hlong *YDay, Hlong *Month,
Hlong *Year )

T_get_image_time ( const Hobject Image, Htuple *MSecond,


Htuple *Second, Htuple *Minute, Htuple *Hour, Htuple *Day,
Htuple *YDay, Htuple *Month, Htuple *Year )

Request time at which the image was created.


The operator get_image_time returns the time at which the image was created.
Parameter

. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. MSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Milliseconds (0..999).


. Second (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *


Seconds (0..59).
. Minute (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Minutes (0..59).
. Hour (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Hours (0..23).
. Day (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Day of the month (1..31).
. YDay (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Day of the year (1..365).
. Month (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Month (1..12).
. Year (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Year (xxxx).
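A minimal sketch that prints the creation time of an image (output formatting is illustrative):

Hobject Image;
Hlong msec,sec,min,hour,day,yday,month,year;

read_image(&Image,"fabrik");
get_image_time(Image,&msec,&sec,&min,&hour,&day,&yday,&month,&year);
printf("created %02ld:%02ld:%02ld\n",(long)hour,(long)min,(long)sec);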
Result
The operator get_image_time returns the value H_MSG_TRUE if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_image_time is reentrant and processed without parallelization.
Possible Predecessors
read_image, grab_image
See also
count_seconds
Module
Foundation

5.2 Acquisition

close_all_framegrabbers ( )
T_close_all_framegrabbers ( )

Close all image acquisition devices.


The operator close_all_framegrabbers closes all currently open image acquisition devices. It is
used to cope with deadlocks resulting from damaged image acquisition handles (in that case the use of
close_framegrabber is impossible).
Attention
close_all_framegrabbers exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. close_all_framegrabbers must not be used in any application.
Result
If it is possible to close all image acquisition devices, the operator close_all_framegrabbers returns the
value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
close_all_framegrabbers is local and processed completely exclusively without parallelization.
Possible Predecessors
grab_image, grab_image_async
See also
open_framegrabber
Module
Foundation


close_framegrabber ( Hlong AcqHandle )


T_close_framegrabber ( const Htuple AcqHandle )

Close specified image acquisition device.


The operator close_framegrabber closes the image acquisition device specified by AcqHandle. In par-
ticular, allocated memory for data buffers is released and the image acquisition device is made available for other
processes.
Parameter
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Hlong
Handle of the image acquisition device to be closed.
Result
If the specified image acquisition device could be closed, close_framegrabber returns the value
H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
close_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
grab_image, grab_image_async
See also
open_framegrabber
Module
Foundation

T_get_framegrabber_lut ( const Htuple AcqHandle, Htuple *ImageRed,


Htuple *ImageGreen, Htuple *ImageBlue )

Query look-up table of the image acquisition device.


The operator get_framegrabber_lut queries the look-up table (LUT) of the image acquisition device spec-
ified by AcqHandle. Note that this operation is not supported for all kinds of image acquisition devices.
Parameter
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Htuple . Hlong
Handle of the acquisition device to be used.
. ImageRed (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Red level of the LUT entries.
. ImageGreen (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Green level of the LUT entries.
. ImageBlue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Blue level of the LUT entries.
Result
The operator get_framegrabber_lut returns the value H_MSG_TRUE if the image acquisition device is
open.
Parallelization Information
get_framegrabber_lut is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
set_framegrabber_lut
See also
set_framegrabber_lut, open_framegrabber
Module
Foundation


get_framegrabber_param ( Hlong AcqHandle, const char *Param,


char *Value )

T_get_framegrabber_param ( const Htuple AcqHandle,


const Htuple Param, Htuple *Value )

Query specific parameters of an image acquisition device.


The operator get_framegrabber_param returns specific parameter values for the image acquisition device
specified by AcqHandle. The standard parameters listed below are available for all image acquisition devices.
Additional parameters may be supported by a specific image acquisition device. A list of those parameters can be
obtained with the query ’parameters’ via info_framegrabber.
Standard values for Param, see open_framegrabber:
’name’ Name of the image acquisition interface.
’horizontal_resolution’ Horizontal resolution of the image acquisition device.
’vertical_resolution’ Vertical resolution of the image acquisition device.
’image_width’ Width of the specified image part.
’image_height’ Height of the specified image part.
’start_row’ Row coordinate of upper left corner of specified image part.
’start_column’ Column coordinate of upper left corner of specified image part.
’field’ Selected video field or full frame.
’bits_per_channel’ Number of transferred bits per pixel and image channel.
’color_space’ Color space of resulting image.
’generic’ Generic value with device-specific meaning.
’external_trigger’ External triggering (’true’ / ’false’).
’camera_type’ Type of used camera (interface-specific).
’device’ Device name of the image acquisition device.
’port’ Port the image acquisition device is connected to.
’line_in’ Camera input line of multiplexer (optional).
Parameter

. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong


Handle of the acquisition device to be used.
. Param (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Parameter of interest.
Default Value : "revision"
Suggested values : Param ∈ {"bits_per_channel", "camera_type", "color_space", "continuous_grabbing",
"device", "external_trigger", "field", "generic", "grab_timeout", "horizontal_resolution", "image_available",
"image_height", "image_width", "line_in", "name", "port", "revision", "start_column", "start_row",
"vertical_resolution", "volatile"}
. Value (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char * / double * / Hlong *
Parameter value.
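A minimal sketch that queries a string-valued standard parameter (AcqName is a placeholder for a suitable
interface name, as in the other examples; the buffer size is illustrative):

char value[1024];

open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",-1,-1,&AcqHandle) ;
/* query the camera type configured for this connection */
get_framegrabber_param(AcqHandle,"camera_type",value) ;
close_framegrabber(AcqHandle) ;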
Result
If the image acquisition device is open and the specified parameter is supported, the operator
get_framegrabber_param returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
get_framegrabber_param is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param


Module
Foundation

grab_data ( Hobject *Image, Hobject *Region, Hobject *Contours,


Hlong AcqHandle, char *Data )

T_grab_data ( Hobject *Image, Hobject *Region, Hobject *Contours,


const Htuple AcqHandle, Htuple *Data )

Grab images and preprocessed image data from the specified image acquisition device.
The operator grab_data grabs images and preprocessed image data via the image acquisition device specified
by AcqHandle. The desired operational mode of the image acquisition device as well as a suitable image part
can be adjusted via the operator open_framegrabber. Additional interface-specific settings can be specified
via set_framegrabber_param. Depending on the current configuration of the image acquisition device,
the preprocessed image data can be returned in terms of images (Image), regions (Region), XLD contours
(Contours), and control data (Data).
Parameter

. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *


Grabbed image data.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Preprocessed image regions.
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Preprocessed XLD contours.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong
Handle of the acquisition device to be used.
. Data (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char * / double * / Hlong *
Preprocessed control data.
Example

/* Select a suitable image acquisition interface name AcqName*/


info_framegrabber(AcqName,"port",&Information,&Values) ;
/* Choose the port P and the input line L your camera is connected to */
open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",P,L,&AcqHandle) ;
/* Grab and segment image */
grab_data(&Image,&Region,&Contours,AcqHandle,Data) ;
/* Process Region... */
close_framegrabber(AcqHandle) ;

Result
If the image acquisition device is open and supports the image acquisition via grab_data, the operator
grab_data returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
grab_data is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param
Possible Successors
grab_data, grab_data_async, grab_image_start, grab_image, grab_image_async,
set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation


grab_data_async ( Hobject *Image, Hobject *Region, Hobject *Contours,


Hlong AcqHandle, double MaxDelay, char *Data )

T_grab_data_async ( Hobject *Image, Hobject *Region, Hobject *Contours,


const Htuple AcqHandle, const Htuple MaxDelay, Htuple *Data )

Grab images and preprocessed image data from the specified image acquisition device and start the next
asynchronous grab.
The operator grab_data_async grabs images and preprocessed image data via the image acquisition device specified
by AcqHandle and starts the next asynchronous grab. The desired operational mode of the image acquisition
device as well as a suitable image part can be adjusted via the operator open_framegrabber. Additional
interface-specific settings can be specified via set_framegrabber_param. The segmented image regions
are returned in Region. Depending on the current configuration of the image acquisition device, the preprocessed
image data can be returned in terms of images (Image), regions (Region), XLD contours (Contours), and
control data (Data).
The grab of the next image is finished by calling grab_data_async or grab_image_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered as too old and a new image is grabbed. If a negative value is assigned to MaxDelay this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_data_async, the asyn-
chronous grab started by grab_data_async is aborted and a new image is grabbed (and waited for).
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Grabbed image data.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Pre-processed image regions.
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Pre-processed XLD contours.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong
Handle of the acquisition device to be used.
. MaxDelay (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms].
Default Value : -1.0
Suggested values : MaxDelay ∈ {-1.0, 20.0, 33.3, 40.0, 66.6, 80.0, 99.9}
. Data (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char * / double * / Hlong *
Pre-processed control data.
Example

/* Select a suitable image acquisition interface name AcqName*/


open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",-1,-1,&AcqHandle) ;
/* Grab image, segment it, and start next grab */
grab_data(&Image1,&Region1,&Contours1,AcqHandle,Data1) ;
/* Process Region1... */
/* Finish asynchronous grab, segment this image, and start next grab */
grab_data_async(&Image2,&Region2,&Contours2,AcqHandle,-1.0,Data2) ;
/* Process Region2... */
close_framegrabber(AcqHandle) ;

Result
If the image acquisition device is open and supports the image acquisition via grab_data_async, the operator
grab_data_async returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
grab_data_async is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param


Possible Successors
grab_data_async, grab_image_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation

grab_image ( Hobject *Image, Hlong AcqHandle )


T_grab_image ( Hobject *Image, const Htuple AcqHandle )

Grab an image from the specified image acquisition device.


The operator grab_image grabs an image via the image acquisition device specified by AcqHandle.
The desired operational mode of the image acquisition device as well as a suitable image part can be ad-
justed via the operator open_framegrabber. Additional interface-specific settings can be specified via
set_framegrabber_param.
Parameter

. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2


Grabbed image.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Hlong
Handle of the acquisition device to be used.
Example

/* Select a suitable image acquisition interface name AcqName*/


info_framegrabber(AcqName,"port",&Information,&Values) ;
/* Choose the port P and the input line L your camera is connected to */
open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",P,L,&AcqHandle) ;
grab_image(Image,AcqHandle) ;
close_framegrabber(AcqHandle) ;

Result
If the image could be acquired successfully, the operator grab_image returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised.
Parallelization Information
grab_image is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image, grab_image_start, grab_image_async, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation

grab_image_async ( Hobject *Image, Hlong AcqHandle, double MaxDelay )


T_grab_image_async ( Hobject *Image, const Htuple AcqHandle,
const Htuple MaxDelay )

Grab an image from the specified image acquisition device and start the next asynchronous grab.


The operator grab_image_async grabs an image via the image acquisition device specified by AcqHandle and starts
the asynchronous grab of the next image. The desired operational mode of the image acquisition device as well
as a suitable image part can be adjusted via the operator open_framegrabber. Additional interface-specific
settings can be specified via set_framegrabber_param.
The grab of the next image is finished by calling grab_image_async or grab_data_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered as too old and a new image is grabbed. If a negative value is assigned to MaxDelay this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_image_async, the
asynchronous grab started by grab_image_async is aborted and a new image is grabbed (and waited for).
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2
Grabbed image.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Hlong
Handle of the acquisition device to be used.
. MaxDelay (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms].
Default Value : -1.0
Suggested values : MaxDelay ∈ {-1.0, 20.0, 33.3, 40.0, 66.6, 80.0, 99.9}
Example

/* Select a suitable image acquisition interface name AcqName*/


open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",-1,-1,&AcqHandle) ;
/* Grab image + start next grab */
grab_image_async(&Image1,AcqHandle,-1.0) ;
/* Process Image1... */
/* Finish asynchronous grab + start next grab */
grab_image_async(&Image2,AcqHandle,-1.0) ;
/* Process Image2... */
close_framegrabber(AcqHandle) ;

Result
If the image acquisition device is open and supports asynchronous grabbing the operator grab_image_async
returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
grab_image_async is reentrant and processed without parallelization.
Possible Predecessors
grab_image_start, open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
grab_image_start, open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation

grab_image_start ( Hlong AcqHandle, double MaxDelay )


T_grab_image_start ( const Htuple AcqHandle, const Htuple MaxDelay )

Start an asynchronous grab from the specified image acquisition device.


The operator grab_image_start starts the asynchronous grab of an image via the image acquisition device
specified by AcqHandle. The desired operational mode of the image acquisition device as well as a suitable


image part can be adjusted via the operator open_framegrabber. Additional interface-specific settings can
be specified via set_framegrabber_param.
The grab is finished via grab_image_async or grab_data_async. If one of those operators is called
more than MaxDelay ms later, the asynchronously grabbed image is considered as too old and a new image is
grabbed. If a negative value is assigned to MaxDelay this control mechanism is deactivated.
Please note that the operator grab_image_start makes sense only when used together with
grab_image_async or grab_data_async. If you call the operators grab_image or grab_data
instead, the asynchronous grab started by grab_image_start is aborted and a new image is grabbed (and
waited for).
Parameter

. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Hlong


Handle of the acquisition device to be used.
. MaxDelay (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms].
Default Value : -1.0
Suggested values : MaxDelay ∈ {-1.0, 20.0, 33.3, 40.0, 66.6, 80.0, 99.9}
Example

/* Select a suitable image acquisition interface name AcqName*/


open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",-1,-1,&AcqHandle) ;
grab_image(&Image1,AcqHandle) ;
/* Start next grab */
grab_image_start(AcqHandle,-1.0) ;
/* Process Image1... */
/* Finish asynchronous grab + start next grab */
grab_image_async(&Image2,AcqHandle,-1.0) ;
/* Process Image2... */
close_framegrabber(AcqHandle) ;

Result
If the image acquisition device is open and supports asynchronous grabbing the operator grab_image_start
returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
grab_image_start is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation

T_info_framegrabber ( const Htuple Name, const Htuple Query,


Htuple *Information, Htuple *ValueList )

Query information about the specified image acquisition interface.


The operator info_framegrabber returns information about the image acquisition device Name. The de-
sired information is specified via Query. A textual description according to the selected topic is returned in
Information. If applicable, ValueList contains a list of supported values. Currently, the following queries
are possible:


’bits_per_channel’: List of all supported values for the parameter ’BitsPerChannel’, see
open_framegrabber.
’camera_type’: Description and list of all supported values for the parameter ’CameraType’, see
open_framegrabber.
’color_space’: List of all supported values for the parameter ’ColorSpace’, see open_framegrabber.
’defaults’: Interface-specific default values in ValueList, see open_framegrabber.
’device’: List of all supported values for the parameter ’Device’, see open_framegrabber.
’external_trigger’: List of all supported values for the parameter ’ExternalTrigger’, see
open_framegrabber.
’field’: List of all supported values for the parameter ’Field’, see open_framegrabber.
’general’: General information (in Information).
’horizontal_resolution’: List of all supported values for the parameter ’HorizontalResolution’, see
open_framegrabber.
’image_height’: List of all supported values for the parameter ’ImageHeight’, see open_framegrabber.
’image_width’: List of all supported values for the parameter ’ImageWidth’, see open_framegrabber.
’info_boards’: Information about actually installed boards or cameras. This data is especially useful for the auto-
detect mechanism of ActivVisionTools and for the Image Acquisition Assistant in HDevelop.
’line_in’: List of all supported values for the parameter ’LineIn’, see open_framegrabber.
’parameters’: List of all interface-specific parameters which are accessible via set_framegrabber_param
or get_framegrabber_param.
’parameters_readonly’: List of all interface-specific parameters which are only accessible via
get_framegrabber_param.
’parameters_writeonly’: List of all interface-specific parameters which are only accessible via
set_framegrabber_param.
’port’: List of all supported values for the parameter ’Port’, see open_framegrabber.
’revision’: Version number of the image acquisition interface.
’start_column’: List of all supported values for the parameter ’StartColumn’, see open_framegrabber.
’start_row’: List of all supported values for the parameter ’StartRow’, see open_framegrabber.
’vertical_resolution’: List of all supported values for the parameter ’VerticalResolution’, see
open_framegrabber.
Please also check the directory doc/html/manuals for documentation about specific image acquisition interfaces.
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library
(Linux/UNIX).
Default Value : "File"
Suggested values : Name ∈ {"1394IIDC", "ABS", "BaumerFCAM", "BitFlow", "DahengCAM",
"DahengFG", "DFG-LC", "DirectFile", "DirectShow", "dPict", "DT315x", "DT3162", "eneo", "eXcite",
"FALCON", "File", "FlashBusMV", "FlashBusMX", "GigEVision", "Ginga++", "GingaDG", "INSPECTA",
"INSPECTA5", "iPORT", "Leutron", "LinX", "LuCam", "MatrixVisionAcquire", "MILLite", "mEnableIII",
"mEnableIV", "mEnableVisualApplets", "MultiCam", "Opteon", "p3i2", "p3i4", "PX", "PXC", "PXD",
"PXR", "pylon", "RangerC", "RangerE", "SaperaLT", "SonyXCI", "TAG", "TWAIN", "uEye",
"VRmUsbCam"}
. Query (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Name of the chosen query.
Default Value : "info_boards"
List of values : Query ∈ {"defaults", "general", "info_boards", "parameters", "parameters_readonly",
"parameters_writeonly", "revision", "bits_per_channel", "camera_type", "color_space", "device",
"external_trigger", "field", "generic", "horizontal_resolution", "image_height", "image_width", "port",
"start_column", "start_row", "vertical_resolution"}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Textual information (according to Query).
. ValueList (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char * / Hlong * / double *
List of values (according to Query).


Example

/* Select a suitable image acquisition interface name AcqName*/


info_framegrabber(AcqName,"port",&Information,&Values) ;
/* Choose the port P and the input line L your camera is connected to */
open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
"default","default","default",P,L,&AcqHandle) ;
grab_image(Image,AcqHandle) ;
close_framegrabber(AcqHandle) ;

Result
If the parameter values are correct and the specified image acquisition interface is available,
info_framegrabber returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
info_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
open_framegrabber
See also
open_framegrabber
Module
Foundation

open_framegrabber ( const char *Name, Hlong HorizontalResolution,


Hlong VerticalResolution, Hlong ImageWidth, Hlong ImageHeight,
Hlong StartRow, Hlong StartColumn, const char *Field,
Hlong BitsPerChannel, const char *ColorSpace, double Generic,
const char *ExternalTrigger, const char *CameraType,
const char *Device, Hlong Port, Hlong LineIn, Hlong *AcqHandle )

T_open_framegrabber ( const Htuple Name,


const Htuple HorizontalResolution, const Htuple VerticalResolution,
const Htuple ImageWidth, const Htuple ImageHeight,
const Htuple StartRow, const Htuple StartColumn, const Htuple Field,
const Htuple BitsPerChannel, const Htuple ColorSpace,
const Htuple Generic, const Htuple ExternalTrigger,
const Htuple CameraType, const Htuple Device, const Htuple Port,
const Htuple LineIn, Htuple *AcqHandle )

Open and configure an image acquisition device.


The operator open_framegrabber opens and configures the chosen image acquisition device. During this
process, the connection to the image acquisition device is tested, the image acquisition device is locked for other
processes, and, if necessary, memory is reserved for the data buffers. The actual image grabbing is done via the
operators grab_image, grab_data, grab_image_async, or grab_data_async. If the image acqui-
sition device is not needed anymore, it should be closed via the operator close_framegrabber, releasing it
for other processes. Some image acquisition devices allow opening several instances of the same image acquisition
device class.
For all parameters, image acquisition device-specific default values can be chosen explicitly (see the pa-
rameter description below). Additional information for a specific image acquisition device is available via
info_framegrabber. Comprehensive documentation of all image acquisition device-specific parameters
can be found in the corresponding description file in the directory doc/html/manuals.
The meaning of the particular parameters is as follows:

HorizontalResolution, VerticalResolution Desired resolution of the image acquisition device.


ImageWidth, ImageHeight Size of the image part to be returned by grab_image etc.


StartRow, StartColumn Upper left corner of the desired image area.
Field Desired half image (’first’, ’second’, or ’next’) or selection of a full image.
BitsPerChannel Number of bits, which are transferred from the image acquisition device per pixel and image
channel (typically 5, 8, 10, 12, or 16).
ColorSpace Output color format of the grabbed images (typically ’gray’ or ’raw’ for single-channel or ’rgb’ or
’yuv’ for three-channel images).
Generic Generic parameter with device-specific meaning which can be queried by info_framegrabber.
ExternalTrigger Activation of external triggering (if available).
CameraType More detailed specification of the desired image acquisition device (typically the type of the analog
video format or the name of the desired camera configuration file).
Device Device name of the image acquisition device.
Port Port the image acquisition device is connected to.
LineIn Camera input line of multiplexer (if available).

The operator open_framegrabber returns a handle (AcqHandle) to the opened image acquisition device.
Attention
Due to the multitude of supported image acquisition devices, open_framegrabber contains a large number
of parameters. However, not all parameters are needed for a specific image acquisition device.
Parameter

. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *


HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library
(Linux/UNIX).
Default Value : "File"
Suggested values : Name ∈ {"1394IIDC", "ABS", "BaumerFCAM", "BitFlow", "DahengCAM",
"DahengFG", "DFG-LC", "DirectFile", "DirectShow", "dPict", "DT315x", "DT3162", "eneo", "eXcite",
"FALCON", "File", "FlashBusMV", "FlashBusMX", "GigEVision", "Ginga++", "GingaDG", "INSPECTA",
"INSPECTA5", "iPORT", "Leutron", "LinX", "LuCam", "MatrixVisionAcquire", "MILLite", "mEnableIII",
"mEnableIV", "mEnableVisualApplets", "MultiCam", "Opteon", "p3i2", "p3i4", "PX", "PXC", "PXD",
"PXR", "pylon", "RangerC", "RangerE", "SaperaLT", "SonyXCI", "TAG", "TWAIN", "uEye",
"VRmUsbCam"}
. HorizontalResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; (Htuple .) Hlong
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half
resolution, or 4 for quarter resolution).
Default Value : 1
Suggested values : HorizontalResolution ∈ {1, 2, 4, 1600, 1280, 768, 640, 384, 320, 192, 160, -1}
. VerticalResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; (Htuple .) Hlong
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half
resolution, or 4 for quarter resolution).
Default Value : 1
Suggested values : VerticalResolution ∈ {1, 2, 4, 1200, 1024, 576, 480, 288, 240, 144, 120, -1}
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; (Htuple .) Hlong
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn).
Default Value : 0
Suggested values : ImageWidth ∈ {0, -1}
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; (Htuple .) Hlong
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow).
Default Value : 0
Suggested values : ImageHeight ∈ {0, -1}
. StartRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; (Htuple .) Hlong
Line number of upper left corner of desired image part (or border height if ImageHeight = 0).
Default Value : 0
Suggested values : StartRow ∈ {0, -1}


. StartColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; (Htuple .) Hlong


Column number of upper left corner of desired image part (or border width if ImageWidth = 0).
Default Value : 0
Suggested values : StartColumn ∈ {0, -1}
. Field (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Desired half image or full image.
Default Value : "default"
Suggested values : Field ∈ {"first", "second", "next", "interlaced", "progressive", "default"}
. BitsPerChannel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of transferred bits per pixel and image channel (-1: device-specific default value).
Default Value : -1
Suggested values : BitsPerChannel ∈ {5, 8, 10, 12, 14, 16, -1}
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Output color format of the grabbed images, typically ’gray’ or ’raw’ for single-channel or ’rgb’ or ’yuv’ for
three-channel images (’default’: device-specific default value).
Default Value : "default"
Suggested values : ColorSpace ∈ {"gray", "raw", "rgb", "yuv", "default"}
. Generic (input_control) . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) double / const char * / Hlong
Generic parameter with device-specific meaning.
Default Value : -1
. ExternalTrigger (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
External triggering.
Default Value : "default"
List of values : ExternalTrigger ∈ {"true", "false", "default"}
. CameraType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Type of used camera (’default’: device-specific default value).
Default Value : "default"
Suggested values : CameraType ∈ {"ntsc", "pal", "auto", "default"}
. Device (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Device the image acquisition device is connected to (’default’: device-specific default value).
Default Value : "default"
Suggested values : Device ∈ {"-1", "0", "1", "2", "3", "default"}
. Port (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Port the image acquisition device is connected to (-1: device-specific default value).
Default Value : -1
Suggested values : Port ∈ {0, 1, 2, 3, -1}
. LineIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Camera input line of multiplexer (-1: device-specific default value).
Default Value : -1
Suggested values : LineIn ∈ {1, 2, 3, 4, -1}
. AcqHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong *
Handle of the opened image acquisition device.
Example

/* Select a suitable image acquisition interface name AcqName*/


info_framegrabber(AcqName,"port",&Information,&Values);
/* Choose the port P and the input line L your camera is connected to */
open_framegrabber(AcqName,1,1,0,0,0,0,"default",-1,"default",-1.0,
                  "default","default","default",P,L,&AcqHandle);
grab_image(&Image,AcqHandle);
close_framegrabber(AcqHandle);

Result
If the parameter values are correct and the desired image acquisition device could be opened,
open_framegrabber returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
open_framegrabber is processed completely exclusively without parallelization.


Possible Predecessors
info_framegrabber
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
set_framegrabber_param
See also
info_framegrabber, close_framegrabber, grab_image
Module
Foundation

T_set_framegrabber_lut ( const Htuple AcqHandle,


const Htuple ImageRed, const Htuple ImageGreen,
const Htuple ImageBlue )

Set look-up table of the image acquisition device.


The operator set_framegrabber_lut sets the look-up table (LUT) of the image acquisition device specified
by AcqHandle. Note that this operation is not supported for all kinds of image acquisition devices.
Parameter
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Htuple . Hlong
Handle of the acquisition device to be used.
. ImageRed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Red level of the LUT entries.
. ImageGreen (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Green level of the LUT entries.
. ImageBlue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Blue level of the LUT entries.
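A minimal usage sketch, not specific to any device: it assumes a device that has already been opened via
open_framegrabber into the Hlong handle AcqHandle and that accepts a LUT with 256 entries, and it uses
the tuple handling routines create_tuple, set_i, and destroy_tuple as provided by the HALCON/C interface.

/* Sketch: write an inverting LUT into all three channels (assumptions as above). */
Htuple HandleT, Red, Green, Blue;
long   i;

create_tuple(&HandleT,1);
set_i(HandleT,AcqHandle,0);          /* wrap the Hlong handle into a tuple       */
create_tuple(&Red,256);
create_tuple(&Green,256);
create_tuple(&Blue,256);
for (i=0; i<256; i++)
{
  set_i(Red,255-i,i);                /* map gray value i to 255-i                */
  set_i(Green,255-i,i);
  set_i(Blue,255-i,i);
}
T_set_framegrabber_lut(HandleT,Red,Green,Blue);
destroy_tuple(HandleT);
destroy_tuple(Red);
destroy_tuple(Green);
destroy_tuple(Blue);

Whether a LUT of this size and layout is accepted depends entirely on the specific image acquisition device.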
Result
The operator set_framegrabber_lut returns the value H_MSG_TRUE if the specified LUT is correct and
the image acquisition device is open.
Parallelization Information
set_framegrabber_lut is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, get_framegrabber_lut
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async
See also
get_framegrabber_lut, open_framegrabber
Module
Foundation

set_framegrabber_param ( Hlong AcqHandle, const char *Param,


const char *Value )

T_set_framegrabber_param ( const Htuple AcqHandle,


const Htuple Param, const Htuple Value )

Set specific parameters of an image acquisition device.


The operator set_framegrabber_param sets specific parameters for the image acquisition device specified
by AcqHandle.


Parameter
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong
Handle of the acquisition device to be used.
. Param (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Parameter name.
Suggested values : Param ∈ {"color_space", "continuous_grabbing", "external_trigger", "grab_timeout",
"image_height", "image_width", "port", "start_column", "start_row", "volatile"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / double / Hlong
Parameter value to be set.
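A minimal usage sketch, assuming a device that has already been opened into AcqHandle and that supports the
parameter names and values used here (which parameters a specific interface supports can be queried via
info_framegrabber):

/* Sketch: switch to external triggering and gray value output, then grab. */
Hobject Image;

set_framegrabber_param(AcqHandle,"external_trigger","true");
set_framegrabber_param(AcqHandle,"color_space","gray");
grab_image(&Image,AcqHandle);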
Result
If the image acquisition device is open and the specified parameter / parameter value is supported, the operator
set_framegrabber_param returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
set_framegrabber_param is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
get_framegrabber_param
See also
open_framegrabber, info_framegrabber, get_framegrabber_param
Module
Foundation

5.3 Channel
access_channel ( const Hobject MultiChannelImage, Hobject *Image,
Hlong Channel )

T_access_channel ( const Hobject MultiChannelImage, Hobject *Image,


const Htuple Channel )

Access a channel of a multichannel image.


The operator access_channel accesses a channel of the (multichannel) input image. The result is a one-
channel image. The definition domain of the input is adopted. The channels are numbered from 1 to n. The
number of channels can be determined via the operator count_channels.
Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image ; Hobject : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real / com-
plex / vector_field
Multichannel image.
. Image (output_object) . . . . . . singlechannel-image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2
/ int4 / real / complex / vector_field
One channel of MultiChannelImage.
. Channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . channel ; Hlong
Index of channel to be accessed.
Default Value : 1
Suggested values : Channel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
Typical range of values : 1 ≤ Channel
Example

read_image(&Color,"patras");        /* read the color image      */
access_channel(Color,&Red,1);       /* extract the red channel   */
disp_image(Red,WindowHandle);


Parallelization Information
access_channel is reentrant and processed without parallelization.
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
decompose2, decompose3, decompose4, decompose5
See also
count_channels
Module
Foundation

append_channel ( const Hobject MultiChannelImage, const Hobject Image,


Hobject *ImageExtended )

T_append_channel ( const Hobject MultiChannelImage,


const Hobject Image, Hobject *ImageExtended )

Append additional matrices (channels) to the image.


The operator append_channel appends the matrices (channels) of the image Image to the matrices of
MultiChannelImage. The result is an image containing as many channels as MultiChannelImage and
Image combined. The definition domain of the output image is calculated as the intersection of the definition
domains of both input images.
Parameter
. MultiChannelImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2
/ int4 / real / complex / vector_field
Multichannel image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Image to be appended.
. ImageExtended (output_object) . . . . . . multichannel-image ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Multichannel image extended by the channels of Image.
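A minimal usage sketch, based on the example image "patras" that also appears in the other examples of this
chapter: a single channel is extracted and appended again, yielding a four-channel image.

Hobject Color, Red, Extended;
Hlong   NumChannels;

read_image(&Color,"patras");            /* three-channel color image     */
access_channel(Color,&Red,1);           /* extract channel 1             */
append_channel(Color,Red,&Extended);    /* append it as a fourth channel */
count_channels(Extended,&NumChannels);  /* NumChannels is now 4          */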
Parallelization Information
append_channel is reentrant and processed without parallelization.
Possible Successors
disp_image
Alternatives
compose2, compose3, compose4, compose5
Module
Foundation

channels_to_image ( const Hobject Images, Hobject *MultiChannelImage )


T_channels_to_image ( const Hobject Images,
Hobject *MultiChannelImage )

Convert one-channel images into a multichannel image.


The operator channels_to_image converts several one-channel images into a multichannel image. The new
definition domain is the intersection of the definition domains of the input images.


Parameter

. Images (input_object) . . . . . . singlechannel-image-array ; Hobject : byte / direction / cyclic / int1 / int2 /


uint2 / int4 / real / complex / vector_field
One-channel images to be combined into a multichannel image.
. MultiChannelImage (output_object) . . . . . . multichannel-image ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4 / real / com-
plex / vector_field
Multichannel image.
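A minimal usage sketch, again based on the example image "patras": the channels of a color image are split into
an array of one-channel images and then recombined into a single multichannel image.

Hobject Color, Channels, Recombined;

read_image(&Color,"patras");
image_to_channels(Color,&Channels);       /* one image per channel          */
channels_to_image(Channels,&Recombined);  /* back to one multichannel image */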
Parallelization Information
channels_to_image is reentrant and processed without parallelization.
Possible Successors
count_channels, disp_image
Module
Foundation

compose2 ( const Hobject Image1, const Hobject Image2,


Hobject *MultiChannelImage )

T_compose2 ( const Hobject Image1, const Hobject Image2,


Hobject *MultiChannelImage )

Convert two images into a two-channel image.


The operator compose2 converts 2 one-channel images into a 2-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter

. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2


/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose2 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose2
Module
Foundation


compose3 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, Hobject *MultiChannelImage )

T_compose3 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, Hobject *MultiChannelImage )

Convert 3 images into a three-channel image.


The operator compose3 converts 3 one-channel images into a 3-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.
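A minimal usage sketch, assuming an open graphics window WindowHandle: a color image is decomposed into its
three channels and then recombined into a three-channel image.

Hobject Color, Red, Green, Blue, Rgb;

read_image(&Color,"patras");
decompose3(Color,&Red,&Green,&Blue);    /* split into the three channels     */
compose3(Red,Green,Blue,&Rgb);          /* recombine into a 3-channel image  */
disp_color(Rgb,WindowHandle);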
Parallelization Information
compose3 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose3
Module
Foundation

compose4 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4,
Hobject *MultiChannelImage )

T_compose4 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4,
Hobject *MultiChannelImage )

Convert 4 images into a four-channel image.


The operator compose4 converts 4 one-channel images into a 4-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.


. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2


/ uint2 / int4 / real / complex / vector_field
Input image 3.
. Image4 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 4.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose4 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose4
Module
Foundation

compose5 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
Hobject *MultiChannelImage )

T_compose5 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
Hobject *MultiChannelImage )

Convert 5 images into a five-channel image.


The operator compose5 converts 5 one-channel images into a 5-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. Image4 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 4.
. Image5 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 5.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.


Parallelization Information
compose5 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose5
Module
Foundation

compose6 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
const Hobject Image6, Hobject *MultiChannelImage )

T_compose6 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
const Hobject Image6, Hobject *MultiChannelImage )

Convert 6 images into a six-channel image.


The operator compose6 converts 6 one-channel images into a 6-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter
. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. Image4 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 4.
. Image5 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 5.
. Image6 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 6.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose6 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose6


Module
Foundation

compose7 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
const Hobject Image6, const Hobject Image7,
Hobject *MultiChannelImage )

T_compose7 ( const Hobject Image1, const Hobject Image2,


const Hobject Image3, const Hobject Image4, const Hobject Image5,
const Hobject Image6, const Hobject Image7,
Hobject *MultiChannelImage )

Convert 7 images into a seven-channel image.


The operator compose7 converts 7 one-channel images into a 7-channel image. The definition domain is calcu-
lated as the intersection of the definition domains of the input images.
Parameter

. Image1 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2


/ uint2 / int4 / real / complex / vector_field
Input image 1.
. Image2 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 2.
. Image3 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 3.
. Image4 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 4.
. Image5 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 5.
. Image6 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 6.
. Image7 (input_object) . . . . . . singlechannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Input image 7.
. MultiChannelImage (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / direction
/ cyclic / int1 / int2 / uint2
/ int4 / real / complex / vec-
tor_field
Multichannel image.
Parallelization Information
compose7 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose7
Module
Foundation


count_channels ( const Hobject MultiChannelImage, Hlong *Channels )


T_count_channels ( const Hobject MultiChannelImage, Htuple *Channels )

Count channels of image.


The operator count_channels counts the number of channels of all input images.
Parameter

. MultiChannelImage (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction /


cyclic / int1 / int2 / uint2 /
int4 / real / complex / vec-
tor_field
One- or multichannel image.
. Channels (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of channels.
Example

read_image(&Color,"patras");
count_channels(Color,&num_channels);
for (i=1; i<=num_channels; i++)
{
access_channel(Color,&Channel,i);
disp_image(Channel,WindowHandle);
clear_obj(Channel);
}

Parallelization Information
count_channels is reentrant and processed without parallelization.
Possible Successors
access_channel, append_channel, disp_image
See also
append_channel, access_channel
Module
Foundation

decompose2 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2 )

T_decompose2 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2 )

Convert a two-channel image into two images.


The operator decompose2 converts a 2-channel image into two one-channel images with the same definition
domain.
Parameter

. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /


cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.


. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1


/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
Parallelization Information
decompose2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose2
Module
Foundation

decompose3 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3 )

T_decompose3 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3 )

Convert a three-channel image into three images.


The operator decompose3 converts a 3-channel image into three one-channel images with the same definition
domain.
Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 3.
Parallelization Information
decompose3 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose3


Module
Foundation

decompose4 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4 )

T_decompose4 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4 )

Convert a four-channel image into four images.


The operator decompose4 converts a 4-channel image into four one-channel images with the same definition
domain.
Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 4.
Parallelization Information
decompose4 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose4
Module
Foundation

decompose5 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5 )

T_decompose5 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5 )

Convert a five-channel image into five images.


The operator decompose5 converts a 5-channel image into five one-channel images with the same definition
domain.
Parameter

. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /


cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 4.
. Image5 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 5.
Parallelization Information
decompose5 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose5
Module
Foundation

decompose6 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5,
Hobject *Image6 )

T_decompose6 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5,
Hobject *Image6 )

Convert a six-channel image into six images.


The operator decompose6 converts a 6-channel image into six one-channel images with the same definition
domain.


Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 4.
. Image5 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 5.
. Image6 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 6.
Parallelization Information
decompose6 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose6
Module
Foundation

decompose7 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5,
Hobject *Image6, Hobject *Image7 )

T_decompose7 ( const Hobject MultiChannelImage, Hobject *Image1,


Hobject *Image2, Hobject *Image3, Hobject *Image4, Hobject *Image5,
Hobject *Image6, Hobject *Image7 )

Convert a seven-channel image into seven images.


The operator decompose7 converts a 7-channel image into seven one-channel images with the same definition
domain.


Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 4.
. Image5 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 5.
. Image6 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 6.
. Image7 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1
/ int2 / uint2 / int4 / real / complex / vec-
tor_field
Output image 7.
Parallelization Information
decompose7 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose7
Module
Foundation

image_to_channels ( const Hobject MultiChannelImage, Hobject *Images )


T_image_to_channels ( const Hobject MultiChannelImage,
Hobject *Images )

Convert a multichannel image into one-channel images.


The operator image_to_channels generates a one-channel image for each channel of the multichannel image
in MultiChannelImage. The definition domain is adopted from the input image. As many images are
created as MultiChannelImage has channels.


Parameter

. MultiChannelImage (input_object) . . . . . . multichannel-image ; Hobject : byte / direction / cyclic /


int1 / int2 / uint2 / int4 / real / com-
plex / vector_field
Multichannel image to be decomposed.
. Images (output_object) . . . . . . singlechannel-image-array ; Hobject * : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Generated one-channel images.
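A minimal usage sketch, assuming an open graphics window WindowHandle; the generic iconic object routines
count_obj, select_obj, and clear_obj are used to iterate over the resulting image array.

Hobject Color, Images, OneChannel;
Hlong   Num, i;

read_image(&Color,"patras");
image_to_channels(Color,&Images);        /* one image per channel         */
count_obj(Images,&Num);                  /* number of generated images    */
for (i=1; i<=Num; i++)
{
  select_obj(Images,&OneChannel,i);      /* pick the i-th channel image   */
  disp_image(OneChannel,WindowHandle);
  clear_obj(OneChannel);
}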
Parallelization Information
image_to_channels is reentrant and processed without parallelization.
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, decompose2, decompose3, decompose4, decompose5
Module
Foundation

5.4 Creation
copy_image ( const Hobject Image, Hobject *DupImage )
T_copy_image ( const Hobject Image, Hobject *DupImage )

Copy an image and allocate new memory for it.


copy_image copies the input image into a new image with the same domain as the input image. In contrast
to HALCON operators such as copy_obj, physical copies of all channels are created. This can be used, for
example, to modify the gray values of the new image (see get_image_pointer1).
Parameter

. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image to be copied.
. DupImage (output_object) . . . . . . (multichannel-)image ; Hobject * : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Copied image.
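A minimal usage sketch: a physical copy of an image is created with copy_image, and only the copy is then
modified via a pointer obtained from get_image_pointer1, leaving the original image untouched.

Hobject Image, DupImage;
unsigned char *pointer;
char    type[128];
Hlong   width, height;

gen_image_const(&Image,"byte",512,512);  /* source image                   */
copy_image(Image,&DupImage);             /* physical copy of all channels  */
get_image_pointer1(DupImage,(long*)&pointer,type,&width,&height);
pointer[0] = 255;                        /* modifies only the copy         */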
Parallelization Information
copy_image is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const
Possible Successors
set_grayval, get_image_pointer1
Alternatives
set_grayval, paint_gray, gen_image_const, gen_image_proto
See also
get_image_pointer1
Module
Foundation


gen_image1 ( Hobject *Image, const char *Type, Hlong Width,


Hlong Height, Hlong PixelPointer )

T_gen_image1 ( Hobject *Image, const Htuple Type, const Htuple Width,


const Htuple Height, const Htuple PixelPointer )

Create an image from a pointer to the pixels.


The operator gen_image1 creates an image of the size Width × Height. The pixels in PixelPointer are
stored line-sequentially. The type of the given pixels (PixelPointer) must correspond to Type. The memory
for the new image is newly allocated by HALCON and the pixel data is copied. Thus, the memory referenced by
PixelPointer can be released after the call. Since the type of the parameter PixelPointer is generic
(long), a cast has to be used for the call.
Parameter

. Image (output_object) . . . . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first gray value.
Example

void NewImage(Hobject *new)


{
unsigned char image[768*525];
int r,c;
for (r=0; r<525; r++)
for (c=0; c<768; c++)
image[r*768+c] = c % 255;
gen_image1(new,"byte",768,525,(long)image);
}

Result
If the parameter values are correct, the operator gen_image1 returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
gen_image1 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1


Alternatives
gen_image3, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation

gen_image1_extern ( Hobject *Image, const char *Type, Hlong Width,


Hlong Height, Hlong PixelPointer, Hlong ClearProc )

T_gen_image1_extern ( Hobject *Image, const Htuple Type,


const Htuple Width, const Htuple Height, const Htuple PixelPointer,
const Htuple ClearProc )

Create an image from a pointer to the pixels with storage management.


The operator gen_image1_extern creates an image of the size Width × Height. The pixels in
PixelPointer are stored line-sequentially. The type of the given pixels (PixelPointer) must correspond
to Type. Since the type of the parameter PixelPointer is generic (long), a cast must be used for the call.
In contrast to gen_image1, the memory for the new image is not newly allocated by HALCON, and thus the
pixel data is not copied either. This means that the memory space that PixelPointer points to must be
released when the object Image is deleted. This is done by the procedure ClearProc provided by the caller.
This procedure must have the following signature
void ClearProc(void* ptr);
and will be called using the __cdecl calling convention when Image is deleted. If the memory should not be
released (in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer
can be passed. Analogous to the parameter PixelPointer, this procedure pointer has to be passed by casting
it to long.
Parameter

. Image (output_object) . . . . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created HALCON image.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the first gray value.
. ClearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the procedure re-releasing the memory of the image when deleting the object.
Default Value : 0


Example

void NewImage(Hobject *new)


{
unsigned char *image;
int r,c;
image = malloc(640*480);
for (r=0; r<480; r++)
for (c=0; c<640; c++)
image[r*640+c] = c % 255;
gen_image1_extern(new,"byte",640,480,(long)image,(long)free);
}

Result
The operator gen_image1_extern returns the value H_MSG_TRUE if the parameter values are correct. Oth-
erwise an exception handling is raised.
Parallelization Information
gen_image1_extern is reentrant and processed without parallelization.
Alternatives
gen_image1, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation

gen_image1_rect ( Hobject *Image, Hlong PixelPointer, Hlong Width,


Hlong Height, Hlong VerticalPitch, Hlong HorizontalBitPitch,
Hlong BitsPerPixel, const char *DoCopy, Hlong ClearProc )

T_gen_image1_rect ( Hobject *Image, const Htuple PixelPointer,


const Htuple Width, const Htuple Height, const Htuple VerticalPitch,
const Htuple HorizontalBitPitch, const Htuple BitsPerPixel,
const Htuple DoCopy, const Htuple ClearProc )

Create an image with a rectangular domain from a pointer to the pixels (with storage management).
The operator gen_image1_rect creates an image of size (VerticalPitch/(HorizontalBitPitch /
8)) * Height. The pixels pointed to by PixelPointer are stored line by line. Since the type of the parameter
PixelPointer is generic (long) a cast must be used for the call. VerticalPitch determines the distance
(in bytes) between pixel m in row n and pixel m in row n+1 inside of memory. All rows of the ’input image’ have
the same vertical pitch. The width of the output image equals VerticalPitch / (HorizontalBitPitch /
8). The height of input and output image are equal. The domain of the output image Image is a rectangle of the
size Width * Height. The parameter HorizontalBitPitch is the horizontal distance (in bits) between two
neighbouring pixels. BitsPerPixel is the number of used bits per pixel.
If DoCopy is set ’true’, the image data pointed to by PixelPointer is copied and memory for the new image is
newly allocated by HALCON . Else the image data is not duplicated and the memory space that PixelPointer
points to must be released when deleting the object Image. This is done by the procedure ClearProc provided
by the caller. This procedure must have the following signature
void ClearProc(void* ptr);
and will be called using the __cdecl calling convention when Image is deleted. If the memory should not be
released (in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer
can be passed. Analogously to the parameter PixelPointer, this procedure pointer has to be passed by casting
it to long. If DoCopy is ’true’, ClearProc is irrelevant. The operator gen_image1_rect is symmetrical to
get_image_pointer1_rect.


Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2 / int4
Created HALCON image.
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the first pixel.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. VerticalPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Distance (in bytes) between pixel m in row n and pixel m in row n+1 of the ’input image’.
Restriction : VerticalPitch ≥ (Width · (HorizontalBitPitch/8))
. HorizontalBitPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Distance between two neighbouring pixels in bits.
Default Value : 8
List of values : HorizontalBitPitch ∈ {8, 16, 32}
. BitsPerPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of used bits per pixel.
Default Value : 8
List of values : BitsPerPixel ∈ {8, 9, 10, 11, 12, 13, 14, 15, 16, 32}
Restriction : BitsPerPixel ≤ HorizontalBitPitch
. DoCopy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Copy image data.
Default Value : "false"
Suggested values : DoCopy ∈ {"true", "false"}
. ClearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the procedure releasing the memory of the image when deleting the object.
Default Value : 0
Example

void NewImage(Hobject *new)


{
unsigned char *image;
int r,c;

image = malloc(640*480);
for (r=0; r<480; r++)
for (c=0; c<640; c++)
image[r*640+c] = c % 255;
gen_image1_rect(new,(long)image,400,480,640,8,8,"false",(long)free);
}

Result
The operator gen_image1_rect returns the value H_MSG_TRUE if the parameter values are correct. Other-
wise an exception handling is raised.


Parallelization Information
gen_image1_rect is reentrant and processed without parallelization.
Possible Successors
get_image_pointer1_rect
Alternatives
gen_image1, gen_image1_extern
See also
get_image_pointer1_rect
Module
Foundation

gen_image3 ( Hobject *ImageRGB, const char *Type, Hlong Width,


Hlong Height, Hlong PixelPointerRed, Hlong PixelPointerGreen,
Hlong PixelPointerBlue )

T_gen_image3 ( Hobject *ImageRGB, const Htuple Type, const Htuple Width,


const Htuple Height, const Htuple PixelPointerRed,
const Htuple PixelPointerGreen, const Htuple PixelPointerBlue )

Create an image from three pointers to the pixels (red/green/blue).


The operator gen_image3 creates a three-channel image of the size Width × Height. The pixels in
PixelPointerRed, PixelPointerGreen, and PixelPointerBlue are stored line-sequentially. The
type of the given pixels (PixelPointerRed etc.) must correspond to Type. The memory for the new image
is newly allocated by HALCON and the pixel data is copied. Thus, the memory referenced by the pointers can be
released after the call. Since the type of the parameters (PixelPointerRed etc.) is generic (long), a cast
must be used for the call.
Parameter

. ImageRGB (output_object) . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. PixelPointerRed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first red value (channel 1).
. PixelPointerGreen (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first green value (channel 2).
. PixelPointerBlue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first blue value (channel 3).
Example


void NewRGBImage(Hobject *new)


{
unsigned char red[768*525];
unsigned char green[768*525];
unsigned char blue[768*525];
int r,c;
for (r=0; r<525; r++)
for (c=0; c<768; c++)
{
red[r*768+c] = c % 255;
green[r*768+c] = (767 - c) % 255;
blue[r*768+c] = r % 255;
}
gen_image3(new,"byte",768,525,(long)red,(long)green,(long)blue);
}

main()
{
Hobject rgb;
open_window(0,0,768,525,0,"","",&WindowHandle);
NewRGBImage(&rgb);
disp_color(rgb,WindowHandle);
clear_obj(rgb);
}

Result
If the parameter values are correct, the operator gen_image3 returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
gen_image3 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1
Possible Successors
disp_color
Alternatives
gen_image1, compose3, gen_image_const
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1,
decompose3
Module
Foundation

gen_image_const ( Hobject *Image, const char *Type, Hlong Width,


Hlong Height )

T_gen_image_const ( Hobject *Image, const Htuple Type,


const Htuple Width, const Htuple Height )

Create an image with constant gray value.


The operator gen_image_const creates an image of the indicated size. The height and width of the image are
determined by Height and Width. HALCON supports the following image types:

’byte’ 1 byte per pixel (0..255)


’int1’ 1 byte per pixel (-127..127)
’int2’ 2 bytes per pixel (-32767..32767)


’uint2’ 2 bytes per pixel (0..65535)


’int4’ 4 bytes per pixel (-2147483647..2147483647)
’real’ 4 bytes per pixel, floating point
’complex’ two matrices of the type real
’vector_field’ two matrices of type real
’direction’ 1 byte per pixel (0..180)
’cyclic’ 1 byte per pixel; cyclic arithmetics (0..255).

By default, all pixels are initialized with the gray value 0; whether new images are initialized can be controlled via
the operator set_system(’init_new_image’,<’true’/’false’>).


Parameter
. Image (output_object) . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real", "complex",
"vector_field"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Example

Hobject New;
unsigned char *pointer;
char type[128];
Hlong width = 512, height = 512, row, col;
gen_image_const(&New,"byte",width,height);
get_image_pointer1(New,(long*)&pointer,type,&width,&height);
for (row=0; row<height; row++)
  for (col=0; col<width; col++)
    pointer[row*width+col] = (row + col) % 256;

Result
If the parameter values are correct, the operator gen_image_const returns the value H_MSG_TRUE. Other-
wise an exception handling is raised.
Parallelization Information
gen_image_const is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain, get_image_pointer1, copy_obj
Alternatives
gen_image1, gen_image3
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1
Module
Foundation


gen_image_gray_ramp ( Hobject *ImageGrayRamp, double Alpha,


double Beta, double Mean, Hlong Row, Hlong Column, Hlong Width,
Hlong Height )

T_gen_image_gray_ramp ( Hobject *ImageGrayRamp, const Htuple Alpha,


const Htuple Beta, const Htuple Mean, const Htuple Row,
const Htuple Column, const Htuple Width, const Htuple Height )

Create a gray value ramp.


The operator gen_image_gray_ramp creates a gray value ramp according to the following equation:

ImageGrayRamp(r, c) = Alpha(r − Row) + Beta(c − Column) + Mean

The size of the image is determined by Width and Height. The gray values are of the type byte. Gray values
outside the valid area are clipped.
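For illustration, a minimal call sketch (the parameter values are arbitrary examples) could look like this:

  Hobject Ramp;
  /* ramp that increases by 0.5 per column, gray value 128 at the reference point (256,256) */
  gen_image_gray_ramp(&Ramp,0.0,0.5,128.0,256,256,512,512);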
Parameter

. ImageGrayRamp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte


Created image with new image matrix.
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Gradient in line direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Gradient in column direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Mean (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Mean gray value.
Default Value : 128
Suggested values : Mean ∈ {0, 20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 255}
Minimum Increment : 1
Recommended Increment : 10
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Line index of reference point.
Default Value : 256
Suggested values : Row ∈ {128, 256, 512, 1024}
Minimum Increment : 1
Recommended Increment : 10
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of reference point.
Default Value : 256
Suggested values : Column ∈ {128, 256, 512, 1024}
Minimum Increment : 1
Recommended Increment : 10
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong


Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Result
If the parameter values are correct gen_image_gray_ramp returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
gen_image_gray_ramp is reentrant and processed without parallelization.
Possible Predecessors
moments_gray_plane
Possible Successors
paint_region, reduce_domain, get_image_pointer1, copy_obj
Alternatives
gen_image1
See also
reduce_domain, paint_gray
Module
Foundation

gen_image_interleaved ( Hobject *ImageRGB, Hlong PixelPointer,


const char *ColorFormat, Hlong OriginalWidth, Hlong OriginalHeight,
Hlong Alignment, const char *Type, Hlong ImageWidth,
Hlong ImageHeight, Hlong StartRow, Hlong StartColumn,
Hlong BitsPerChannel, Hlong BitShift )

T_gen_image_interleaved ( Hobject *ImageRGB,


const Htuple PixelPointer, const Htuple ColorFormat,
const Htuple OriginalWidth, const Htuple OriginalHeight,
const Htuple Alignment, const Htuple Type, const Htuple ImageWidth,
const Htuple ImageHeight, const Htuple StartRow,
const Htuple StartColumn, const Htuple BitsPerChannel,
const Htuple BitShift )

Create a three-channel image from a pointer to the interleaved pixels.


The operator gen_image_interleaved creates a three-channel image from an input image, whose pixels are
stored line-sequentially in PixelPointer. The size of the input image has to be passed in OriginalWidth
and OriginalHeight, the format of the interleaved pixels in ColorFormat.
The output image will be sized ImageWidth × ImageHeight. Together with the coordinates of upper left
corner StartRow and StartColumn any section of the input image can be extracted. When a 0 is passed to
ImageWidth, ImageHeight, StartRow, and StartColumn, the output image will have the same dimen-
sions as the input image.
Note that the image type Type of the output image ImageRGB has to be chosen such that the whole range of
possible color values of the input image can be represented. For example, gen_image_interleaved does not
allow the creation of a byte image from an input image with ColorFormat rgb48.
When the formats rgb48, bgr48, rgbx64, and bgrx64 do not use the full range of 16 bits per channel and pixel, the
number of actually used bits should be passed in BitsPerChannel. Furthermore, the pixel values of the input
image can be shifted by BitShift bits to the right.
The storage for the new image is newly allocated by HALCON; thus, the memory passed via PixelPointer can be
released after the call. Since the parameter PixelPointer is of the generic type long, a cast must be used in the call.


Possible values for ColorFormat:

’rgb555’: 16 bit rgb triple (5 bit per pixel and channel)


’bgr555’: 16 bit bgr triple (5 bit per pixel and channel)
’rgb565’: 16 bit rgb triple (5 bit per pixel and channel, 6 bit for the green channel)
’bgr565’: 16 bit bgr triple (5 bit per pixel and channel, 6 bit for the green channel)
’rgb’: 24 bit rgb triple (8 bit per pixel and channel)
’bgr’: 24 bit bgr triple (8 bit per pixel and channel)
’rgbx’: 32 bit rgb quadruple (8 bit per pixel and channel)
’bgrx’: 32 bit bgr quadruple (8 bit per pixel and channel)
’rgb48’: 48 bit rgb triple (16 bit per pixel and channel)
’bgr48’: 48 bit bgr triple (16 bit per pixel and channel)
’rgbx64’: 64 bit rgb quadruple (16 bit per pixel and channel)
’bgrx64’: 64 bit bgr quadruple (16 bit per pixel and channel)
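A minimal call sketch (buffer size, fill code, and color format are assumed example values) could look like this:

  static unsigned char buf[640*480*3];   /* interleaved 8 bit RGB pixels, e.g., from a frame grabber */
  Hobject              Rgb;
  /* ... fill buf with interleaved R,G,B triples ... */
  gen_image_interleaved(&Rgb,(long)buf,"rgb",640,480,0,"byte",
                        0,0,0,0,-1,0);   /* full image, all bits used, no bit shift */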

Parameter

. ImageRGB (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2


Created image with new image matrix.
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to interleaved pixels.
. ColorFormat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Format of the input pixels.
Default Value : "rgb"
List of values : ColorFormat ∈ {"rgb", "bgr", "rgbx", "bgrx", "rgb48", "bgr48", "rgbx64", "bgrx64",
"rgb555", "bgr555", "rgb565", "bgr565"}
. OriginalWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of input image.
Default Value : 512
Suggested values : OriginalWidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ OriginalWidth (lin)
Minimum Increment : 1
Recommended Increment : 10
. OriginalHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of input image.
Default Value : 512
Suggested values : OriginalHeight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ OriginalHeight (lin)
Minimum Increment : 1
Recommended Increment : 10
. Alignment (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Reserved.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type of output image.
Default Value : "byte"
List of values : Type ∈ {"byte", "uint2"}
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong
Width of output image.
Default Value : 0
Suggested values : ImageWidth ∈ {128, 256, 512, 1024}
Typical range of values : 0 ≤ ImageWidth (lin)
Minimum Increment : 1
Recommended Increment : 10


. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong


Height of output image.
Default Value : 0
Suggested values : ImageHeight ∈ {128, 256, 512, 1024}
Typical range of values : 0 ≤ ImageHeight (lin)
Minimum Increment : 1
Recommended Increment : 10
. StartRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Line number of upper left corner of desired image part.
Default Value : 0
Suggested values : StartRow ∈ {-1, 0}
. StartColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column number of upper left corner of desired image part.
Default Value : 0
Suggested values : StartColumn ∈ {-1, 0}
. BitsPerChannel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of used bits per pixel and channel of the output image (-1: All bits are used).
Default Value : -1
Suggested values : BitsPerChannel ∈ {5, 8, 10, 12, 16, -1}
. BitShift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of bits that the color values of the input pixels are shifted to the right (only uint2 images).
Default Value : 0
Suggested values : BitShift ∈ {0, 2, 4, 6}
Result
If the parameter values are correct, the operator gen_image_interleaved returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
gen_image_interleaved is reentrant and processed without parallelization.
Possible Successors
disp_color
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation

gen_image_proto ( const Hobject Image, Hobject *ImageCleared,


double Grayval )

T_gen_image_proto ( const Hobject Image, Hobject *ImageCleared,


const Htuple Grayval )

Create an image with a specified constant gray value.


gen_image_proto creates an output image ImageCleared with the constant gray value Grayval.
ImageCleared has the same dimensions and pixel type as the input image Image.
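A minimal sketch (the file name "fabrik" refers to the example image also used elsewhere in this manual) could look like this:

  Hobject Image,Cleared;
  read_image(&Image,"fabrik");
  gen_image_proto(Image,&Cleared,0.0);   /* same size and type as Image, all pixels set to 0 */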
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. ImageCleared (output_object) . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image with constant gray value.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Gray value to be used for the output image.
Default Value : 0
Suggested values : Grayval ∈ {0, 1, 2, 5, 10, 16, 32, 64, 128, 253, 254, 255}


Result
gen_image_proto returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
gen_image_proto is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
set_grayval, paint_gray, gen_image_const, copy_image
See also
get_image_pointer1
Module
Foundation

gen_image_surface_first_order ( Hobject *ImageSurface,


const char *Type, double Alpha, double Beta, double Gamma, double Row,
double Col, Hlong Width, Hlong Height )

T_gen_image_surface_first_order ( Hobject *ImageSurface,


const Htuple Type, const Htuple Alpha, const Htuple Beta,
const Htuple Gamma, const Htuple Row, const Htuple Col,
const Htuple Width, const Htuple Height )

Create a curved gray surface with first order polynomial.


The operator gen_image_surface_first_order creates a gray value surface according to the
following equation:

ImageSurface(r, c) = Alpha(r − Row) + Beta(c − Col) + Gamma

The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
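For illustration, a minimal call sketch (the coefficients are arbitrary example values) could look like this:

  Hobject Plane;
  /* tilted plane: slope 0.2 per row, -0.1 per column, gray value 100 at (256,256) */
  gen_image_surface_first_order(&Plane,"real",0.2,-0.1,100.0,256.0,256.0,512,512);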
Parameter
. ImageSurface (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "uint2", "real"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in vertical direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in horizontal direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Zero order coefficient.
Default Value : 1.0
Suggested values : Gamma ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005


. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double


Line coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Row ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Column coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Col ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Result
If the parameter values are correct gen_image_surface_first_order returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
gen_image_surface_first_order is reentrant and processed without parallelization.
Possible Predecessors
fit_surface_first_order
Possible Successors
sub_image
See also
gen_image_gray_ramp, gen_image_surface_second_order
Module
Foundation

gen_image_surface_second_order ( Hobject *ImageSurface,


const char *Type, double Alpha, double Beta, double Gamma,
double Delta, double Epsilon, double Zeta, double Row, double Col,
Hlong Width, Hlong Height )

T_gen_image_surface_second_order ( Hobject *ImageSurface,


const Htuple Type, const Htuple Alpha, const Htuple Beta,
const Htuple Gamma, const Htuple Delta, const Htuple Epsilon,
const Htuple Zeta, const Htuple Row, const Htuple Col,
const Htuple Width, const Htuple Height )

Create a curved gray surface with second order polynomial.


The operator gen_image_surface_second_order creates a curved gray value surface according to the
following equation:


ImageSurface(r, c) = Alpha(r − Row)² + Beta(c − Col)² + Gamma(r − Row)(c − Col) + Delta(r − Row) + Epsilon(c − Col) + Zeta

The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
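For illustration, a minimal call sketch (the coefficients are arbitrary example values) could look like this:

  Hobject Surface;
  /* paraboloid opening upwards with gray value 128 at its apex (256,256) */
  gen_image_surface_second_order(&Surface,"real",0.001,0.001,0.0,
                                 0.0,0.0,128.0,256.0,256.0,512,512);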
Parameter

. ImageSurface (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2 / real


Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "uint2", "real"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Second order coefficient in vertical direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Second order coefficient in horizontal direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Mixed second order coefficient.
Default Value : 1.0
Suggested values : Gamma ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Delta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in vertical direction.
Default Value : 1.0
Suggested values : Delta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in horizontal direction.
Default Value : 1.0
Suggested values : Epsilon ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Zeta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Zero order coefficient.
Default Value : 1.0
Suggested values : Zeta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Line coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Row ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005


. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double


Column coordinate of the apex of the surface.
Default Value : 256.0
Suggested values : Col ∈ {0.0, 128.0, 256.0, 512.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Result
If the parameter values are correct gen_image_surface_second_order returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
gen_image_surface_second_order is reentrant and processed without parallelization.
Possible Predecessors
fit_surface_second_order
Possible Successors
sub_image
See also
gen_image_gray_ramp, gen_image_surface_first_order
Module
Foundation

region_to_bin ( const Hobject Region, Hobject *BinImage,


Hlong ForegroundGray, Hlong BackgroundGray, Hlong Width,
Hlong Height )

T_region_to_bin ( const Hobject Region, Hobject *BinImage,


const Htuple ForegroundGray, const Htuple BackgroundGray,
const Htuple Width, const Htuple Height )

Convert a region into a binary byte-image.


region_to_bin converts the input region given in Region into a byte-image and assigns a gray value of
ForegroundGray to all pixels in the region. If the input region is larger than the generated image, it is clipped
at the image borders. The background is set to BackgroundGray.
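A minimal sketch (threshold values and image size are assumed examples; "fabrik" is the example image used elsewhere in this manual) could look like this:

  Hobject Image,Region,BinImage;
  read_image(&Image,"fabrik");
  threshold(Image,&Region,128,255);
  region_to_bin(Region,&BinImage,255,0,512,512);   /* white region on black background */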
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be converted.
. BinImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Result image of dimension Width × Height containing the converted regions.


. ForegroundGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Gray value in which the regions are displayed.
Default Value : 255
Suggested values : ForegroundGray ∈ {0, 1, 50, 100, 128, 150, 200, 254, 255}
Typical range of values : 0 ≤ ForegroundGray ≤ 255 (lin)
Recommended Increment : 1
. BackgroundGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Gray value in which the background is displayed.
Default Value : 0
Suggested values : BackgroundGray ∈ {0, 1, 50, 100, 128, 150, 200, 254, 255}
Typical range of values : 0 ≤ BackgroundGray ≤ 255 (lin)
Recommended Increment : 1
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image to be generated.
Default Value : 512
Suggested values : Width ∈ {256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image to be generated.
Default Value : 512
Suggested values : Height ∈ {256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
Restriction : Height ≥ 1
Complexity
O(2 ∗ Height ∗ Width).
Result
region_to_bin always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
region_to_bin is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
get_grayval
Alternatives
region_to_label, paint_region, set_grayval
See also
gen_image_proto, paint_gray
Module
Foundation

region_to_label ( const Hobject Region, Hobject *ImageLabel,


const char *Type, Hlong Width, Hlong Height )

T_region_to_label ( const Hobject Region, Hobject *ImageLabel,


const Htuple Type, const Htuple Width, const Htuple Height )

Convert regions to a label image.


region_to_label converts the input regions into a label image according to their index (1..n), i.e., the first
region is painted with the gray value 1, the second the gray value 2, etc. Only positive gray values are used. For
byte-images the index is entered modulo 256.
Regions larger than the generated image are clipped appropriately. If regions overlap, the regions with the higher
index are entered (i.e., they are painted in the order in which they are contained in the input regions). If so desired,
the regions can be made non-overlapping by calling expand_region.
The background, i.e., the area not covered by any regions, is set to 0. This can be used to test in which image range
no region is present.
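A minimal sketch (threshold values and image size are assumed examples) could look like this:

  Hobject Image,Region,Connected,Label;
  read_image(&Image,"fabrik");
  threshold(Image,&Region,128,255);
  connection(Region,&Connected);
  region_to_label(Connected,&Label,"int2",512,512);   /* region i is painted with gray value i */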
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be converted.
. ImageLabel (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2 / int4
Result image of dimension Width × Height containing the converted regions.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type of the result image.
Default Value : "int2"
List of values : Type ∈ {"byte", "int2", "int4"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image to be generated.
Default Value : 512
Suggested values : Width ∈ {64, 128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image to be generated.
Default Value : 512
Suggested values : Height ∈ {64, 128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
Restriction : Height ≥ 1
Complexity
O(2 ∗ Height ∗ Width).
Result
region_to_label always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
region_to_label is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection, expand_region
Possible Successors
get_grayval, get_image_pointer1
Alternatives
region_to_bin, paint_region
See also
label_to_region
Module
Foundation


region_to_mean ( const Hobject Regions, const Hobject Image,


Hobject *ImageMean )

T_region_to_mean ( const Hobject Regions, const Hobject Image,


Hobject *ImageMean )

Paint regions with their average gray value.


region_to_mean returns an image in which the regions Regions are painted with their average gray value
based on the image Image. This operator is mainly intended to visualize segmentation results.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Original gray value image.
. ImageMean (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Result image with painted regions.
Example

read_image(&Image,"fabrik");
region_growing(Image,&Regions,3,3,6,100);
region_to_mean(Regions,Image,&Disp);
disp_image(Disp,WindowHandle);
set_draw(WindowHandle,"margin");
set_color(WindowHandle,"black");
disp_region(Regions,WindowHandle);

Result
region_to_mean returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
region_to_mean is reentrant and processed without parallelization.
Possible Predecessors
regiongrowing, connection
Possible Successors
disp_image
Alternatives
paint_region, intensity
Module
Foundation

5.5 Domain
add_channels ( const Hobject Regions, const Hobject Image,
Hobject *GrayRegions )

T_add_channels ( const Hobject Regions, const Hobject Image,


Hobject *GrayRegions )

Add gray values to regions.


The operator add_channels adds the gray values from Image to the regions in Regions. All channels of
Image are adopted. The definition domain is calculated as the intersection of the definition domain of the image with
the region. Thus the new definition domain can be a subset of the input region. The size of the matrix is not
changed.
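A minimal sketch (threshold values are assumed examples) could look like this:

  Hobject Image,Regions,GrayRegions;
  read_image(&Image,"fabrik");
  threshold(Image,&Regions,128,255);
  add_channels(Regions,Image,&GrayRegions);   /* the regions now carry the gray values of Image */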


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions (without gray values).
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Gray image for regions.
. GrayRegions (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Regions with gray values (also gray images).
Number of elements : Regions = GrayRegions
Parallelization Information
add_channels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, gen_circle, draw_region
Possible Successors
threshold, regiongrowing, get_domain
Alternatives
change_domain, reduce_domain
See also
full_domain, get_domain, intersection
Module
Foundation

change_domain ( const Hobject Image, const Hobject NewDomain,


Hobject *ImageNew )

T_change_domain ( const Hobject Image, const Hobject NewDomain,


Hobject *ImageNew )

Change definition domain of an image.


The operator change_domain uses the indicated region as the new definition domain. Unlike the operator
reduce_domain, it does not form the intersection with the previous definition domain; the size of the matrix
is not changed. This implies in particular that the region must not exceed the image matrix, otherwise using such
inconsistent iconic objects during subsequent operations will likely lead to errors or system crashes.
Attention
Due to running time the transferred region is not checked for consistency (i.e., whether it fits with the image
matrix). Incorrect regions lead to system hang-ups during subsequent operations.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. NewDomain (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
New definition domain.
. ImageNew (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image with new definition domain.
Parallelization Information
change_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
reduce_domain


See also
full_domain, get_domain, intersection
Module
Foundation

full_domain ( const Hobject Image, Hobject *ImageFull )


T_full_domain ( const Hobject Image, Hobject *ImageFull )

Expand the domain of an image to maximum.


The operator full_domain enters a rectangle with the edge lengths of the image as the new definition domain. This
means that all pixels of the matrix are included in further operations. Thus the same definition domain is obtained
as by reading or generating an image. The size of the matrix is not changed.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. ImageFull (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2
/ int4 / real / complex / vector_field
Image with maximum definition domain.
Parallelization Information
full_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
change_domain, reduce_domain
See also
get_domain, gen_rectangle1
Module
Foundation

get_domain ( const Hobject Image, Hobject *Domain )


T_get_domain ( const Hobject Image, Hobject *Domain )

Get the domain of an image.


The operator get_domain returns the definition domains of all input images as a region.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input images.
. Domain (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Definition domains of input images.
Parallelization Information
get_domain is reentrant and automatically parallelized (on tuple level).
Possible Successors
change_domain, reduce_domain, full_domain
See also
get_domain, change_domain, reduce_domain, full_domain
Module
Foundation


rectangle1_domain ( const Hobject Image, Hobject *ImageReduced,


Hlong Row1, Hlong Column1, Hlong Row2, Hlong Column2 )

T_rectangle1_domain ( const Hobject Image, Hobject *ImageReduced,


const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2 )

Reduce the domain of an image to a rectangle.


The operator rectangle1_domain reduces the definition domain of the given image to the specified rectangle.
The old domain of the input image is ignored. The size of the matrix is not changed.
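A minimal sketch (the rectangle coordinates are assumed example values) could look like this:

  Hobject Image,Reduced;
  read_image(&Image,"fabrik");
  rectangle1_domain(Image,&Reduced,100,100,200,300);   /* restrict processing to this rectangle */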
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. ImageReduced (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Image with reduced definition domain.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Line index of upper left corner of image area.
Default Value : 100
Suggested values : Row1 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Row1 ≤ 1024
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of upper left corner of image area.
Default Value : 100
Suggested values : Column1 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Column1 ≤ 1024
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Line index of lower right corner of image area.
Default Value : 200
Suggested values : Row2 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Row2 ≤ 1024
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of lower right corner of image area.
Default Value : 200
Suggested values : Column2 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Column2 ≤ 1024
Parallelization Information
rectangle1_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
change_domain, reduce_domain, add_channels
See also
full_domain, get_domain, intersection
Module
Foundation

reduce_domain ( const Hobject Image, const Hobject Region,


Hobject *ImageReduced )

T_reduce_domain ( const Hobject Image, const Hobject Region,


Hobject *ImageReduced )

Reduce the domain of an image.


The operator reduce_domain reduces the definition domain of the given image to the indicated region. The
new definition domain is calculated as the intersection of the old definition domain with the region. Thus, the new
definition domain can be a subset of the region. The size of the matrix is not changed.
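A minimal sketch (threshold values are assumed examples) could look like this:

  Hobject Image,Region,Reduced;
  read_image(&Image,"fabrik");
  threshold(Image,&Region,128,255);
  reduce_domain(Image,Region,&Reduced);   /* subsequent operators only process the selected pixels */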
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
New definition domain.
. ImageReduced (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Image with reduced definition domain.
Parallelization Information
reduce_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
change_domain, rectangle1_domain, add_channels
See also
full_domain, get_domain, intersection
Module
Foundation

5.6 Features
area_center_gray ( const Hobject Regions, const Hobject Image,
double *Area, double *Row, double *Column )
T_area_center_gray ( const Hobject Regions, const Hobject Image,
Htuple *Area, Htuple *Row, Htuple *Column )

Compute the area and center of gravity of a region in a gray value image.
area_center_gray computes the area and center of gravity of the regions Regions that have gray values
which are defined by the image Image. This operator is similar to area_center, but in contrast to that
operator, the gray values of the image are taken into account while computing the area and center of gravity.
The area A of a region R in the image with the gray values g(r, c) is defined as

    A = \sum_{(r,c) \in R} g(r, c) .

This means that the area is defined by the volume of the gray value function g(r, c). The center of gravity is defined
by the first two normalized moments of the gray values g(r, c), i.e., by (m_{1,0}, m_{0,1}), where

    m_{p,q} = \frac{1}{A} \sum_{(r,c) \in R} r^p c^q g(r, c) .
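A minimal sketch (threshold values are assumed examples) could look like this:

  Hobject Image,Region;
  double  Area,Row,Column;
  read_image(&Image,"fabrik");
  threshold(Image,&Region,128,255);
  area_center_gray(Region,Image,&Area,&Row,&Column);   /* gray value volume and center of gravity */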

Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Image (input_object) . . . . . . singlechannel-image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real
Gray value image.
. Area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Gray value volume of the region.


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *


Row coordinate of the gray value center of gravity.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinate of the gray value center of gravity.
Result
area_center_gray returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
area_center_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
area_center
See also
area_center_xld, elliptic_axis_gray
Module
Foundation

cooc_feature_image ( const Hobject Regions, const Hobject Image,


Hlong LdGray, Hlong Direction, double *Energy, double *Correlation,
double *Homogeneity, double *Contrast )

T_cooc_feature_image ( const Hobject Regions, const Hobject Image,


const Htuple LdGray, const Htuple Direction, Htuple *Energy,
Htuple *Correlation, Htuple *Homogeneity, Htuple *Contrast )

Calculate a co-occurrence matrix and derive gray value features thereof.


The call of cooc_feature_image corresponds to the consecutive execution of the operators
gen_cooc_matrix and cooc_feature_matrix. If several direction matrices of the co-occurrence matrix
are to be evaluated consecutively, it is more efficient to generate the matrix via gen_cooc_matrix and then
call the operator cooc_feature_matrix for the resulting matrix. The parameter Direction specifies the
direction of the neighborhood, either as an angle or as ’mean’. In the case of ’mean’, the mean over all four directions
is calculated.
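A minimal sketch (LdGray = 6 and Direction = 0 are the default values; here the whole image domain is examined) could look like this:

  Hobject Image,Region;
  double  Energy,Correlation,Homogeneity,Contrast;
  read_image(&Image,"fabrik");
  get_domain(Image,&Region);
  cooc_feature_image(Region,Image,6,0,&Energy,&Correlation,&Homogeneity,&Contrast);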
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be examined.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Corresponding gray values.
. LdGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of gray values to be distinguished (2LdGray ).
Default Value : 6
List of values : LdGray ∈ {1, 2, 3, 4, 5, 6, 7, 8}
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong / const char *
Direction in which the matrix is to be calculated.
Default Value : 0
List of values : Direction ∈ {0, 45, 90, 135, "mean"}
. Energy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Gray value energy.
. Correlation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Correlation of gray values.
. Homogeneity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Local homogeneity of gray values.


. Contrast (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *


Gray value contrast.
Result
The operator cooc_feature_image returns the value H_MSG_TRUE if an image with defined gray values
(byte) is entered and the parameters are correct. The behavior in case of empty input (no input images available) is
set via the operator set_system(’no_object_result’,<Result>), the behavior in case of empty re-
gion is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling
is raised.
Parallelization Information
cooc_feature_image is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
gen_cooc_matrix
Alternatives
cooc_feature_matrix
See also
intensity, min_max_gray, entropy_gray, select_gray
Module
Foundation

cooc_feature_matrix ( const Hobject CoocMatrix, double *Energy,


double *Correlation, double *Homogeneity, double *Contrast )

T_cooc_feature_matrix ( const Hobject CoocMatrix, Htuple *Energy,


Htuple *Correlation, Htuple *Homogeneity, Htuple *Contrast )

Calculate gray value features from a co-occurrence matrix.


The operator cooc_feature_matrix calculates from a co-occurrence matrix (CoocMatrix) the energy
(Energy), correlation (Correlation), local homogeneity (Homogeneity), and contrast (Contrast). The input
matrix is typically generated by gen_cooc_matrix for the direction determined there by the parameters LdGray
and Direction. The features are calculated according to the following formulae:
Energy:

    Energy = \sum_{i,j=0}^{width} c_{ij}^2

(Measure for image homogeneity)

Correlation:

    Correlation = \frac{\sum_{i,j=0}^{width} (i - u_x)(j - u_y)\, c_{ij}}{s_x s_y}

(Measure for gray value dependencies)

Local homogeneity:

    Homogeneity = \sum_{i,j=0}^{width} \frac{1}{1 + (i - j)^2}\, c_{ij}

Contrast:

    Contrast = \sum_{i,j=0}^{width} (i - j)^2\, c_{ij}

(Measure for the size of the intensity differences)

where

    width  = width of CoocMatrix
    c_{ij} = entry of the co-occurrence matrix
    u_x    = \sum_{i,j=0}^{width} i \cdot c_{ij}
    u_y    = \sum_{i,j=0}^{width} j \cdot c_{ij}
    s_x^2  = \sum_{i,j=0}^{width} (i - u_x)^2 \cdot c_{ij}
    s_y^2  = \sum_{i,j=0}^{width} (i - u_y)^2 \cdot c_{ij}

Attention
The region of the input image is disregarded.
Parameter

. CoocMatrix (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real


Co-occurrence matrix.
. Energy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Homogeneity of the gray values.
. Correlation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Correlation of gray values.
. Homogeneity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Local homogeneity of gray values.
. Contrast (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Gray value contrast.
Result
The operator cooc_feature_matrix returns the value H_MSG_TRUE if an image with defined gray values
is passed and the parameters are correct. The behavior in case of empty input (no input images available) is set
via the operator set_system(’no_object_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
cooc_feature_matrix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
gen_cooc_matrix
Alternatives
cooc_feature_image
See also
intensity, min_max_gray, entropy_gray, select_gray
Module
Foundation

elliptic_axis_gray ( const Hobject Regions, const Hobject Image,


double *Ra, double *Rb, double *Phi )

T_elliptic_axis_gray ( const Hobject Regions, const Hobject Image,


Htuple *Ra, Htuple *Rb, Htuple *Phi )

Compute the orientation and major axes of a region in a gray value image.
The operator elliptic_axis_gray calculates the length of the axes and the orientation of the ellipse having
the “same orientation” and the “aspect ratio” as the input region. Several input regions can be passed in Regions
as tuples. The length of the major axis Ra and the minor axis Rb as well as the orientation of the major axis with
regard to the x-axis (Phi) are determined. The angle is returned in radians. The calculation is done analogously
to elliptic_axis. The only difference is that in elliptic_axis_gray the gray value moments are
used instead of the region moments. The gray value moments are derived from the input image Image. For the
definition of the gray value moments, see area_center_gray.
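A minimal sketch (threshold values are assumed examples) could look like this:

  Hobject Image,Region;
  double  Ra,Rb,Phi;
  read_image(&Image,"fabrik");
  threshold(Image,&Region,128,255);
  elliptic_axis_gray(Region,Image,&Ra,&Rb,&Phi);   /* Phi is returned in radians */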


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Image (input_object) . . . . . . singlechannel-image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real
Gray value image.
. Ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Major axis of the region.
. Rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Minor axis of the region.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Angle enclosed by the major axis and the x-axis.
Result
elliptic_axis_gray returns H_MSG_TRUE if all parameters are correct and no error occurs during execu-
tion. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
elliptic_axis_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
elliptic_axis
See also
area_center_gray
Module
Foundation

entropy_gray ( const Hobject Regions, const Hobject Image,


double *Entropy, double *Anisotropy )

T_entropy_gray ( const Hobject Regions, const Hobject Image,


Htuple *Entropy, Htuple *Anisotropy )

Determine the entropy and anisotropy of images.


The operator entropy_gray creates the histogram of relative frequencies of the gray values in the input image
and calculates from these frequencies the entropy and the anisotropy coefficient for each region from Regions
according to the following formulae:
Entropy:

    Entropy = - \sum_{i=0}^{255} rel[i] \cdot \log_2(rel[i])

Anisotropy coefficient:

    Anisotropy = \frac{\sum_{i=0}^{k} rel[i] \cdot \log_2(rel[i])}{Entropy}

where

    rel[i] = histogram of relative gray value frequencies
    i      = gray value of the input image (0 ... 255)
    k      = smallest possible gray value with \sum_{i=0}^{k} rel[i] \ge 0.5
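A minimal sketch (here the whole image domain is examined) could look like this:

  Hobject Image,Region;
  double  Entropy,Anisotropy;
  read_image(&Image,"fabrik");
  get_domain(Image,&Region);
  entropy_gray(Region,Image,&Entropy,&Anisotropy);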


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions where the features are to be determined.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray value image.
. Entropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Information content (entropy) of the gray values.
Assertion : (0 ≤ Entropy) ∧ (Entropy ≤ 8)
. Anisotropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Measure of the symmetry of gray value distribution.
Complexity
If F is the area of the region the runtime complexity is O(F + 255).
Result
The operator entropy_gray returns the value H_MSG_TRUE if an image with defined gray values is entered
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(’no_object_result’,<Result>), the behavior in case of empty region is set
via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
entropy_gray is reentrant and automatically parallelized (on tuple level).
Alternatives
select_gray
See also
entropy_image, gray_histo, gray_histo_abs, fuzzy_entropy, fuzzy_perimeter
Module
Foundation

estimate_noise ( const Hobject Image, const char *Method,


double Percent, double *Sigma )

T_estimate_noise ( const Hobject Image, const Htuple Method,


const Htuple Percent, Htuple *Sigma )

Estimate the image noise from a single image.


The operator estimate_noise estimates the standard deviation of additive noise within the domain of the
image that is passed in Image. The standard deviation is returned in Sigma.
The operator is useful in the following use cases:

• determination of MinContrast for matching,


• determination of the amplitude for edge filters,
• camera evaluation,
• monitoring errors in camera operation (e.g., user overdrives camera gain).

To estimate the noise, one of the following four methods can be selected in Method:

• ’foerstner’: If Method is set to ’foerstner’, first for each pixel a homogeneity measure is computed based
on the first derivatives of the gray values of Image. By thresholding the homogeneity measure one obtains
the homogeneous regions in the image. The threshold is computed based on a starting value for the image
noise. The starting value is obtained by applying the method ’immerkaer’ (see below) in the first step. It
is assumed that the gray value fluctuations within the homogeneous regions are solely caused by the image
noise. Furthermore it is assumed that the image noise is Gaussian distributed. The average homogeneity
measure within the homogeneous regions is then used to calculate a refined estimate for the image noise.
The refined estimate leads to a new threshold for the homogeneity. The described process is iterated until the
estimated image noise remains constant between two successive iterations. Finally, the standard deviation of
the estimated image noise is returned in Sigma.


Note that in some cases the iteration falsely converges to the value 0. This happens, for example, if the gray
value histogram of the input image contains gaps that are caused either by an automatic radiometric scaling
of the camera or frame grabber, respectively, or by a manual spreading of the gray values using a scaling
factor > 1.
Also note that the result obtained by this method is independent of the value passed in Percent.
• ’immerkaer’: If Method is set to ’immerkaer’, first the following filter mask is applied to the input image:

    M = \begin{pmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{pmatrix} .
The advantage of this method is that M is almost insensitive to image structure but only depends on the noise
in the image. Assuming a Gaussian distributed noise, its standard deviation is finally obtained as
    Sigma = \sqrt{\frac{\pi}{2}} \cdot \frac{1}{6N} \sum_{Image} |Image \ast M| ,

where N is the number of image pixels to which M is applied. Note that the result obtained by this method
is independent of the value passed in Percent.
• ’least_squares’: If Method is set to ’least_squares’, the fluctuations of the gray values with respect to a
locally fitted gray value plane are used to estimate the image noise. First, a homogeneity measure is computed
based on the first derivatives of the gray values of Image. Homogeneous image regions are determined by
selecting the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with
small magnitudes of the first derivatives. For each homogeneous pixel a gray value plane is fitted to its 3 × 3
neighborhood. The differences between the gray values within the 3 × 3 neighborhood and the locally fitted
plane are used to estimate the standard deviation of the noise. Finally, the average standard deviation over all
homogeneous pixels is returned in Sigma.
• ’mean’: If Method is set to ’mean’, the noise estimation is based on the difference between the input
image and a noiseless version of the input image. First, a homogeneity measure is computed based on the
first derivatives of the gray values of Image. Homogeneous image regions are determined by selecting
the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with small
magnitudes of the first derivatives. A mean filter is applied to the homogeneous image regions in order to
eliminate the noise. It is assumed that the difference between the input image and the thus obtained noiseless
version of the image represents the image noise. Finally, the standard deviation of the differences is returned
in Sigma. It should be noted that this method requires large connected homogenous image regions to be
able to reliably estimate the noise.

Note that the methods ’foerstner’ and ’immerkaer’ assume a Gaussian distribution of the image noise, whereas
the methods ’least_squares’ and ’mean’ can be applied to images with arbitrarily distributed noise. In general, the
method ’foerstner’ returns the most accurate results while the method ’immerkaer’ shows the fastest computation.
If the image noise could not be estimated reliably, the error 3175 is raised. This may happen if the image does not
contain enough homogeneous regions, if the image was artificially created, or if the noise is not of Gaussian type.
In order to avoid this error, it might be useful to try one of the following modifications, depending on the
estimation method that is passed in Method (a code sketch of the fallback strategy follows the list):

• Increase the size of the input image domain (useful for all methods).
• Increase the value of the parameter Percent (useful for methods ’least_squares’ and ’mean’).
• Use the method ’immerkaer’, instead of the methods ’foerstner’, ’least_squares’, or ’mean’. The method
’immerkaer’ does not rely on the existence of homogeneous image regions, and hence is almost always
applicable.

Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Method to estimate the image noise.
Default Value : "foerstner"
List of values : Method ∈ {"foerstner", "immerkaer", "least_squares", "mean"}


. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong


Percentage of used image points.
Default Value : 20
Suggested values : Percent ∈ {1, 2, 5, 7, 10, 15, 20, 30, 40, 50}
Restriction : (0 < Percent) ∧ (Percent ≤ 50.)
. Sigma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Standard deviation of the image noise.
Assertion : Sigma ≥ 0
Example (Syntax: HDevelop)

read_image (Image, ’combine’)


estimate_noise (Image, ’foerstner’, 20, SigmaFoerstner)
estimate_noise (Image, ’immerkaer’, 20, SigmaImmerkaer)
estimate_noise (Image, ’least_squares’, 20, SigmaLeastSquares)
estimate_noise (Image, ’mean’, 20, SigmaMean)

Result
If the parameters are valid, the operator estimate_noise returns the value H_MSG_TRUE. If necessary an
exception is raised. If the image noise could not be estimated reliably, the error 3175 is raised.
Parallelization Information
estimate_noise is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
grab_image, grab_image_async, read_image, reduce_domain
Possible Successors
binomial_filter, gauss_image, mean_image, smooth_image
Alternatives
noise_distribution_mean, intensity, min_max_gray
See also
gauss_distribution, add_noise_distribution
References
W. Förstner: "‘Image Preprocessing for Feature Extraction in Digital Intensity, Color and Range Images"‘, Springer
Lecture Notes on Earth Sciences, Summer School on Data Analysis and the Statistical Foundations of Geomatics,
1999
J. Immerkaer: "‘Fast Noise Variance Estimation"‘, Computer Vision and Image Understanding, Vol. 64, No. 2, pp.
300-302, 1996
Module
Foundation

fit_surface_first_order ( const Hobject Regions, const Hobject Image,
                          const char *Algorithm, Hlong Iterations, double ClippingFactor,
                          double *Alpha, double *Beta, double *Gamma )

T_fit_surface_first_order ( const Hobject Regions,
                            const Hobject Image, const Htuple Algorithm, const Htuple Iterations,
                            const Htuple ClippingFactor, Htuple *Alpha, Htuple *Beta,
                            Htuple *Gamma )

Calculate gray value moments and approximation by a first order surface (plane).
The operator fit_surface_first_order calculates the gray value moments and the parameters of the
approximation of the gray values by a first order surface. The calculation is done by minimizing the distance
between the gray values and the surface. A first order surface is described by the following formula:

    Image'(r, c) = Alpha (r − r_center) + Beta (c − c_center) + Gamma


r_center and c_center are the center coordinates of the intersection of the input region with the full image domain. By
the minimization process the parameters Alpha to Gamma are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ line fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls the amount of damping of
outliers: the smaller the value chosen for ClippingFactor, the more outliers are detected. The detection of
outliers is repeated. The parameter Iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
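A brief C usage sketch (hypothetical threshold values; the fitted plane can then be evaluated with the formula above, where r_center and c_center would in practice be obtained, e.g., via area_center of the intersection of the region with the image domain):

Hobject Image, Region;
double  Alpha, Beta, Gamma;

read_image(&Image,"fabrik");
threshold(Image,&Region,100.0,200.0);
/* robust plane fit with Tukey weighting, at most 5 iterations, clipping factor 2.0 */
fit_surface_first_order(Region,Image,"tukey",5,2.0,&Alpha,&Beta,&Gamma);
/* fitted gray value at (r,c): Alpha*(r - r_center) + Beta*(c - c_center) + Gamma */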
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / direction / cyclic / real
Corresponding gray values.
. Algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Algorithm for the fitting.
Default Value : "regression"
List of values : Algorithm ∈ {"regression", "huber", "tukey"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum number of iterations (unused for ’regression’).
Default Value : 5
Restriction : Iterations ≥ 0
. ClippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Clipping factor for the elimination of outliers.
Default Value : 2.0
List of values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. Alpha (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Alpha of the approximating surface.
. Beta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Beta of the approximating surface.
. Gamma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Gamma of the approximating surface.
Result
The operator fit_surface_first_order returns the value H_MSG_TRUE if an image with the defined
gray values (byte) is entered and the parameters are correct. If necessary an exception handling is raised.
Parallelization Information
fit_surface_first_order is reentrant and automatically parallelized (on tuple level).
See also
moments_gray_plane, fit_surface_second_order
Module
Foundation


fit_surface_second_order ( const Hobject Regions,
                           const Hobject Image, const char *Algorithm, Hlong Iterations,
                           double ClippingFactor, double *Alpha, double *Beta, double *Gamma,
                           double *Delta, double *Epsilon, double *Zeta )

T_fit_surface_second_order ( const Hobject Regions,
                             const Hobject Image, const Htuple Algorithm, const Htuple Iterations,
                             const Htuple ClippingFactor, Htuple *Alpha, Htuple *Beta,
                             Htuple *Gamma, Htuple *Delta, Htuple *Epsilon, Htuple *Zeta )

Calculate gray value moments and approximation by a second order surface.


The operator fit_surface_second_order calculates the gray value moments and the parameters of the
approximation of the gray values by a second order surface. The calculation is done by minimizing the distance
between the gray values and the surface. A second order surface is described by the following formula:

    Image'(r, c) = Alpha (r − r_center)² + Beta (c − c_center)² + Gamma (r − r_center)(c − c_center) + Delta (r − r_center) + Epsilon (c − c_center) + Zeta

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
By the minimization process the parameters Alpha to Zeta are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls the amount of damping of
outliers: the smaller the value chosen for ClippingFactor, the more outliers are detected. The detection of
outliers is repeated. The parameter Iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / direction / cyclic / real
Corresponding gray values.
. Algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Algorithm for the fitting.
Default Value : "regression"
List of values : Algorithm ∈ {"regression", "tukey", "huber"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum number of iterations (unused for ’regression’).
Default Value : 5
Restriction : Iterations ≥ 0
. ClippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Clipping factor for the elimination of outliers.
Default Value : 2.0
List of values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. Alpha (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Alpha of the approximating surface.
. Beta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Beta of the approximating surface.
. Gamma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Gamma of the approximating surface.
. Delta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Delta of the approximating surface.


. Epsilon (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *


Parameter Epsilon of the approximating surface.
. Zeta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Zeta of the approximating surface.
Result
The operator fit_surface_second_order returns the value H_MSG_TRUE if an image with the defined
gray values (byte) is entered and the parameters are correct. If necessary an exception handling is raised.
Parallelization Information
fit_surface_second_order is reentrant and automatically parallelized (on tuple level).
See also
moments_gray_plane, fit_surface_first_order
Module
Foundation

fuzzy_entropy ( const Hobject Regions, const Hobject Image, Hlong Apar,
                Hlong Cpar, double *Entropy )

T_fuzzy_entropy ( const Hobject Regions, const Hobject Image,
                  const Htuple Apar, const Htuple Cpar, Htuple *Entropy )

Determine the fuzzy entropy of regions.


fuzzy_entropy calculates the fuzzy entropy of a fuzzy set. To do so, the image is regarded as a fuzzy set. The
entropy then is a measure of how well the image approximates a white or black image. It is defined as follows:

    H(X) = \frac{1}{M N \ln 2} \sum_{l} T_e(l) \, h(l)

where M × N is the size of the image, and h(l) is the histogram of the image. Furthermore,

    T_e(l) = -\mu(l) \ln \mu(l) - (1 - \mu(l)) \ln(1 - \mu(l))

Here, µ(x(m, n)) is a fuzzy membership function defining the fuzzy set (see fuzzy_perimeter). The same
restrictions hold as in fuzzy_perimeter.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the fuzzy entropy is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image containing the fuzzy membership values.
. Apar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Start of the fuzzy function.
Default Value : 0
Suggested values : Apar ∈ {0, 5, 10, 20, 50, 100}
Typical range of values : 0 ≤ Apar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
. Cpar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
End of the fuzzy function.
Default Value : 255
Suggested values : Cpar ∈ {50, 100, 150, 200, 220, 255}
Typical range of values : 0 ≤ Cpar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Restriction : Apar ≤ Cpar
. Entropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Fuzzy entropy of a region.


Example

/* To find a Fuzzy Entropy from an Image */


read_image(&Image,"affe");
fuzzy_entropy(Image,Image,0,255,&Entro);

Result
The operator fuzzy_entropy returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
fuzzy_entropy is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_perimeter
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation

fuzzy_perimeter ( const Hobject Regions, const Hobject Image,
                  Hlong Apar, Hlong Cpar, double *Perimeter )

T_fuzzy_perimeter ( const Hobject Regions, const Hobject Image,
                    const Htuple Apar, const Htuple Cpar, Htuple *Perimeter )

Calculate the fuzzy perimeter of a region.


The operator fuzzy_perimeter is used to determine the differences of fuzzy membership between an image
point and its neighbor points. The right and lower neighbor are taken into account. The fuzzy perimeter is then
defined as follows:
    p(X) = \sum_{m=1}^{M-1} \sum_{n=1}^{N-1} |\mu_X(x_{m,n}) - \mu_X(x_{m,n+1})| + \sum_{m=1}^{M-1} \sum_{n=1}^{N-1} |\mu_X(x_{m,n}) - \mu_X(x_{m+1,n})|

where M × N is the size of the image, and u(x(m, n)) is the fuzzy membership function (i.e., the input image).
This implementation uses Zadeh’s Standard-S function, which is defined as follows:


 0,  x≤a
 2 x−a 2 ,

a<x≤b

c−a
µX (x) = 
2

x−a


 1 − 2 c−a , b < x ≤ c

1, c≤x

The parameters a, b and c obey the following restrictions: b = a+c


2 is the inflection point of the function, ∆b =
b − a = c − b is the bandwith, and for x = b µ(x) = 0.5 holds. In fuzzy_perimeter, the parameters Apar
and Cpar are defined as follows: b is Apar+2
Cpar .
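As a sketch of the membership function only (assuming the (x - c) form of the third case, which makes the function continuous and reach 1 at x = c), the Standard-S function can be written in C as:

/* Zadeh's Standard-S function with a = Apar, c = Cpar, b = (a + c) / 2 */
double standard_s (double x, double a, double c)
{
    double b = (a + c) / 2.0;
    double t;

    if (x <= a) return 0.0;
    if (x <= b) { t = (x - a) / (c - a); return 2.0 * t * t; }
    if (x <= c) { t = (x - c) / (c - a); return 1.0 - 2.0 * t * t; }
    return 1.0;
}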
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the fuzzy perimeter is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image containing the fuzzy membership values.
. Apar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Start of the fuzzy function.
Default Value : 0
Suggested values : Apar ∈ {0, 5, 10, 20, 50, 100}
Typical range of values : 0 ≤ Apar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5


. Cpar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong


End of the fuzzy function.
Default Value : 255
Suggested values : Cpar ∈ {50, 100, 150, 200, 220, 255}
Typical range of values : 0 ≤ Cpar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Restriction : Apar ≤ Cpar
. Perimeter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Fuzzy perimeter of a region.
Example

/* To find the Fuzzy Perimeter of an Image */


read_image(&Image,"affe");
fuzzy_perimeter(Image,Image,0,255,&Per);

Result
The operator fuzzy_perimeter returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
fuzzy_perimeter is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_entropy
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation

gen_cooc_matrix ( const Hobject Regions, const Hobject Image,
                  Hobject *Matrix, Hlong LdGray, Hlong Direction )

T_gen_cooc_matrix ( const Hobject Regions, const Hobject Image,
                    Hobject *Matrix, const Htuple LdGray, const Htuple Direction )

Calculate the co-occurrence matrix of a region in an image.


The operator gen_cooc_matrix determines from the input regions how often the gray values i and j are
located next to each other in a certain direction (0, 45, 90, 135 degrees), stores this number in the co-occurrence
matrix at the locations (i, j) and (j, i) (the matrix is symmetrical), and finally scales the matrix with the number of
entries. LdGray indicates the number of gray values to be distinguished (namely 2^LdGray).
Example (LdGray = 2, i.e. 4 gray values are distinguished):
Input image Co-occurrence matrix
with gray values: (not scaled)

0 0 3 2 0 0 1 0 1 1 0
1 1 2 0 2 2 0 1 0 1 1
1 2 3 0 2 0 1 1 1 0 0
1 0 1 0 0 1 0 0

0 2 0 0 0 1 0 0
2 2 1 0 1 2 0 1
0 1 0 2 0 0 2 0
0 0 2 0 0 1 0 0
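A short C sketch of a typical call sequence; the evaluation step assumes the usual cooc_feature_matrix signature (energy, correlation, homogeneity, contrast), and the image name and region are only placeholders:

Hobject Image, Region, Matrix;
double  Energy, Correlation, Homogeneity, Contrast;

read_image(&Image,"fabrik");
threshold(Image,&Region,0.0,255.0);
/* 2^6 = 64 gray values, neighbor relation in direction 0 degrees */
gen_cooc_matrix(Region,Image,&Matrix,6,0);
cooc_feature_matrix(Matrix,&Energy,&Correlation,&Homogeneity,&Contrast);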


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Image providing the gray values.
. Matrix (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Co-occurrence matrix (matrices).
. LdGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of gray values to be distinguished (2^LdGray).
Default Value : 6
List of values : LdGray ∈ {1, 2, 3, 4, 5, 6, 7, 8}
Typical range of values : 1 ≤ LdGray ≤ 256 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Direction of neighbor relation.
Default Value : 0
List of values : Direction ∈ {0, 45, 90, 135}
Result
The operator gen_cooc_matrix returns the value H_MSG_TRUE if an image with defined gray values is
entered and the parameters are correct. The behavior in case of empty input (no input images available) is set
via the operator set_system(’no_object_result’,<Result>), the behavior in case of empty region
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
gen_cooc_matrix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, threshold,
erosion_circle, binomial_filter, gauss_image, smooth_image, sub_image
Alternatives
cooc_feature_image
See also
cooc_feature_matrix
Module
Foundation

T_gray_histo ( const Hobject Regions, const Hobject Image,
               Htuple *AbsoluteHisto, Htuple *RelativeHisto )

Calculate the gray value distribution.


The operator gray_histo calculates for the image (Image) within Regions the absolute
(AbsoluteHisto) and relative (RelativeHisto) histogram of the gray values.
Both histograms are tuples of 256 values which, beginning at gray value 0, contain the frequencies of the individual gray
values of the image.
AbsoluteHisto indicates the absolute frequencies of the gray values in integers, and RelativeHisto indi-
cates the relative, i.e. the absolute frequencies divided by the area of the image as floating point numbers.
Real, int2, uint2, and int4 images are transformed into byte images (first the largest and smallest gray value
in the image are determined, and then the original gray values are mapped linearly into the range 0..255) and
then processed as mentioned above. The histogram can also be returned directly as a graphic via the operators
set_paint(WindowHandle,’histogram’) and disp_image.
Attention
Real, int2, uint2, and int4 images are reduced to 256 gray values.


Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region in which the histogram is to be calculated.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 / int4 / real
Image the gray value distribution of which is to be calculated.
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . Hlong *
Absolute frequencies of the gray values.
. RelativeHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . double *
Frequencies, normalized to the area of the region.
Complexity
If F is the area of the region the runtime complexity is O(F + 255).
Result
The operator gray_histo returns the value H_MSG_TRUE if the image has defined gray values and the
parameters are correct. The behavior in case of empty input (no input images available) is set via the opera-
tor set_system(’no_object_result’,<Result>), the behavior in case of empty region is set via
set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
gray_histo is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, gen_region_histo
Alternatives
min_max_gray, intensity, gray_histo_abs
See also
set_paint, disp_image, histo_2dim, scale_image_max, entropy_gray
Module
Foundation

T_gray_histo_abs ( const Hobject Regions, const Hobject Image,
                   const Htuple Quantization, Htuple *AbsoluteHisto )

Calculate the gray value distribution.


The operator gray_histo_abs calculates for the image (Image) within Regions the absolute histogram
(AbsoluteHisto) of the gray values.
The parameter Quantization defines how many frequencies of neighboring gray values are added up into one
frequency value. The resulting histogram AbsoluteHisto is a tuple whose indices are mapped onto the gray
values of the input image Image and whose elements contain the frequencies of the gray values. The index i of
a frequency value is calculated from the gray value g and the quantization q as follows:

    i = \left\lfloor \frac{g + 0.5}{q} \right\rfloor    for unsigned image types,

    i = \left\lfloor \frac{g - (MIN - 0.5)}{q} \right\rfloor    for signed image types,

where MIN denotes the minimal gray value, e.g., -128 for an int1 image type. Therefore, the size of the tuple
results from the ratio of the full domain of gray values and the quantization, e.g., \lceil 65536 / 3.0 \rceil = 21846 for
int2 images with Quantization = 3.0. The origin gray value of the signed image types int1 resp. int2 is mapped on
the index 128 resp. 32768; negative resp. positive gray values have smaller resp. greater indices.
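As a worked example of the mapping above (assuming a byte image): with Quantization = 2.0, the gray value g = 101 is mapped to index i = \lfloor (101 + 0.5) / 2.0 \rfloor = 50, and the histogram tuple has \lceil 256 / 2.0 \rceil = 128 entries.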
The histogram can also be returned directly as a graphic via the operators set_paint
(WindowHandle,’histogram’) and disp_image.


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region in which the histogram is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2
Image the gray value distribution of which is to be calculated.
. Quantization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Quantization of the gray values.
Default Value : 1.0
List of values : Quantization ∈ {1.0, 2.0, 3.0, 5.0, 10.0}
Restriction : Quantization ≥ 1.0
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . Hlong *
Absolute frequencies of the gray values.
Result
The operator gray_histo_abs returns the value H_MSG_TRUE if the image has defined gray values and
the parameters are correct. The behavior in case of empty input (no input images available) is set via the oper-
ator set_system(’no_object_result’,<Result>), the behavior in case of empty region is set via
set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
gray_histo_abs is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, gen_region_histo
Alternatives
min_max_gray, intensity, gray_histo
See also
set_paint, disp_image, histo_2dim, scale_image_max, entropy_gray
Module
Foundation

T_gray_projections ( const Hobject Region, const Hobject Image,
                     const Htuple Mode, Htuple *HorProjection, Htuple *VertProjection )

Calculate horizontal and vertical gray-value projections.


gray_projections calculates the horizontal and vertical gray-value projections, i.e., the mean values in the
horizontal and vertical direction of the gray values of the input image Image within the input region Region.
If Mode = ’simple’ is selected the projection is performed in the direction of the coordinate axes of the image, i.e.:

    HorProjection(r) = \frac{1}{n(r + r_0)} \sum_{(r + r_0,\, c + c_0) \in Region} Image(r + r_0, c + c_0)

    VertProjection(c) = \frac{1}{n(c + c_0)} \sum_{(r + r_0,\, c + c_0) \in Region} Image(r + r_0, c + c_0)

Here, (r0 , c0 ) denotes the upper left corner of the smallest enclosing axis-parallel rectangle of the input region (see
smallest_rectangle1), and n(x) denotes the number of region points in the corresponding row r + r0 or
column c + c0 . Hence, the horizontal projection returns a one-dimensional function that reflects the vertical gray
value changes. Likewise, the vertical projection returns a function that reflects the horizontal gray value changes.
If Mode = ’rectangle’ is selected, the projection is performed in the direction of the major axes of the smallest
enclosing rectangle of arbitrary orientation of the input region (see smallest_rectangle2). Here, the hor-
izontal projection direction corresponds to the larger axis, while the vertical direction corresponds to the smaller
axis. In this mode, all gray values within the smallest enclosing rectangle of arbitrary orientation of the input
region are used to compute the projections.


Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Region to be processed.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Grayvalues for projections.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method to compute the projections.
Default Value : "simple"
List of values : Mode ∈ {"simple", "rectangle"}
. HorProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Horizontal projection.
. VertProjection (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Vertical projection.
Parallelization Information
gray_projections is reentrant and processed without parallelization.
Module
1D Metrology

histo_2dim ( const Hobject Regions, const Hobject ImageCol,
             const Hobject ImageRow, Hobject *Histo2Dim )

T_histo_2dim ( const Hobject Regions, const Hobject ImageCol,
               const Hobject ImageRow, Hobject *Histo2Dim )

Calculate the histogram of two-channel gray value images.


The operator histo_2dim calculates the 2-dimensional histogram of two images within Regions. The gray
values of channel 1 (ImageCol) are interpreted as row index, those of channel 2 (ImageRow) as column index.
The gray value at a point P(g1, g2) in the output image Histo2Dim indicates the frequency of the gray value
combination (g1, g2), with g1 indicating the row index and g2 the column index.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region in which the histogram is to be calculated.
. ImageCol (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1
Channel 1.
. ImageRow (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1
Channel 2.
. Histo2Dim (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : int4
Histogram to be calculated.
Example

read_image(&Image,"affe");
texture_laws(Image,&Texture,"el",1,5);
draw_region(&Region,WindowHandle);
histo_2dim(Region,Texture,Image,&Histo2Dim);
set_part(WindowHandle,0,0,255,255);
disp_image(Histo2Dim,WindowHandle);

Complexity
If F is the area of the region, the runtime complexity is O(F + 256²).
Result
The operator histo_2dim returns the value H_MSG_TRUE if both images have defined gray values.
The behavior in case of empty input (no input images available) is set via the operator set_system


(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system


(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
histo_2dim is reentrant and processed without parallelization.
Possible Predecessors
decompose3, decompose2, draw_region
Possible Successors
threshold, class_2dim_sup, pouring, local_max, gray_skeleton
Alternatives
gray_histo, gray_histo_abs
See also
get_grayval
Module
Foundation

intensity ( const Hobject Regions, const Hobject Image, double *Mean,
            double *Deviation )

T_intensity ( const Hobject Regions, const Hobject Image, Htuple *Mean,
              Htuple *Deviation )

Calculate the mean and deviation of gray values.


The operator intensity calculates the mean and the deviation of the gray values in the input image within
Regions. If R is a region, p a pixel from R with the gray value g(p), and F the area (F = |R|), the features are
defined by:
    Mean := \frac{\sum_{p \in R} g(p)}{F}

    Deviation := \sqrt{\frac{\sum_{p \in R} (g(p) - Mean)^2}{F}}

Attention
The calculation of Deviation does not follow the usual definition if the region of the image contains only one
pixel. In this case 0.0 is returned.
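A minimal C usage sketch (hypothetical threshold values; any segmentation could be used to obtain the region). Note that the scalar variant shown here is intended for a single region, while T_intensity returns tuples for region arrays:

Hobject Image, Region;
double  Mean, Deviation;

read_image(&Image,"fabrik");
threshold(Image,&Region,100.0,200.0);
intensity(Region,Image,&Mean,&Deviation);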
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions the features of which are to be calculated.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Gray value image.
. Mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mean gray value of a region.
. Deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Deviation of gray values within a region.
Complexity
If F is the area of the region, the runtime complexity is O(F ).
Result
The operator intensity returns the value H_MSG_TRUE. The behavior in case of empty input (no input
images available) is set via the operator set_system(’no_object_result’,<Result>), the behavior
in case of empty region is set via set_system(’empty_region_result’,<Result>). If necessary an
exception handling is raised.
Parallelization Information
intensity is reentrant and automatically parallelized (on tuple level).


Possible Successors
threshold
Alternatives
select_gray, min_max_gray
See also
mean_image, mean_image, gray_histo, gray_histo_abs
Module
Foundation

min_max_gray ( const Hobject Regions, const Hobject Image,
               double Percent, double *Min, double *Max, double *Range )

T_min_max_gray ( const Hobject Regions, const Hobject Image,
                 const Htuple Percent, Htuple *Min, Htuple *Max, Htuple *Range )

Determine the minimum and maximum gray values within regions.


The operator min_max_gray creates the histogram of the absolute frequencies of the gray values within
Regions in the input image Image (see gray_histo) and calculates the number of pixels corresponding to
Percent percent of the area of the input image. Then it moves inwards from both ends of the histogram by this
number of pixels and determines the smallest and the largest remaining gray value:
e.g.:
Area = 60, percent = 5, i.e. 3 pixels
histogram = [2,8,0,7,13,0,0,. . . ,0,10,10,5,3,1,1]
⇒ Maximum = 255, Minimum = 0, Range = 255
min_max_gray returns: Maximum = 253, Minimum = 1, Range = 252
For images of type int4 and real, the above calculation is not performed via histograms, but using a rank selection
algorithm. If Percent is set to 50, Min = Max = Median. If Percent is 0, no histogram is calculated in order
to reduce the runtime.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions, the features of which are to be calculated.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Gray value image.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong
Percentage below (above) the absolute maximum (minimum).
Default Value : 0
Suggested values : Percent ∈ {0, 1, 2, 5, 7, 10, 15, 20, 30, 40, 50}
Restriction : (0 ≤ Percent) ∧ (Percent ≤ 50)
. Min (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
“Minimum” gray value.
. Max (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
“Maximum” gray value.
Assertion : Max ≥ Min
. Range (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Difference between Max and Min.
Assertion : Range ≥ 0
Example

/* Threshold segmentation with training region: */


read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
min_max_gray(Region,Image,5.0,&Min,&Max,_);
threshold(Image,&Seg,Min,Max);
disp_region(Seg,WindowHandle);


Result
The operator min_max_gray returns the value H_MSG_TRUE if the input image has the defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(’no_object_result’,<Result>). The behaviour in case of an empty region
is set via the operator set_system(’empty_region_result’,<Result>). If necessary an exception
handling is raised.
Parallelization Information
min_max_gray is reentrant and processed without parallelization.
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, threshold, regiongrowing
Possible Successors
threshold
Alternatives
select_gray, intensity
See also
gray_histo, scale_image, scale_image_max, learn_ndim_norm
Module
Foundation

moments_gray_plane ( const Hobject Regions, const Hobject Image,
                     double *MRow, double *MCol, double *Alpha, double *Beta,
                     double *Mean )

T_moments_gray_plane ( const Hobject Regions, const Hobject Image,
                       Htuple *MRow, Htuple *MCol, Htuple *Alpha, Htuple *Beta,
                       Htuple *Mean )

Calculate gray value moments and approximation by a plane.


The operator moments_gray_plane calculates the gray value moments and the parameters of the approxima-
tion of the gray values by a plane. The calculation is carried out according to the following formula:

    MRow = \frac{1}{F^2} \sum_{(r,c) \in Regions} (r - \bar{r}) (Image(r, c) - Mean)
    \qquad
    MCol = \frac{1}{F^2} \sum_{(r,c) \in Regions} (c - \bar{c}) (Image(r, c) - Mean)

    Alpha = \frac{MRow \, F \, m_{02} - MCol \, F \, m_{11}}{F (m_{20} m_{02} - m_{11}^2)}
    \qquad
    Beta = \frac{MCol \, F \, m_{20} - MRow \, F \, m_{11}}{F (m_{20} m_{02} - m_{11}^2)}

where F is the area, (\bar{r}, \bar{c}) the center, and m_{11}, m_{20}, and m_{02} the scaled moments of Regions.
The parameters Alpha, Beta and Mean describe a plane above the region:

    Image'(r, c) = Alpha (r - \bar{r}) + Beta (c - \bar{c}) + Mean

Thus Alpha indicates the gradient in the direction of the row axis (“down”), Beta the gradient in the direction of
the column axis (to the “right”).
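A short C sketch (hypothetical threshold values; the final comment only restates the plane equation above, where r_bar and c_bar stand for the region center, e.g., obtained via area_center):

Hobject Image, Region;
double  MRow, MCol, Alpha, Beta, Mean;

read_image(&Image,"fabrik");
threshold(Image,&Region,120.0,255.0);
moments_gray_plane(Region,Image,&MRow,&MCol,&Alpha,&Beta,&Mean);
/* approximating plane: Alpha*(r - r_bar) + Beta*(c - c_bar) + Mean */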
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / uint2 / real
Corresponding gray values.
. MRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mixed moments along a line.
. MCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mixed moments along a column.


. Alpha (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *


Parameter Alpha of the approximating plane.
. Beta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Beta of the approximating plane.
. Mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mean gray value.
Result
The operator moments_gray_plane returns the value H_MSG_TRUE if an image with the defined gray
values (byte) is entered and the parameters are correct. The behavior in case of empty input (no input images
available) is set via the operator set_system(’no_object_result’,<Result>), the behavior in case of
empty region is set via set_system(’empty_region_result’,<Result>). If necessary an exception
handling is raised.
Parallelization Information
moments_gray_plane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, threshold,
regiongrowing
See also
intensity, moments_region_2nd
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp 75-76
Module
Foundation

plane_deviation ( const Hobject Regions, const Hobject Image,
                  double *Deviation )

T_plane_deviation ( const Hobject Regions, const Hobject Image,
                    Htuple *Deviation )

Calculate the deviation of the gray values from the approximating image plane.
The operator plane_deviation calculates the deviation of the gray values in Image from the approximation
of the gray values through a plane. Contrary to the standard deviation in case of intensity slanted gray value
planes also receive the value zero. The gray value plane is calculated according to gen_image_gray_ramp.
If F is the area, α, β, µ the parameters of the image plane, and (r_0, c_0) the center, Deviation is defined by:

    Deviation = \sqrt{\frac{\sum_{(r,c) \in Regions} \left( (\alpha (r - r_0) + \beta (c - c_0) + \mu) - Image(r, c) \right)^2}{F}} .

Attention
It should be noted that the calculation of Deviation does not follow the usual definition. It is defined to return
the value 0.0 for an image with only one pixel.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions, of which the plane deviation is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic
Gray value image.
. Deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Deviation of the gray values within a region.
Complexity
If F is the area of the region the runtime complexity amounts to O(F ).
Result
The operator plane_deviation returns the value H_MSG_TRUE if Image is of the type byte.


The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
plane_deviation is reentrant and automatically parallelized (on tuple level).
Alternatives
intensity, gen_image_gray_ramp, sub_image
See also
moments_gray_plane
Module
Foundation

select_gray ( const Hobject Regions, const Hobject Image,
              Hobject *SelectedRegions, const char *Features, const char *Operation,
              double Min, double Max )

T_select_gray ( const Hobject Regions, const Hobject Image,
                Hobject *SelectedRegions, const Htuple Features,
                const Htuple Operation, const Htuple Min, const Htuple Max )

Select regions based on gray value features.


The operator select_gray has a number of regions (Regions) as input. For each of these regions the features
(Features) are calculated. If each (Operation = ’and’) or at least one (Operation = ’or’) of the calculated
features is within the limits determined by the parameter, the region is transferred (duplicated) into the output. The
parameter Image contains an image which returns the gray values for calculating the features.
Condition:

Mini ≤ Featuresi (Regions, Image) ≤ Maxi

Possible values for Features:


’area’ Gray value volume of region (see area_center_gray)
’row’ Row index of the center of gravity (see area_center_gray)
’column’ Column index of the center of gravity (see area_center_gray)
’ra’ Major axis of equivalent ellipse (see elliptic_axis_gray)
’rb’ Minor axis of equivalent ellipse (see elliptic_axis_gray)
’phi’ Orientation of equivalent ellipse (see elliptic_axis_gray)
’min’ Minimum gray value (see min_max_gray)
’max’ Maximum gray value (see min_max_gray)
’mean’ Mean gray value (see intensity)
’deviation’ Deviation of gray values (see intensity)
’plane_deviation’ Deviation from the approximating plane (see plane_deviation)
’anisotropy’ Anisotropy (see entropy_gray)
’entropy’ Entropy (see entropy_gray)
’fuzzy_entropy’ Fuzzy entropy of region (see fuzzy_entropy, with a fuzzy function from Apar=0 to Cpar=255)
’fuzzy_perimeter’ Fuzzy perimeter of region (see fuzzy_perimeter, with a fuzzy function from Apar=0 to Cpar=255)
’moments_row’ Mixed moments along a row (see moments_gray_plane)
’moments_column’ Mixed moments along a column (see moments_gray_plane)
’alpha’ Approximating plane, parameter Alpha (see moments_gray_plane)
’beta’ Approximating plane, parameter Beta (see moments_gray_plane)


Attention
If only one feature is used the value of Operation is meaningless. Several features are processed in the order in
which they are entered. The maximum number of features is limited to 100.
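A brief C sketch (hypothetical limits; since only one feature is used, the value of Operation is meaningless here, as noted above):

Hobject Image, Light, Regions, Selected;
Hlong   Number;

read_image(&Image,"fabrik");
threshold(Image,&Light,100.0,255.0);
connection(Light,&Regions);
/* keep only regions whose mean gray value lies between 150 and 220 */
select_gray(Regions,Image,&Selected,"mean","and",150.0,220.0);
count_obj(Selected,&Number);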
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Gray value image.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions having features within the limits.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Names of the features.
Default Value : "mean"
List of values : Features ∈ {"area", "row", "column", "ra", "rb", "phi", "min", "max", "mean", "deviation",
"plane_deviation", "anisotropy", "entropy", "fuzzy_entropy", "fuzzy_perimeter", "moments_row",
"moments_column", "alpha", "beta"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Logical connection of features.
Default Value : "and"
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Lower limit(s) of features.
Default Value : 128.0
Suggested values : Min ∈ {0.5, 1.0, 10.0, 20.0, 50.0, 128.0, 255.0, 1000.0}
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Upper limit(s) of features.
Default Value : 255.0
Suggested values : Max ∈ {0.5, 1.0, 10.0, 20.0, 50.0, 128.0, 255.0, 1000.0}
Complexity
If F is the area of the region and N the number of features the runtime complexity is O(F ∗ N ).
Result
The operator select_gray returns the value H_MSG_TRUE if the input image has the defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(’no_object_result’,<Result>), the behavior in case of empty region is set
via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
select_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, mean_image, entropy_image, sobel_amp, median_separate
Possible Successors
select_shape, select_gray, shape_trans, reduce_domain, count_obj
See also
deviation_image, entropy_gray, intensity, mean_image, min_max_gray, select_obj
Module
Foundation

T_shape_histo_all ( const Hobject Region, const Hobject Image,
                    const Htuple Feature, Htuple *AbsoluteHisto, Htuple *RelativeHisto )

Determine a histogram of features along all threshold values.


The operator shape_histo_all carries out 255 threshold operations within Region with the gray values of
Image. The entry i in the histogram corresponds to the number of connected components/holes of this image
segmented with the threshold i (Feature = ’connected_components’, ’holes’) or the mean value of the feature
values of the regions segmented in this way (Feature = ’convexity’, ’compactness’, ’anisometry’), respectively.


The histogram can also be displayed directly as a graphic via the operators set_paint
(WindowHandle,’component_histogram’) and disp_image.
Attention
The operator shape_histo_all expects a region and exactly one gray value image as input. Because of the
power of this operator the runtime of shape_histo_all is relatively large!
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Region in which the features are to be examined.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray value image.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Feature to be examined.
Default Value : "connected_components"
List of values : Feature ∈ {"connected_components", "convexity", "compactness", "anisometry", "holes"}
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . double * / Hlong *
Absolute distribution of the feature.
. RelativeHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . double *
Relative distribution of the feature.
Example

/* Simulation of shape_histo_all with the feature ’connected_components’: */


void my_shape_histo_all(Hobject Region, Hobject Image,
                        long AbsHisto[], double RelHisto[])
{
  long    i, sum;
  Hobject RegionGray, Seg;

  reduce_domain(Region,Image,&RegionGray);
  for (i=0; i<256; i++) {
    threshold(RegionGray,&Seg,(double)i,255.0);
    connect_and_holes(Seg,&AbsHisto[i],_);
    clear_obj(Seg);
  }
  clear_obj(RegionGray);
  sum = 0;
  for (i=0; i<256; i++)
    sum += AbsHisto[i];
  for (i=0; i<256; i++)
    RelHisto[i] = (double)AbsHisto[i]/sum;
}

Complexity
If F is the area
√ √ of the input region and N the mean number of connected components the runtime complexity is
O(255(F + F N )).
Result
The operator shape_histo_all returns the value H_MSG_TRUE if an image with the defined gray val-
ues is entered. The behavior in case of empty input (no input images) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
shape_histo_all is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, threshold, gen_region_histo
Alternatives
shape_histo_point


See also
connection, convexity, compactness, connect_and_holes, entropy_gray, gray_histo,
set_paint, count_obj
Module
Foundation

T_shape_histo_point ( const Hobject Region, const Hobject Image,
                      const Htuple Feature, const Htuple Row, const Htuple Column,
                      Htuple *AbsoluteHisto, Htuple *RelativeHisto )

Determine a histogram of features along all threshold values.


Like shape_histo_all the operator shape_histo_point carries out 255 threshold value operations
within Region with the gray values of Image. Contrary to shape_histo_all only the segmented region
containing the pixel (Row, Column) is taken into account here. The entry i in the histogram then corresponds to
the number of holes of this region segmented with the threshold i (Feature = ’holes’) or the feature value of the
region (Feature = ’convexity’, ’compactness’, ’anisometry’), respectively.
The histogram can also be displayed directly as a graphic via the operators set_paint
(WindowHandle,’component_histogram’) and disp_image.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region in which the features are to be examined.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray value image.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Feature to be examined.
Default Value : "convexity"
List of values : Feature ∈ {"convexity", "compactness", "anisometry", "holes"}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . Hlong
Row of the pixel which the region must contain.
Default Value : 256
Suggested values : Row ∈ {10, 50, 100, 200, 300, 400}
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . Hlong
Column of the pixel which the region must contain.
Default Value : 256
Suggested values : Column ∈ {10, 50, 100, 200, 300, 400}
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . double * / Hlong *
Absolute distribution of the feature.
. RelativeHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . double *
Relative distribution of the feature.
Result
The operator shape_histo_point returns the value H_MSG_TRUE if an image with defined gray values is
entered. The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
shape_histo_point is reentrant and processed without parallelization.
Possible Predecessors
get_mbutton, area_center
Possible Successors
histo_to_thresh, threshold, gen_region_histo
Alternatives
shape_histo_all
See also
connection, connect_and_holes, convexity, compactness, set_paint


Module
Foundation

5.7 Format
change_format ( const Hobject Image, Hobject *ImagePart, Hlong Width,
Hlong Height )

T_change_format ( const Hobject Image, Hobject *ImagePart,
const Htuple Width, const Htuple Height )

Change image size.


The operator change_format increases or decreases the size of the input images to the indicated height or
width, respectively. If the image is reduced, parts are cut off at the “right” or “lower” edge of the image, respec-
tively. If the image is enlarged, the additional areas are set to 0. The definition domain of the new image is equal
to the domain of the input image, clipped to the size of the new image. No zooming is carried out.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. ImagePart (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2
/ int4 / real / complex / vector_field
Image with new format.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of new image.
Default Value : 512
Suggested values : Width ∈ {32, 64, 128, 256, 512, 768, 1024}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of new image.
Default Value : 512
Suggested values : Height ∈ {32, 64, 128, 256, 512, 525, 1024}
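Example

A minimal usage sketch (the file name "monkey" and the target size 256×256 are chosen only for illustration):

/* Reduce the image format to the upper left 256x256 pixels */
Hobject Image, ImagePart;

read_image(&Image,"monkey");
change_format(Image,&ImagePart,256,256);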
Parallelization Information
change_format is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
crop_part
See also
zoom_image_size, zoom_image_factor
Module
Foundation

crop_domain ( const Hobject Image, Hobject *ImagePart )


T_crop_domain ( const Hobject Image, Hobject *ImagePart )

Cut out of defined gray values.


The operator crop_domain cuts a rectangular area from the input images. This rectangle is the smallest
surrounding rectangle of the domain of the input image. The new definition domain includes all pixels of the
new image. The new image matrix has the size of the rectangle.


Parameter

. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. ImagePart (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Image area.
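Example

A minimal sketch (the file name "monkey" and the circle parameters are chosen only for illustration): restrict the domain to a circle and cut out its surrounding rectangle.

Hobject Image, Circle, Mask, ImagePart;

read_image(&Image,"monkey");
gen_circle(&Circle,200.0,200.0,150.0);
reduce_domain(Image,Circle,&Mask);
/* ImagePart has the size of the smallest rectangle enclosing the circle */
crop_domain(Mask,&ImagePart);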
Parallelization Information
crop_domain is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
crop_part, crop_rectangle1, change_format, reduce_domain
See also
zoom_image_size, zoom_image_factor
Module
Foundation

crop_domain_rel ( const Hobject Image, Hobject *ImagePart, Hlong Top,
Hlong Left, Hlong Bottom, Hlong Right )

T_crop_domain_rel ( const Hobject Image, Hobject *ImagePart,
const Htuple Top, const Htuple Left, const Htuple Bottom,
const Htuple Right )

Cut out an image area relative to the domain.


crop_domain_rel cuts a rectangular area from the input images. The area is determined by the surrounding
rectangle of the domain of the input image. This rectangle can be modified at the top (Top), at the left (Left),
at the bottom (Bottom), and at the right (Right) via the corresponding control parameters. Positive values
result in a smaller rectangle, negative values in a larger one. If all parameters are set to zero, the region remains
unchanged.
Parameter

. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. ImagePart (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Image area.
. Top (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of rows clipped at the top.
Default Value : -1
Suggested values : Top ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Left (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns clipped at the left.
Default Value : -1
Suggested values : Left ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Bottom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of rows clipped at the bottom.
Default Value : -1
Suggested values : Bottom ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
. Right (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns clipped at the right.
Default Value : -1
Suggested values : Right ∈ {-20, -10, -5, -3, -2, -1, 0, 1, 2, 3, 4, 5, 10, 20}
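Example

A minimal sketch (the file name "monkey" and the threshold limits are chosen only for illustration): segment bright pixels and cut out their surrounding rectangle, enlarged by 10 pixels on each side.

Hobject Image, Bright, ImageReduced, ImagePart;

read_image(&Image,"monkey");
threshold(Image,&Bright,128.0,255.0);
reduce_domain(Image,Bright,&ImageReduced);
/* negative values enlarge the rectangle */
crop_domain_rel(ImageReduced,&ImagePart,-10,-10,-10,-10);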


Result
crop_domain_rel returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
crop_domain_rel is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
reduce_domain, threshold, connection, regiongrowing, pouring
Alternatives
crop_domain, crop_rectangle1
See also
smallest_rectangle1, intersection, gen_rectangle1, clip_region
Module
Foundation

crop_part ( const Hobject Image, Hobject *ImagePart, Hlong Row,
Hlong Column, Hlong Width, Hlong Height )

T_crop_part ( const Hobject Image, Hobject *ImagePart, const Htuple Row,
const Htuple Column, const Htuple Width, const Htuple Height )

Cut out a rectangular image area.


The operator crop_part cuts a rectangular area from the input images. The area is indicated by a rectangle
(upper left corner and size). The area must lie within the image. The definition domain includes all pixels of the
new image. The new image matrix has the size of the rectangle.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. ImagePart (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Image area.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Line index of upper left corner of image area.
Default Value : 100
Suggested values : Row ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Row ≤ 1024
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of upper left corner of image area.
Default Value : 100
Suggested values : Column ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Column ≤ 1024
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.x ; Hlong
Width of new image.
Default Value : 128
Suggested values : Width ∈ {32, 64, 128, 256, 512, 768}
Typical range of values : 0 ≤ Width ≤ 1024
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.extent.y ; Hlong
Height of new image.
Default Value : 128
Suggested values : Height ∈ {32, 64, 128, 256, 512, 525}
Typical range of values : 0 ≤ Height ≤ 1024
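Example

A minimal sketch (the file name "monkey" and the rectangle values are chosen only for illustration): cut a 128x128 part whose upper left corner lies at row 100, column 100.

Hobject Image, ImagePart;

read_image(&Image,"monkey");
crop_part(Image,&ImagePart,100,100,128,128);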
Parallelization Information
crop_part is reentrant and automatically parallelized (on tuple level).


Possible Successors
disp_image
Alternatives
crop_rectangle1, crop_domain, change_format, reduce_domain
See also
zoom_image_size, zoom_image_factor
Module
Foundation

crop_rectangle1 ( const Hobject Image, Hobject *ImagePart, Hlong Row1,
Hlong Column1, Hlong Row2, Hlong Column2 )

T_crop_rectangle1 ( const Hobject Image, Hobject *ImagePart,
const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2 )

Cut out a rectangular image area.


The operator crop_rectangle1 cuts a rectangular area from the input images. The area is indicated by a
rectangle (upper left and lower right corner). The area must lie within the image. The definition domain includes
all pixels of the new image. The new image matrix has the size of the rectangle.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. ImagePart (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Image area.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Line index of upper left corner of image area.
Default Value : 100
Suggested values : Row1 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Row1 ≤ 1024
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column index of upper left corner of image area.
Default Value : 100
Suggested values : Column1 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Column1 ≤ 1024
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong
Line index of lower right corner of image area.
Default Value : 200
Suggested values : Row2 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Row2 ≤ 1024
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x ; Hlong
Column index of lower right corner of image area.
Default Value : 200
Suggested values : Column2 ∈ {10, 20, 50, 100, 200, 300, 500}
Typical range of values : 0 ≤ Column2 ≤ 1024
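Example

A minimal sketch (the file name "monkey" and the corner coordinates are chosen only for illustration): cut the area between the upper left corner (100,100) and the lower right corner (200,200).

Hobject Image, ImagePart;

read_image(&Image,"monkey");
crop_rectangle1(Image,&ImagePart,100,100,200,200);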
Parallelization Information
crop_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
crop_part, crop_domain, change_format, reduce_domain
See also
zoom_image_size, zoom_image_factor


Module
Foundation

tile_channels ( const Hobject Image, Hobject *TiledImage,
Hlong NumColumns, const char *TileOrder )

T_tile_channels ( const Hobject Image, Hobject *TiledImage,
const Htuple NumColumns, const Htuple TileOrder )

Tile multiple images into a large image.


tile_channels tiles an image consisting of multiple channels into a large single-channel image. The input
image Image contains Num images of the same size, which are stored in the individual channels. The out-
put image TiledImage contains a single channel image, where the Num input channels have been tiled into
NumColumns columns. In particular, this means that tile_channels cannot tile color images. For this pur-
pose, tile_images can be used. The parameter TileOrder determines the order in which the images are
copied into the output in the cases in which this is not already determined by NumColumns (i.e., if NumColumns
!= 1 and NumColumns != Num). If TileOrder = ’horizontal’ the images are copied in the horizontal direction,
i.e., the second channel of Image will be to the right of the first channel. If TileOrder = ’vertical’ the images
are copied in the vertical direction, i.e., the second channel of Image will be below the first channel. The domain
of TiledImage is obtained by copying the domain of Image to the corresponding locations in the output im-
age. If Num is not a multiple of NumColumns the output image will have undefined gray values in the lower right
corner of the image. The output domain will reflect this.
Parameter
. Image (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. TiledImage (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic /
int1 / int2 / uint2 / int4 / real
Tiled output image.
. NumColumns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns to use for the output image.
Default Value : 1
Suggested values : NumColumns ∈ {1, 2, 3, 4, 5, 6, 7}
Restriction : NumColumns ≥ 1
. TileOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Order of the input images in the output image.
Default Value : "vertical"
List of values : TileOrder ∈ {"horizontal", "vertical"}
Example (Syntax: HDevelop)

/* Grab 5 single-channel images and stack them vertically. */


gen_rectangle1 (Image, 0, 0, Height-1, Width-1)
for I := 1 to 5 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
append_channel (Image, ImageGrabbed, Image)
endfor
tile_channels (Image, TiledImage, 1, ’vertical’)

Result
tile_channels returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
tile_channels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
append_channel


Alternatives
tile_images, tile_images_offset
See also
change_format, crop_part, crop_rectangle1
Module
Foundation

tile_images ( const Hobject Images, Hobject *TiledImage,
Hlong NumColumns, const char *TileOrder )

T_tile_images ( const Hobject Images, Hobject *TiledImage,
const Htuple NumColumns, const Htuple TileOrder )

Tile multiple image objects into a large image.


tile_images tiles multiple input image objects, which must contain the same number of channels, into a large
image. The input image object Images contains Num images, which may be of different size. The output image
TiledImage contains as many channels as the input images. In the output image the Num input images have been
tiled into NumColumns columns. Each tile has the same size, which is determined by the maximum width and
height of all input images. If an input image is smaller than the tile size it is copied to the center of the respective
tile. The parameter TileOrder determines the order in which the images are copied into the output in the cases
in which this is not already determined by NumColumns (i.e., if NumColumns != 1 and NumColumns != Num).
If TileOrder = ’horizontal’ the images are copied in the horizontal direction, i.e., the second image of Images
will be to the right of the first image. If TileOrder = ’vertical’ the images are copied in the vertical direction,
i.e., the second image of Images will be below the first image. The domain of TiledImage is obtained by
copying the domains of Images to the corresponding locations in the output image. If Num is not a multiple of
NumColumns the output image will have undefined gray values in the lower right corner of the image. The output
domain will reflect this.
Parameter
. Images (input_object) . . . . . . (multichannel-)image-array ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input images.
. TiledImage (output_object) . . . . . . (multichannel-)image ; Hobject * : byte / direction / cyclic / int1 /
int2 / uint2 / int4 / real
Tiled output image.
. NumColumns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns to use for the output image.
Default Value : 1
Suggested values : NumColumns ∈ {1, 2, 3, 4, 5, 6, 7}
Restriction : NumColumns ≥ 1
. TileOrder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Order of the input images in the output image.
Default Value : "vertical"
List of values : TileOrder ∈ {"horizontal", "vertical"}
Example (Syntax: HDevelop)

/* Grab 5 (multi-channel) images and stack them vertically. */


gen_empty_obj (Images)
for I := 1 to 5 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
concat_obj (Images, ImageGrabbed, Images)
endfor
tile_images (Images, TiledImage, 1, ’vertical’)

Result
tile_images returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If the
input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
tile_images is reentrant and automatically parallelized (on channel level).
Possible Predecessors
append_channel
Alternatives
tile_channels, tile_images_offset
See also
change_format, crop_part, crop_rectangle1
Module
Foundation

tile_images_offset ( const Hobject Images, Hobject *TiledImage,
Hlong OffsetRow, Hlong OffsetCol, Hlong Row1, Hlong Col1, Hlong Row2,
Hlong Col2, Hlong Width, Hlong Height )

T_tile_images_offset ( const Hobject Images, Hobject *TiledImage,
const Htuple OffsetRow, const Htuple OffsetCol, const Htuple Row1,
const Htuple Col1, const Htuple Row2, const Htuple Col2,
const Htuple Width, const Htuple Height )

Tile multiple image objects into a large image with explicit positioning information.
tile_images_offset tiles multiple input image objects, which must contain the same number of channels,
into a large image. The input image object Images contains Num images, which may be of different size. The
output image TiledImage contains as many channels as the input images. The size of the output image is
determined by the parameters Width and Height. The position of the upper left corner of the input images in
the output images is determined by the parameters OffsetRow and OffsetCol. Both parameters must contain
exactly Num values. Optionally, each input image can be cropped to an arbitrary rectangle that is smaller than the
input image. To do so, the parameters Row1, Col1, Row2, and Col2 must be set accordingly. If any of these four
parameters is set to -1, the corresponding input image is not cropped. In any case, all four parameters must contain
Num values. If the input images are cropped the position parameters OffsetRow and OffsetCol refer to the
upper left corner of the cropped image. If the input images overlap each other in the output image (while taking
into account their respective domains), the image with the higher index in Images overwrites the image data of
the image with the lower index. The domain of TiledImage is obtained by copying the domains of Images to
the corresponding locations in the output image.
Attention
If the input images all have the same size and tile the output image exactly, the operator tile_images usually
will be slightly faster.
Parameter

. Images (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real
Input images.
. TiledImage (output_object) . . . . . . (multichannel-)image ; Hobject * : byte / direction / cyclic / int1 /
int2 / uint2 / int4 / real
Tiled output image.
. OffsetRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row coordinate of the upper left corner of the input images in the output image.
Default Value : 0
Suggested values : OffsetRow ∈ {0, 50, 100, 150, 200, 250}
. OffsetCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column coordinate of the upper left corner of the input images in the output image.
Default Value : 0
Suggested values : OffsetCol ∈ {0, 50, 100, 150, 200, 250}


. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong


Row coordinate of the upper left corner of the copied part of the respective input image.
Default Value : -1
Suggested values : Row1 ∈ {-1, 0, 10, 20, 50, 100, 200, 300, 500}
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong
Column coordinate of the upper left corner of the copied part of the respective input image.
Default Value : -1
Suggested values : Col1 ∈ {-1, 0, 10, 20, 50, 100, 200, 300, 500}
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong
Row coordinate of the lower right corner of the copied part of the respective input image.
Default Value : -1
Suggested values : Row2 ∈ {-1, 0, 10, 20, 50, 100, 200, 300, 500}
. Col2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong
Column coordinate of the lower right corner of the copied part of the respective input image.
Default Value : -1
Suggested values : Col2 ∈ {-1, 0, 10, 20, 50, 100, 200, 300, 500}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; (Htuple .) Hlong
Width of the output image.
Default Value : 512
Suggested values : Width ∈ {32, 64, 128, 256, 512, 768, 1024, 2048, 4096}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; (Htuple .) Hlong
Height of the output image.
Default Value : 512
Suggested values : Height ∈ {32, 64, 128, 256, 512, 525, 1024, 2048, 4096}
Example (Syntax: HDevelop)

/* Example 1 */
/* Grab 2 (multi-channel) NTSC images, crop the bottom 5 lines off */
/* of each image, the right 5 columns off of the first image, and */
/* the left five lines off of the second image, and put the cropped */
/* images side-by-side. */
gen_empty_obj (Images)
for I := 1 to 2 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
concat_obj (Images, ImageGrabbed, Images)
endfor
tile_images_offset (Images, TiledImage, [0,635], [0,0], [0,0],
[0,5], [474,474], [634,639])

/* Example 2 */
/* Enlarge image by 15 rows and columns on all sides */
EnlargeColsBy := 15
EnlargeRowsBy := 15
get_image_pointer1 (Image, Pointer, Type, WidthImage, HeightImage)
tile_images_offset (Image, EnlargedImage, EnlargeRowsBy, EnlargeColsBy,
-1, -1, -1, -1, WidthImage + EnlargeColsBy*2,
HeightImage + EnlargeRowsBy*2)

Result
tile_images_offset returns H_MSG_TRUE if all parameters are correct and no error occurs during execu-
tion. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
tile_images_offset is reentrant and automatically parallelized (on channel level).
Possible Predecessors
append_channel


Alternatives
tile_channels, tile_images
See also
change_format, crop_part, crop_rectangle1
Module
Foundation

5.8 Manipulation

overpaint_gray ( const Hobject ImageDestination,
const Hobject ImageSource )

T_overpaint_gray ( const Hobject ImageDestination,
const Hobject ImageSource )

Overpaint the gray values of an image.


overpaint_gray copies the gray values of the image given in ImageSource into the image
in ImageDestination. Only the gray values of the domain of ImageSource are copied (see
reduce_domain).
If you do not want to modify ImageDestination itself, you can use the operator paint_gray, which returns
the result in a newly created image.
Attention
overpaint_gray modifies the content of an already existing image (ImageDestination). Besides, even
other image objects may be affected: For example, if you created ImageDestination via copy_obj from
another image object (or vice versa), overpaint_gray will also modify the image matrix of this other im-
age object. Therefore, overpaint_gray should only be used to overpaint newly created image objects.
Typical operators for this task are, e.g., gen_image_const (creates a new image with a specified size),
gen_image_proto (creates an image with the size of a specified prototype image) or copy_image (cre-
ates an image as the copy of a specified image).
Parameter
. ImageDestination (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Input image to be painted over.
. ImageSource (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image containing the desired gray values.
Example

/* Copy a circular part of the image ’monkey’ into a new image (New1): */

read_image(&Image,"monkey");
gen_circle(&Circle,200.0,200.0,150.0);
reduce_domain(Image,Circle,&Mask);
/* New image with black (0) background */
gen_image_proto(Image,&New1,0.0);
/* Copy a part of the image ’monkey’ into New1 */
overpaint_gray(New1,Mask);

Result
overpaint_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
overpaint_gray is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto


Alternatives
get_image_pointer1, paint_gray, set_grayval, copy_image
See also
paint_region, overpaint_region
Module
Foundation

overpaint_region ( const Hobject Image, const Hobject Region,
double Grayval, const char *Type )

T_overpaint_region ( const Hobject Image, const Hobject Region,
const Htuple Grayval, const Htuple Type )

Overpaint regions in an image.


overpaint_region paints the regions given in Region with a constant gray value into the image given in
Image. These gray values can either be specified for each channel once, valid for all regions, or for each region
separately. To define the latter, group the channel gray values g of each region and concatenate them to a tuple
according to the regions’ order, e.g., for a three channel image:

[g(channel1,region1), g(channel2,region1), g(channel3,region1), g(channel1,region2), . . .]

The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
If you do not want to modify Image itself, you can use the operator paint_region, which returns the result
in a newly created image.
Attention
overpaint_region modifies the content of an already existing image (Image). Besides, even other image
objects may be affected: For example, if you created Image via copy_obj from another image object (or
vice versa), overpaint_region will also modify the image matrix of this other image object. Therefore,
overpaint_region should only be used to overpaint newly created image objects. Typical operators for this
task are, e.g., gen_image_const (creates a new image with a specified size), gen_image_proto (creates
an image with the size of a specified prototype image) or copy_image (creates an image as the copy of a
specified image).
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the regions are to be painted.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be painted into the input image.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Paint regions filled or as boundaries.
Default Value : "fill"
List of values : Type ∈ {"fill", "margin"}
Example

/* Paint a rectangle into a new image (New1) */

gen_rectangle1(&Rectangle,100.0,100.0,300.0,300.0);
/* generate a black image */
gen_image_const(&New1,"byte",768,576);
/* paint a white rectangle */
overpaint_region(New1,Rectangle,255.0,"fill");

Result
overpaint_region returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
overpaint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, paint_region, paint_xld
See also
reduce_domain, set_draw, paint_gray, overpaint_gray, gen_image_const
Module
Foundation

paint_gray ( const Hobject ImageSource, const Hobject ImageDestination,
Hobject *MixedImage )

T_paint_gray ( const Hobject ImageSource,
const Hobject ImageDestination, Hobject *MixedImage )

Paint the gray values of an image into another image.


paint_gray paints the gray values of the image given in ImageSource into the image in
ImageDestination and returns the resulting image in MixedImage. Only the gray values of the domain
of ImageSource are copied (see reduce_domain).
As an alternative to paint_gray, you can use the operator overpaint_gray, which directly paints the gray
values into ImageDestination.
Parameter
. ImageSource (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image containing the desired gray values.
. ImageDestination (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Input image to be painted over.
. MixedImage (output_object) . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Result image.
Example

/* Copy a circular part of the image ’monkey’ into the image ’fabrik’: */

read_image(&Image,"monkey");
gen_circle(&Circle,200.0,200.0,150.0);
reduce_domain(Image,Circle,&Mask);
read_image(&Image2,"fabrik");
/* Copy a part of the image ’monkey’ into ’fabrik’ */
paint_gray(Mask,Image2,&MixedImage);

Result
paint_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
paint_gray is reentrant and processed without parallelization.


Possible Predecessors
read_image, gen_image_const, gen_image_proto
Alternatives
get_image_pointer1, set_grayval, copy_image, overpaint_gray
See also
paint_region, overpaint_region
Module
Foundation

paint_region ( const Hobject Region, const Hobject Image,
Hobject *ImageResult, double Grayval, const char *Type )

T_paint_region ( const Hobject Region, const Hobject Image,
Hobject *ImageResult, const Htuple Grayval, const Htuple Type )

Paint regions into an image.


paint_region paints the regions given in Region with a constant gray value into the image given in Image
and returns the result in ImageResult. These gray values can either be specified for each channel once, valid
for all regions, or for each region separately. To define the latter, group the channel gray values g of each region
and concatenate them to a tuple according to the regions’ order, e.g., for a three channel image:

[g(channel1,region1), g(channel2,region1), g(channel3,region1), g(channel1,region2), . . .]

The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
As an alternative to paint_region, you can use the operator overpaint_region, which directly paints
the regions into Image.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be painted into the input image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the regions are to be painted.
. ImageResult (output_object) . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real / complex
Image containing the result.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Paint regions filled or as boundaries.
Default Value : "fill"
List of values : Type ∈ {"fill", "margin"}
Example

/* Paint a rectangle into the image ’monkey’ */

read_image(&Image,"monkey");
gen_rectangle1(&Rectangle,100.0,100.0,300.0,300.0);
/* paint a white rectangle */
paint_region(Rectangle,Image,&ImageResult,255.0,"fill");


Result
paint_region returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
paint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, overpaint_region, paint_xld
See also
reduce_domain, paint_gray, overpaint_gray, set_draw, gen_image_const
Module
Foundation

paint_xld ( const Hobject XLD, const Hobject Image,
Hobject *ImageResult, double Grayval )

T_paint_xld ( const Hobject XLD, const Hobject Image,
Hobject *ImageResult, const Htuple Grayval )

Paint XLD objects into an image.


paint_xld paints the XLD objects XLD of type contour or polygon with the constant gray values Grayval into
each channel of the background image given in Image and returns the result in ImageResult. Open contours
of XLD objects are closed and their enclosed regions are filled up. The rim of the subpixel XLD objects is painted
onto the background image using anti-aliasing. Note that only objects without crossings or touching segments are
painted correctly.
Grayval contains the gray values for painting the XLD objects. These gray values can either be specified for
each channel once, valid for all XLD objects, or for each XLD object separately. To define the latter, group the
channel gray values g of each XLD object and concatenate them to a tuple according to the order of the XLD
objects, e.g., for a three channel image:

[g(channel1,xld1), g(channel2,xld1), g(channel3,xld1), g(channel1,xld2), . . .]

Parameter

. XLD (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject


XLD objects to be painted into the input image.
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the xld objects are to be painted.
. ImageResult (output_object) . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real / complex
Image containing the result.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Desired gray value of the xld object.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
Example

/* Paint colored xld objects into a gray image */

/* read and copy image to generate a three channel image */


read_image(&Image1,"green-dot");
copy_image(Image1,&Image2);


copy_image(Image1,&Image3);
compose3(Image1,Image2,Image3,&Image);
/* extract subpixel border */
threshold_sub_pix(Image1,&Border,128);
/* select the circle and the arrows */
select_obj(Border,&circle,14);
select_obj(Border,&arrows,16);
concat_obj(circle,arrows,&green_dot);
/* paint a green circle and white arrows,
* therefore define tuple grayval:=[0,255,0,255,255,255].
* (to paint all objects e.g. blue define grayval:=[0,0,255]) */
T_paint_xld(green_dot,Image,&ImageResult,grayval);

Result
paint_xld returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
paint_xld is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, gen_contour_polygon_xld,
threshold_sub_pix
Alternatives
set_grayval, paint_gray, paint_region
See also
gen_image_const
Module
Foundation

set_grayval ( const Hobject Image, Hlong Row, Hlong Column,
double Grayval )

T_set_grayval ( const Hobject Image, const Htuple Row,
const Htuple Column, const Htuple Grayval )

Set single gray values in an image.


set_grayval sets the gray values of the input image Image at the positions (Row,Column) to the values
specified by Grayval. The number of values in Grayval must match the number of points passed to the
operator.
Parameter

. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be modified.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row coordinates of the pixels to be modified.
Default Value : 0
Suggested values : Row ∈ {0, 10, 50, 127, 255, 511}
Typical range of values : 0 ≤ Row
Restriction : (0 ≤ Row) ∧ (Row < height(Image))
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column coordinates of the pixels to be modified.
Default Value : 0
Suggested values : Column ∈ {0, 10, 50, 127, 255, 511}
Typical range of values : 0 ≤ Column
Restriction : (0 ≤ Column) ∧ (Column < width(Image))


. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . grayval(-array) ; (Htuple .) double / Hlong


Gray values to be used.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 10.0, 128.0, 255.0}
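Example

A minimal sketch (the image size and the pixel coordinates are chosen only for illustration): create a black byte image and set a single pixel to white.

Hobject Image;

gen_image_const(&Image,"byte",512,512);
/* set the pixel at row 100, column 200 to 255 */
set_grayval(Image,100,200,255.0);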
Result
set_grayval returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
set_grayval is reentrant and processed without parallelization.
Possible Predecessors
read_image, get_image_pointer1, gen_image_proto, gen_image1
Alternatives
get_image_pointer1, paint_gray, paint_region
See also
get_grayval, gen_image_const, gen_image1, gen_image_proto
Module
Foundation

5.9 Type-Conversion

complex_to_real ( const Hobject ImageComplex, Hobject *ImageReal,
Hobject *ImageImaginary )

T_complex_to_real ( const Hobject ImageComplex, Hobject *ImageReal,
Hobject *ImageImaginary )

Convert a complex image into two real images.


complex_to_real converts a complex image ImageComplex into two real images ImageReal and
ImageImaginary, which contain the real and imaginary part of the complex image.
Parameter
. ImageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Complex image.
. ImageReal (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Real part.
. ImageImaginary (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Imaginary part.
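Example

A minimal sketch (the file name "monkey" is chosen only for illustration; fft_image is used here merely as one possible way to obtain a complex image): split a Fourier-transformed image into its real and imaginary parts.

Hobject Image, ImageFFT, ImageReal, ImageImaginary;

read_image(&Image,"monkey");
/* obtain a complex image, e.g., by a Fourier transform */
fft_image(Image,&ImageFFT);
complex_to_real(ImageFFT,&ImageReal,&ImageImaginary);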
Parallelization Information
complex_to_real is reentrant and automatically parallelized (on tuple level).
See also
real_to_complex
Module
Foundation

convert_image_type ( const Hobject Image, Hobject *ImageConverted,
const char *NewType )

T_convert_image_type ( const Hobject Image, Hobject *ImageConverted,
const Htuple NewType )

Convert the type of an image.


convert_image_type converts images of an arbitrary type into an arbitrary new image type. If the conversion
is done from a larger to a smaller gray value range (e.g., from ’int4’ to ’byte’), too large or too small values are
simply “clipped.” It is therefore advisable to adapt the range of gray values by calling scale_image before
calling this operator. For images of type complex, only the real part is converted. The imaginary part is ignored.
This facilitates an efficient conversion of images that have been transformed back from the frequency domain.
Such images always have an imaginary part of 0.
Attention
If the source and destination image type are identical, no new image matrix is allocated.
Parameter

. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex
Image whose image type is to be changed.
. ImageConverted (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real / complex
Converted image.
. NewType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired image type (i.e., type of the gray values).
Default Value : "byte"
List of values : NewType ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic", "complex"}
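Example

A minimal sketch (the file name "monkey" is chosen only for illustration): convert a byte image to ’real’ for subsequent floating-point processing.

Hobject Image, ImageConverted;

read_image(&Image,"monkey");
convert_image_type(Image,&ImageConverted,"real");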
Result
convert_image_type returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
convert_image_type is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
scale_image
See also
scale_image, abs_image
Module
Foundation

real_to_complex ( const Hobject ImageReal,
const Hobject ImageImaginary, Hobject *ImageComplex )

T_real_to_complex ( const Hobject ImageReal,
const Hobject ImageImaginary, Hobject *ImageComplex )

Convert two real images into a complex image.


real_to_complex converts two real images ImageReal and ImageImaginary, which contain the real
and imaginary part of a complex image, into a complex image ImageComplex.
Parameter

. ImageReal (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real


Real part.
. ImageImaginary (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real
Imaginary part.
. ImageComplex (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : complex
Complex image.
Parallelization Information
real_to_complex is reentrant and automatically parallelized (on tuple level).
See also
complex_to_real
Module
Foundation


real_to_vector_field ( const Hobject Row, const Hobject Col,
Hobject *VectorField )

T_real_to_vector_field ( const Hobject Row, const Hobject Col,
Hobject *VectorField )

Convert two real-valued images into a vector field image.


real_to_vector_field converts two real-valued images Row and Col into a vector field image
VectorField. The input images contain the vector components in the row and column direction, respectively.
Parameter

. Row (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real


Vector component in the row direction.
. Col (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : real
Vector component in the column direction.
. VectorField (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : vector_field
Displacement vector field.
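Example

A minimal sketch (the image size is chosen only for illustration): build a vector field from two real-valued component images and split it again.

Hobject RowComp, ColComp, VectorField, RowOut, ColOut;

/* two real-valued images holding the row and column components */
gen_image_const(&RowComp,"real",512,512);
gen_image_const(&ColComp,"real",512,512);
real_to_vector_field(RowComp,ColComp,&VectorField);
/* split the vector field back into its two components */
vector_field_to_real(VectorField,&RowOut,&ColOut);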
Parallelization Information
real_to_vector_field is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
vector_field_to_real
Module
Foundation

vector_field_to_real ( const Hobject VectorField, Hobject *Row,
Hobject *Col )

T_vector_field_to_real ( const Hobject VectorField, Hobject *Row,
Hobject *Col )

Convert a vector field image into two real-valued images.


vector_field_to_real converts the vector field image VectorField into two real-valued images Row
and Col. The output images contain the vector components in the row and column direction, respectively.
Parameter

. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : vector_field


Vector field.
. Row (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Vector component in the row direction.
. Col (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Vector component in the column direction.
Parallelization Information
vector_field_to_real is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optical_flow_mg
See also
optical_flow_mg
Module
Foundation



Chapter 6

Lines

6.1 Access
T_approx_chain ( const Htuple Row, const Htuple Column,
const Htuple MinWidthCoord, const Htuple MaxWidthCoord,
const Htuple ThreshStart, const Htuple ThreshEnd,
const Htuple ThreshStep, const Htuple MinWidthSmooth,
const Htuple MaxWidthSmooth, const Htuple MinWidthCurve,
const Htuple MaxWidthCurve, const Htuple Weight1,
const Htuple Weight2, const Htuple Weight3, Htuple *ArcCenterRow,
Htuple *ArcCenterCol, Htuple *ArcAngle, Htuple *ArcBeginRow,
Htuple *ArcBeginCol, Htuple *LineBeginRow, Htuple *LineBeginCol,
Htuple *LineEndRow, Htuple *LineEndCol, Htuple *Order )

Approximate a contour by arcs and lines.


The coordinates of a curve are approximated by a sequence of lines and arcs. The procedure tries values from
a user-definable range for certain parameters. The limits of these ranges are explicitly stated in the parameter
list of the function (MinWidthCoord ... MaxWidthCoord, ThreshStart ... ThreshEnd, MinWidthSmooth ...
MaxWidthSmooth, MinWidthCurve ... MaxWidthCurve). Additionally, the step width for the parameter range of
the threshold value for pointed corners has to be indicated (ThreshStep). By narrowing the covered ranges the
runtime of the calculation can be shortened, but the result may deteriorate.
The parameters Weight1, Weight2 and Weight3 indicate whether the desired weighting is placed more on precision
of the approximation, on obtaining as many large segments as possible, or on obtaining as few small segments as
possible. Thus, (Weight1, Weight2, Weight3) = (1,0,0) creates a very precise approximation, whereas (0,1,1) creates
an approximation with as few and as large segments as possible.
The result of the procedure is returned separately as arcs and lines. If one is interested in the sequence of the
segments, the individual resulting elements can be read successively from the resulting tuples; the sequence can be
taken from the return parameter Order (0: next element is the next line segment, 1: next element is the next arc segment).
Attention
Contours which can possibly consist of only one segment should also be examined with a threshold maximum
(ThreshEnd) > 1.0, because otherwise at least one “corner point” is determined in any case.
Parameter

. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . Hlong


Row of the contour.
Default Value : 32
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . Hlong
Column of the contour.
Default Value : 32


. MinWidthCoord (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double


Minimum width of Gauss operator for coordinate smoothing (> 0.4).
Default Value : 0.5
Suggested values : MinWidthCoord ∈ {0.5, 0.7, 1.0, 1.2, 1.5, 1.7}
Typical range of values : 0.4 ≤ MinWidthCoord ≤ 3.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. MaxWidthCoord (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum width of Gauss operator for coordinate smoothing (> 0.4).
Default Value : 2.4
Suggested values : MaxWidthCoord ∈ {0.5, 0.7, 1.0, 1.2, 1.5, 1.7}
Typical range of values : 0.4 ≤ MaxWidthCoord ≤ 3.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. ThreshStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum threshold value of the curvature for accepting a corner (relative to the largest curvature present).
Default Value : 0.3
Suggested values : ThreshStart ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8}
Typical range of values : 0.1 ≤ ThreshStart ≤ 0.9 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. ThreshEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum threshold value of the curvature for accepting a corner (relative to the largest curvature present).
Default Value : 0.9
Suggested values : ThreshEnd ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8}
Typical range of values : 0.1 ≤ ThreshEnd ≤ 0.9 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. ThreshStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Step width for threshold increase.
Default Value : 0.2
Suggested values : ThreshStep ∈ {0.3, 0.4, 0.5}
Typical range of values : 0.1 ≤ ThreshStep ≤ 0.9 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. MinWidthSmooth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum width of Gauss operator for smoothing the curvature function (> 0.4).
Default Value : 0.5
Suggested values : MinWidthSmooth ∈ {0.5, 0.7, 1.0, 1.2, 1.5, 1.7}
Typical range of values : 0.4 ≤ MinWidthSmooth ≤ 3.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. MaxWidthSmooth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum width of Gauss operator for smoothing the curvature function.
Default Value : 2.4
Suggested values : MaxWidthSmooth ∈ {0.5, 0.7, 1.0, 1.2, 1.5, 1.7}
Typical range of values : 0.4 ≤ MaxWidthSmooth ≤ 3.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. MinWidthCurve (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum width of curve area for curvature determination (> 0.4).
Default Value : 2
Suggested values : MinWidthCurve ∈ {2, 5, 7}
Typical range of values : 1 ≤ MinWidthCurve ≤ 12 (lin)
Minimum Increment : 1
Recommended Increment : 2


. MaxWidthCurve (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Maximum width of curve area for curvature determination.
Default Value : 12
Suggested values : MaxWidthCurve ∈ {2, 5, 7}
Typical range of values : 1 ≤ MaxWidthCurve ≤ 20 (lin)
Minimum Increment : 1
Recommended Increment : 2
. Weight1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Weighting factor for approximation precision.
Default Value : 1.0
Suggested values : Weight1 ∈ {0.0, 0.5, 1.0}
Typical range of values : 0.0 ≤ Weight1 ≤ 1.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 0.5
. Weight2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Weighting factor for large segments.
Default Value : 1.0
Suggested values : Weight2 ∈ {0.0, 0.5, 1.0}
Typical range of values : 0.0 ≤ Weight2 ≤ 1.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 0.5
. Weight3 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Weighting factor for small segments.
Default Value : 1.0
Suggested values : Weight3 ∈ {0.0, 0.5, 1.0}
Typical range of values : 0.0 ≤ Weight3 ≤ 1.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 0.5
. ArcCenterRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.y-array ; Htuple . Hlong *
Row of the center of an arc.
. ArcCenterCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.x-array ; Htuple . Hlong *
Column of the center of an arc.
. ArcAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.angle.rad-array ; Htuple . double *
Angle of an arc.
. ArcBeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.y-array ; Htuple . Hlong *
Row of the starting point of an arc.
. ArcBeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.x-array ; Htuple . Hlong *
Column of the starting point of an arc.
. LineBeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row of the starting point of a line segment.
. LineBeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column of the starting point of a line segment.
. LineEndRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row of the ending point of a line segment.
. LineEndCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column of the ending point of a line segment.
. Order (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Sequence of line (value 0) and arc segments (value 1).
Example

/* read edge image */


read_image(&Image,"fig1_kan");
/* construct edge region */
hysteresis_threshold(Image,&RK1,64,255,40,1);
connection(RK1,&Rand);
/* fetch chain code */
T_get_region_contour(Rand,&Rows,&Columns);


firstline = get_i(Rows,0);
firstcol = get_i(Columns,0);
/* approximation with lines and circular arcs */
set_d(t1,0.4,0);
set_d(t2,2.4,0);

set_d(t3,0.3,0);
set_d(t4,0.9,0);

set_d(t5,0.2,0);

set_d(t6,0.4,0);
set_d(t7,2.4,0);

set_i(t8,2,0);
set_i(t9,12,0);

set_d(t10,1.0,0);
set_d(t11,1.0,0);
set_d(t12,1.0,0);

T_approx_chain(Rows,Columns,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,
&Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0) ;
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);

Result
The operator approx_chain returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
approx_chain is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain_simple
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation

T_approx_chain_simple ( const Htuple Row, const Htuple Column,
Htuple *ArcCenterRow, Htuple *ArcCenterCol, Htuple *ArcAngle,
Htuple *ArcBeginRow, Htuple *ArcBeginCol, Htuple *LineBeginRow,
Htuple *LineBeginCol, Htuple *LineEndRow, Htuple *LineEndCol,
Htuple *Order )

Approximate a contour by arcs and lines.


The contour of a curve is approximated by a sequence of lines and arcs.


The result of the procedure is returned separately as arcs and lines. If the sequence of the segments is of interest, the individual result elements can be read successively from the result tuples. The sequence can be taken from the output parameter Order (0: the next element is the next line segment, 1: the next element is the next arc segment). The operator approx_chain_simple behaves like approx_chain, except that the missing parameters are set internally as follows: MinWidthCoord =
1.0, MaxWidthCoord = 3.0, ThreshStart = 0.5, ThreshEnd = 0.9, ThreshStep = 0.3, MinWidthSmooth = 1.0,
MaxWidthSmooth = 3.0, MinWidthCurve = 2, MaxWidthCurve = 9, Weight1 = 1.0, Weight2 = 1.0, Weight3 = 1.0.
Parameter

. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . Hlong
Row of the contour.
Default Value : 32
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . Hlong
Column of the contour.
Default Value : 32
. ArcCenterRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.y-array ; Htuple . Hlong *
Row of the center of an arc.
. ArcCenterCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.center.x-array ; Htuple . Hlong *
Column of the center of an arc.
. ArcAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.angle.rad-array ; Htuple . double *
Angle of an arc.
. ArcBeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.y-array ; Htuple . Hlong *
Row of the starting point of an arc.
. ArcBeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . arc.begin.x-array ; Htuple . Hlong *
Column of the starting point of an arc.
. LineBeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row of the starting point of a line segment.
. LineBeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column of the starting point of a line segment.
. LineEndRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row of the ending point of a line segment.
. LineEndCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column of the ending point of a line segment.
. Order (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Sequence of line (value 0) and arc segments (value 1).
Example

/* read edge image */
read_image(&Image,"fig1_kan");
/* construct edge region */
hysteresis_threshold(Image,&RK1,64,255,40,1);
connection(RK1,&Rand);
/* fetch chain code */
T_get_region_contour(Rand,&Rows,&Columns);
/* first contour point */
firstline = get_i(Rows,0);
firstcol = get_i(Columns,0);
/* approximation with lines and circular arcs */
T_approx_chain_simple(Rows,Columns,
                      &Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0);
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(WindowHandleTuple,Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);

Result
The operator approx_chain_simple returns the value H_MSG_TRUE if the parameters are correct. Other-
wise an exception is raised.
Parallelization Information
approx_chain_simple is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation

6.2 Features
line_orientation ( double RowBegin, double ColBegin, double RowEnd,
double ColEnd, double *Phi )

T_line_orientation ( const Htuple RowBegin, const Htuple ColBegin,
const Htuple RowEnd, const Htuple ColEnd, Htuple *Phi )

Calculate the orientation of lines.


The operator line_orientation returns the orientation (−π/2 < Phi ≤ π/2) of the given lines. If more
than one line is to be treated the line and column indices can be passed as tuples. In this case Phi is, of course,
also a tuple and contains the corresponding orientations.
The procedure is typically applied to model lines in order to select parallel image lines, which were found, e.g., by
detect_edge_segments, via the operator select_lines.
Parameter
. RowBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) double / Hlong
Row coordinates of the starting points of the input lines.
. ColBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) double / Hlong
Column coordinates of the starting points of the input lines.
. RowEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) double / Hlong
Row coordinates of the ending points of the input lines.
. ColEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; (Htuple .) double / Hlong
Column coordinates of the ending points of the input lines.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Orientation of the input lines.
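Example

A minimal sketch (the coordinate values are made up for illustration), using the scalar version of the operator:

/* hypothetical line from (row,col) = (100,50) to (20,250) */
double phi;
line_orientation (100.0, 50.0, 20.0, 250.0, &phi);
/* phi now holds the orientation of the line in the range (-pi/2, pi/2] */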
Result
line_orientation always returns the value H_MSG_TRUE.
Parallelization Information
line_orientation is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line


Alternatives
line_position, select_lines, partition_lines
See also
line_position, select_lines, partition_lines, detect_edge_segments
Module
Foundation

line_position ( Hlong RowBegin, Hlong ColBegin, Hlong RowEnd,
Hlong ColEnd, double *RowCenter, double *ColCenter, double *Length,
double *Phi )

T_line_position ( const Htuple RowBegin, const Htuple ColBegin,
const Htuple RowEnd, const Htuple ColEnd, Htuple *RowCenter,
Htuple *ColCenter, Htuple *Length, Htuple *Phi )

Calculate the center of gravity, length, and orientation of a line.


The operator line_position returns the center (RowCenter, ColCenter), the (Euclidean) length
(Length) and the orientation (−π/2 < Phi ≤ π/2) of the given lines. If more than one line is to be treated the
line and column indices can be passed as tuples. In this case the output parameters, of course, are also tuples.
The routine is applied, for example, to model lines in order to determine search regions for the edge detection (
detect_edge_segments).
Parameter
. RowBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) Hlong
Row coordinates of the starting points of the input lines.
. ColBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) Hlong
Column coordinates of the starting points of the input lines.
. RowEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) Hlong
Row coordinates of the ending points of the input lines.
. ColEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; (Htuple .) Hlong
Column coordinates of the ending points of the input lines.
. RowCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinates of the centers of gravity of the input lines.
. ColCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinates of the centers of gravity of the input lines.
. Length (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Euclidean length of the input lines.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Orientation of the input lines.
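Example

A minimal sketch (the coordinate values are made up for illustration), using the scalar version of the operator:

/* hypothetical line from (row,col) = (100,50) to (20,250) */
double rowc, colc, length, phi;
line_position (100, 50, 20, 250, &rowc, &colc, &length, &phi);
/* (rowc,colc): center of gravity, length: Euclidean length, phi: orientation */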
Result
line_position always returns the value H_MSG_TRUE.
Parallelization Information
line_position is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line
Alternatives
line_orientation, select_lines, partition_lines
See also
line_orientation, select_lines, partition_lines, detect_edge_segments
Module
Foundation


T_partition_lines ( const Htuple RowBeginIn, const Htuple ColBeginIn,
const Htuple RowEndIn, const Htuple ColEndIn, const Htuple Feature,
const Htuple Operation, const Htuple Min, const Htuple Max,
Htuple *RowBeginOut, Htuple *ColBeginOut, Htuple *RowEndOut,
Htuple *ColEndOut, Htuple *FailRowBOut, Htuple *FailColBOut,
Htuple *FailRowEOut, Htuple *FailColEOut )

Partition lines according to various criteria.


The operator partition_lines divides lines into two sets according to various criteria. For each input line the
indicated features (Feature) are calculated. If each (Operation = ’and’) or at least one (Operation = ’or’)
of the calculated features is within the given limits (Min,Max) the line is transferred into the first set (parameters
RowBeginOut to ColEndOut), otherwise into the second set (parameters FailRowBOut to FailColEOut).
Condition: Min_i ≤ Feature_i(Line) ≤ Max_i

Possible values for Feature:


’length’ (Euclidean) length of the line
’row’ Line index of the center
’column’ Column index of the center
’phi’ Orientation of the line (−π/2 < ϕ ≤ π/2)

Attention
If only one feature is used the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
. RowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong
Row coordinates of the starting points of the input lines.
. ColBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong
Column coordinates of the starting points of the input lines.
. RowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong
Row coordinates of the ending points of the input lines.
. ColEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong
Column coordinates of the ending points of the input lines.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for selection.
List of values : Feature ∈ {"length", "row", "column", "phi"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Desired combination of the features.
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Lower limits of the features or ’min’.
Default Value : "min"
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Upper limits of the features or ’max’.
Default Value : "max"
. RowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the lines fulfilling the conditions.
. ColBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the starting points of the lines fulfilling the conditions.
. RowEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the ending points of the lines fulfilling the conditions.
. ColEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the ending points of the lines fulfilling the conditions.
. FailRowBOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the lines not fulfilling the conditions.


. FailColBOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the starting points of the lines not fulfilling the conditions.
. FailRowEOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the ending points of the lines not fulfilling the conditions.
. FailColEOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the ending points of the lines not fulfilling the conditions.
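Example

A minimal sketch (assuming the input tuples RowB, ColB, RowE, and ColE have already been filled, e.g., by a line detector, and that the auxiliary tuples are created with create_tuple as in the other C examples); lines with a length of at least 30 pixels are transferred into the first set, all others into the second set:

Htuple Feature, Op, Min, Max;
Htuple RowBOk, ColBOk, RowEOk, ColEOk;
Htuple RowBFail, ColBFail, RowEFail, ColEFail;
create_tuple(&Feature,1); set_s(Feature,"length",0);
create_tuple(&Op,1);      set_s(Op,"and",0);
create_tuple(&Min,1);     set_d(Min,30.0,0);
create_tuple(&Max,1);     set_s(Max,"max",0);
T_partition_lines(RowB,ColB,RowE,ColE,Feature,Op,Min,Max,
                  &RowBOk,&ColBOk,&RowEOk,&ColEOk,
                  &RowBFail,&ColBFail,&RowEFail,&ColEFail);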
Result
The operator partition_lines returns the value H_MSG_TRUE if the parameter values are correct. Other-
wise an exception is raised.
Parallelization Information
partition_lines is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line
Alternatives
line_orientation, line_position, select_lines, select_lines_longest
See also
select_lines, select_lines_longest, detect_edge_segments, select_shape
Module
Foundation

T_select_lines ( const Htuple RowBeginIn, const Htuple ColBeginIn,
const Htuple RowEndIn, const Htuple ColEndIn, const Htuple Feature,
const Htuple Operation, const Htuple Min, const Htuple Max,
Htuple *RowBeginOut, Htuple *ColBeginOut, Htuple *RowEndOut,
Htuple *ColEndOut )

Select lines according to various criteria.


The operator select_lines chooses lines according to various criteria. For every input line the indicated
features (Feature) are calculated. If each (Operation = ’and’) or at least one (Operation = ’or’) of the
calculated features is within the given limits (Min,Max) the line is transferred into the output.
Condition: Min_i ≤ Feature_i(Line) ≤ Max_i

Possible values for Feature:

’length’ (Euclidean) length of the line
’row’ Line index of the center
’column’ Column index of the center
’phi’ Orientation of the line (−π/2 < ϕ ≤ π/2)

Attention
If only one feature is used the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter

. RowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong
Row coordinates of the starting points of the input lines.
. ColBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong
Column coordinates of the starting points of the input lines.
. RowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong
Row coordinates of the ending points of the input lines.


. ColEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong
Column coordinates of the ending points of the input lines.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for selection.
Default Value : "length"
List of values : Feature ∈ {"length", "row", "column", "phi"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Desired combination of the features.
Default Value : "and"
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Lower limits of the features or ’min’.
Default Value : "min"
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Upper limits of the features or ’max’.
Default Value : "max"
. RowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the output lines.
. ColBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the starting points of the output lines.
. RowEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the ending points of the output lines.
. ColEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the ending points of the output lines.
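Example

A minimal sketch (assuming the input tuples RowB, ColB, RowE, and ColE have already been filled; tuple handling via create_tuple, set_s, and set_d as in the other C examples); only approximately horizontal lines with −0.1 ≤ phi ≤ 0.1 are kept:

Htuple Feature, Op, Min, Max;
Htuple RowBOut, ColBOut, RowEOut, ColEOut;
create_tuple(&Feature,1); set_s(Feature,"phi",0);
create_tuple(&Op,1);      set_s(Op,"and",0);
create_tuple(&Min,1);     set_d(Min,-0.1,0);
create_tuple(&Max,1);     set_d(Max,0.1,0);
T_select_lines(RowB,ColB,RowE,ColE,Feature,Op,Min,Max,
               &RowBOut,&ColBOut,&RowEOut,&ColEOut);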
Result
The operator select_lines returns the value H_MSG_TRUE if the parameter values are correct. Otherwise
an exception is raised.
Parallelization Information
select_lines is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line
Alternatives
line_orientation, line_position, partition_lines
See also
partition_lines, select_lines_longest, detect_edge_segments, select_shape
Module
Foundation

T_select_lines_longest ( const Htuple RowBeginIn,
const Htuple ColBeginIn, const Htuple RowEndIn, const Htuple ColEndIn,
const Htuple Num, Htuple *RowBeginOut, Htuple *ColBeginOut,
Htuple *RowEndOut, Htuple *ColEndOut )

Select the longest input lines.


The operator select_lines_longest selects the Num longest input lines from the input lines described by
the tuples RowBeginIn, ColBeginIn, RowEndIn and ColEndIn.
Parameter

. RowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong
Row coordinates of the starting points of the input lines.


. ColBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong
Column coordinates of the starting points of the input lines.
. RowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong
Row coordinates of the ending points of the input lines.
. ColEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong
Column coordinates of the ending points of the input lines.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
(Maximum) desired number of output lines.
Default Value : 10
. RowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the output lines.
. ColBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the starting points of the output lines.
. RowEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the ending points of the output lines.
. ColEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the ending points of the output lines.
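Example

A minimal sketch (assuming the input tuples RowB, ColB, RowE, and ColE have already been filled); the five longest lines are kept:

Htuple Num;
Htuple RowBOut, ColBOut, RowEOut, ColEOut;
create_tuple(&Num,1); set_i(Num,5,0);
T_select_lines_longest(RowB,ColB,RowE,ColE,Num,
                       &RowBOut,&ColBOut,&RowEOut,&ColEOut);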
Result
The operator select_lines_longest returns the value H_MSG_TRUE if the parameter values are correct.
Otherwise an exception is raised.
Parallelization Information
select_lines_longest is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, threshold, hysteresis_threshold, split_skeleton_region,
split_skeleton_lines
Possible Successors
set_line_width, disp_line
Alternatives
line_orientation, line_position, select_lines, partition_lines
See also
select_lines, partition_lines, detect_edge_segments, select_shape
Module
Foundation



Chapter 7

Matching

7.1 Component-Based

clear_all_component_models ( )
T_clear_all_component_models ( )

Free the memory of all component models.


The operator clear_all_component_models frees the memory of all component models that were
created by create_component_model or create_trained_component_model. After calling
clear_all_component_models, no model can be used any longer.
Attention
clear_all_component_models exists solely for the purpose of implementing the “reset program” func-
tionality in HDevelop. clear_all_component_models must not be used in any application.
Result
clear_all_component_models always returns H_MSG_TRUE.
Parallelization Information
clear_all_component_models is processed completely exclusively without parallelization.
Possible Predecessors
create_component_model, create_trained_component_model, write_component_model
Alternatives
clear_component_model
Module
Matching

clear_all_training_components ( )
T_clear_all_training_components ( )

Free the memory of all component training results.


The operator clear_all_training_components frees the memory of all training results that were created
by train_model_components. After calling clear_all_training_components, no training result
can be used any longer.
Attention
clear_all_training_components exists solely for the purpose of implementing the “reset program”
functionality in HDevelop. clear_all_training_components must not be used in any application.
Result
clear_all_training_components always returns H_MSG_TRUE.


Parallelization Information
clear_all_training_components is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, write_training_components
See also
clear_training_components
Module
Matching

clear_component_model ( Hlong ComponentModelID )


T_clear_component_model ( const Htuple ComponentModelID )

Free the memory of a component model.


The operator clear_component_model frees the memory of a component model that was cre-
ated by create_component_model or create_trained_component_model. After calling
clear_component_model, the model can no longer be used. The handle ComponentModelID becomes
invalid.
Parameter
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; Hlong
Handle of the component model.
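Example

A minimal sketch (ComponentModelID is assumed to stem from an earlier call to create_component_model, create_trained_component_model, or read_component_model):

/* release the component model once it is no longer needed */
clear_component_model (ComponentModelID);
/* from this point on, ComponentModelID is invalid */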
Result
If the handle of the model is valid, the operator clear_component_model returns the value H_MSG_TRUE.
If necessary, an exception is raised.
Parallelization Information
clear_component_model is processed completely exclusively without parallelization.
Possible Predecessors
create_component_model, create_trained_component_model, read_component_model,
write_component_model
See also
clear_all_component_models
Module
Matching

clear_training_components ( Hlong ComponentTrainingID )


T_clear_training_components ( const Htuple ComponentTrainingID )

Free the memory of a component training result.


The operator clear_training_components frees the memory of a training result that was created by
train_model_components. After calling clear_training_components, the training result can no
longer be used. The handle ComponentTrainingID becomes invalid.
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; Hlong
Handle of the training result.
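Example

A minimal sketch (ComponentTrainingID is assumed to stem from an earlier call to train_model_components):

/* release the training result once it is no longer needed */
clear_training_components (ComponentTrainingID);
/* from this point on, ComponentTrainingID is invalid */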
Result
If the handle of the training result is valid, the operator clear_training_components returns the value
H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
clear_training_components is processed completely exclusively without parallelization.


Possible Predecessors
train_model_components, write_training_components
See also
clear_all_training_components
Module
Matching

cluster_model_components ( const Hobject TrainingImages,
Hobject *ModelComponents, Hlong ComponentTrainingID,
const char *AmbiguityCriterion, double MaxContourOverlap,
double ClusterThreshold )

T_cluster_model_components ( const Hobject TrainingImages,
Hobject *ModelComponents, const Htuple ComponentTrainingID,
const Htuple AmbiguityCriterion, const Htuple MaxContourOverlap,
const Htuple ClusterThreshold )

Adopt new parameters that are used to create the model components into the training result.
With cluster_model_components you can modify parameters after a first training has been per-
formed using train_model_components. cluster_model_components sets the crite-
rion AmbiguityCriterion that is used to solve the ambiguities, the maximum contour overlap
MaxContourOverlap, and the cluster threshold of the training result ComponentTrainingID to
the specified values. A detailed description of these parameters can be found in the documentation of
train_model_components. By modifying these parameters, the way in which the initial components are
merged into rigid model components changes. For example, the greater the cluster threshold is chosen, the fewer
initial components are merged.
The rigid model components are returned in ModelComponents. In order to receive reasonable results, it is es-
sential that the same training images that were used to perform the training with train_model_components
are passed in TrainingImages. The pose of the newly clustered components within the training images is
determined using the shape-based matching. As in train_model_components, one can decide whether the
shape models should be pregenerated by using set_system(’pregenerate_shape_models’,...).
Furthermore, set_system(’border_shape_models’,...) can be used to determine whether the mod-
els must lie completely within the training images or whether they can extend partially beyond the image border.
Thus, you can select suitable parameter values interactively by repeatedly calling
inspect_clustered_components with different parameter values and then adopting the chosen values
into the training result by calling cluster_model_components.
Parameter

. TrainingImages (input_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Training images that were used for training the model components.
. ModelComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Contour regions of rigid model components.
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; Hlong
Handle of the training result.
. AmbiguityCriterion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Criterion for solving the ambiguities.
Default Value : "rigidity"
List of values : AmbiguityCriterion ∈ {"distance", "orientation", "distance_orientation", "rigidity"}
. MaxContourOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum contour overlap of the found initial components.
Default Value : 0.2
Suggested values : MaxContourOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxContourOverlap) ∧ (MaxContourOverlap ≤ 1)


. ClusterThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Threshold for clustering the initial components.
Default Value : 0.5
Suggested values : ClusterThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (0 ≤ ClusterThreshold) ∧ (ClusterThreshold ≤ 1)
Example (Syntax: HDevelop)

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
InitialComponentRegions := [InitialComponentRegions,Rectangle2]
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i$’02’+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.65, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.5, ComponentTrainingID)
* Find the best value for the parameter ClusterThreshold.
inspect_clustered_components (ModelComponents, ComponentTrainingID,
’rigidity’, 0.2, 0.4)
* Adopt the ClusterThreshold into the training result.
cluster_model_components (TrainingImages, ModelComponents,
ComponentTrainingID, ’rigidity’, 0.2, 0.4)
* Create the component model based on the training result.
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), 10,
0.5, ’auto’, ’auto’, ’none’, ’use_polarity’,
’false’, ComponentModelID, RootRanking)

Result
If the parameter values are correct, the operator cluster_model_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
cluster_model_components is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, inspect_clustered_components
Possible Successors
get_training_components, create_trained_component_model,
modify_component_relations, write_training_components,
get_component_relations, clear_training_components,
clear_all_training_components
Module
Matching


create_component_model ( const Hobject ModelImage,
const Hobject ComponentRegions, Hlong VariationRow,
Hlong VariationColumn, double VariationAngle, double AngleStart,
double AngleExtent, Hlong ContrastLowComp, Hlong ContrastHighComp,
Hlong MinSizeComp, Hlong MinContrastComp, double MinScoreComp,
Hlong NumLevelsComp, double AngleStepComp,
const char *OptimizationComp, const char *MetricComp,
const char *PregenerationComp, Hlong *ComponentModelID,
Hlong *RootRanking )

T_create_component_model ( const Hobject ModelImage,
const Hobject ComponentRegions, const Htuple VariationRow,
const Htuple VariationColumn, const Htuple VariationAngle,
const Htuple AngleStart, const Htuple AngleExtent,
const Htuple ContrastLowComp, const Htuple ContrastHighComp,
const Htuple MinSizeComp, const Htuple MinContrastComp,
const Htuple MinScoreComp, const Htuple NumLevelsComp,
const Htuple AngleStepComp, const Htuple OptimizationComp,
const Htuple MetricComp, const Htuple PregenerationComp,
Htuple *ComponentModelID, Htuple *RootRanking )

Prepare a component model for matching based on explicitly specified components and relations.
create_component_model prepares patterns, which are passed in the form of a model image
ModelImage and several regions ComponentRegions, as a component model for matching. The out-
put parameter ComponentModelID is a handle for this model, which is used in subsequent calls to
find_component_model. In contrast to create_trained_component_model, no preceding training
with train_model_components needs to be performed before calling create_component_model.
Each of the regions passed in ComponentRegions describes a separate model component. Later, the index of
a component region in ComponentRegions is used to denote the model component. The reference point of a
component is the center of gravity of its associated region, which is passed in ComponentRegions. It can be
calculated by calling area_center.
The relative movements (relations) of the model components can be set by passing VariationRow,
VariationColumn, and VariationAngle. Because directly passing the relations is complicated, instead of
the relations the variations of the model components are passed. The variations describe the movements of the com-
ponents independently from each other relative to their poses in the model image ModelImage. The parameters
VariationRow and VariationColumn describe the movement of the components in row and column direction by ±1/2 VariationRow and ±1/2 VariationColumn, respectively. The parameter VariationAngle describes the angle variation of the component by ±1/2 VariationAngle. Based on these values, the relations
are automatically computed. The three parameters must either contain one element, in which case the parameter is
used for all model components, or must contain the same number of elements as ComponentRegions, in which
case each parameter element refers to the corresponding model component in ComponentRegions.
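For illustration (the numbers are made up): passing VariationRow = 20, VariationColumn = 10, and VariationAngle = 0.17 (about 10 degrees) for a component means that this component may move by ±10 pixels in row direction and ±5 pixels in column direction and may rotate by about ±5 degrees relative to its pose in ModelImage; the relations between the components are then derived automatically from these individual movement ranges.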
The parameters AngleStart and AngleExtent determine the range of possible rotations of the component
model in an image.
Internally, a separate shape model is built for each model component (see create_shape_model). There-
fore, the parameters ContrastLowComp, ContrastHighComp, MinSizeComp, MinContrastComp,
MinScoreComp, NumLevelsComp, AngleStepComp, OptimizationComp, MetricComp, and
PregenerationComp correspond to the parameters of create_shape_model, with the following differ-
ences: First, in the parameter Contrast of create_shape_model the upper as well as the lower threshold
for the hysteresis threshold method can be passed. Additionally, a third value, which suppresses small connected
contour regions, can be passed. In contrast, the operator create_component_model offers three sepa-
rate parameters ContrastHighComp, ContrastLowComp, and MinSizeComp in order to set these three
values. Consequently, also the automatic computation of the contrast threshold(s) is different. If both hystere-
sis threshold should be automatically determined, both ContrastLowComp and ContrastHighComp must
be set to ’auto’. In contrast, if only one threshold value should be determined, ContrastLowComp must be
set to ’auto’ while ContrastHighComp must be set to an arbitrary value different from ’auto’. Secondly,
the parameter Optimization of create_shape_model provides the possibility to reduce the number
of model points as well as the possibility to completely pregenerate the shape model. In contrast, the oper-
ator create_component_model uses a separate parameter PregenerationComp in order
to decide whether the shape models should be completely pregenerated or not. A third difference concerning
the parameter MinScoreComp should be noted. When using the shape-based matching, this parameter need
not be passed when preparing the shape model using create_shape_model, but only during the search
using find_shape_model. In contrast, when preparing the component model it is favorable to analyze ro-
tational symmetries of the model components and similarities between the model components. However, this
analysis only leads to meaningful results if the value for MinScoreComp that is used during the search (see
find_component_model) is already approximately known.
In addition to the parameters ContrastLowComp, ContrastHighComp, and MinSizeComp also the pa-
rameters MinContrastComp, NumLevelsComp, AngleStepComp, and OptimizationComp can be au-
tomatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number of
elements as the number of regions in ComponentRegions, in which case each parameter element refers to the
corresponding element in ComponentRegions.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using find_component_model in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
The root component can be passed as an input parameter of find_component_model during the search. To
what extent a model component is suited to act as the root component depends on several factors. In principle, a
model component that can be found in the image with a high probability should be chosen. Therefore, a component
that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as the root
component. Additionally, the computation time that is associated with the root component during the search
can serve as a criterion. A ranking of the model components that is based on the latter criterion is returned in
RootRanking. In this parameter the indices of the model components are sorted in descending order according
to their associated search time, i.e., RootRanking[0] contains the index of the model component that, chosen
as root component, allows the fastest search. Note that the ranking returned in RootRanking represents only a
coarse estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value
of the system parameter ’border_shape_models’ are identical when calling create_component_model and
find_component_model.
Parameter
. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image from which the shape models of the model components should be created.
. ComponentRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Input regions from which the shape models of the model components should be created.
. VariationRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Variation of the model components in row direction.
Suggested values : VariationRow ∈ {0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 150}
Restriction : VariationRow ≥ 0
. VariationColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Variation of the model components in column direction.
Suggested values : VariationColumn ∈ {0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 150}
Restriction : VariationColumn ≥ 0
. VariationAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Angle variation of the model components.
Suggested values : VariationAngle ∈ {0, 0.017, 0.035, 0.05, 0.07, 0.09, 0.17, 0.35, 0.52, 0.67, 0.87}
Restriction : VariationAngle ≥ 0
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the component model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation of the component model.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0


. ContrastLowComp (input_control) . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Lower hysteresis threshold for the contrast of the components in the model image.
Default Value : "auto"
Suggested values : ContrastLowComp ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : ContrastLowComp > 0
. ContrastHighComp (input_control) . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Upper hysteresis threshold for the contrast of the components in the model image.
Default Value : "auto"
Suggested values : ContrastHighComp ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : (ContrastHighComp > 0) ∧ (ContrastHighComp ≥ ContrastLowComp)
. MinSizeComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum size of the contour regions in the model.
Default Value : "auto"
Suggested values : MinSizeComp ∈ {"auto", 0, 5, 10, 20, 30, 40}
Restriction : MinSizeComp ≥ 0
. MinContrastComp (input_control) . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum contrast of the components in the search images.
Default Value : "auto"
Suggested values : MinContrastComp ∈ {"auto", 10, 20, 30, 40}
Restriction : (MinContrastComp ≤ ContrastLowComp) ∧ (MinContrastComp ≥ 0)
. MinScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Minimum score of the instances of the components to be found.
Default Value : 0.5
Suggested values : MinScoreComp ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScoreComp) ∧ (MinScoreComp ≤ 1)
. NumLevelsComp (input_control) . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Maximum number of pyramid levels for the components.
Default Value : "auto"
List of values : NumLevelsComp ∈ {"auto", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStepComp (input_control) . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double / const char *
Step length of the angles (resolution) for the components.
Default Value : "auto"
Suggested values : AngleStepComp ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : AngleStepComp ≥ 0
. OptimizationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Kind of optimization for the components.
Default Value : "auto"
List of values : OptimizationComp ∈ {"auto", "none", "point_reduction_low",
"point_reduction_medium", "point_reduction_high"}
. MetricComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Match metric used for the components.
Default Value : "use_polarity"
List of values : MetricComp ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. PregenerationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Complete pregeneration of the shape models for the components if equal to ’true’.
Default Value : "false"
List of values : PregenerationComp ∈ {"true", "false"}
. ComponentModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong *
Handle of the component model.
. RootRanking (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Ranking of the model components expressing the suitability to act as the root component.
Example (Syntax: HDevelop)

* Read the model image.
read_image (ModelImage, ’model_image.tif’)
* Describe the model components.
gen_rectangle2 (ComponentRegions, 318, 109, -1.62, 34, 19)
gen_rectangle2 (Rectangle2, 342, 238, -1.63, 32, 17)
gen_rectangle2 (Rectangle3, 355, 505, 1.41, 25, 17)
ComponentRegions := [ComponentRegions,Rectangle2]
ComponentRegions := [ComponentRegions,Rectangle3]
* Create the component model by explicitly specifying the relations.
create_component_model (ModelImage, ComponentRegions, 20, 20, rad(25), 0,
rad(360), 15, 40, 15, 10, 0.8, ’auto’, ’auto’,
’none’, ’use_polarity’, ’false’, ComponentModelID,
RootRanking)
* Find the component model in a run-time image.
read_image (SearchImage, ’search_image.tif’)
find_component_model (SearchImage, ComponentModelID, RootRanking, 0,
rad(360), 0.5, 0, 0.5, ’stop_search’,
’search_from_best’, ’none’, 0.8, ’least_squares’, 0,
0.8, ModelStart, ModelEnd, Score, RowComp, ColumnComp,
AngleComp, ScoreComp, ModelComp)

Result
If the parameters are valid, the operator create_component_model returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
create_component_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, concat_obj
Possible Successors
find_component_model
Alternatives
create_trained_component_model
See also
create_shape_model, find_shape_model
Module
Matching

create_trained_component_model ( Hlong ComponentTrainingID,
double AngleStart, double AngleExtent, Hlong MinContrastComp,
double MinScoreComp, Hlong NumLevelsComp, double AngleStepComp,
const char *OptimizationComp, const char *MetricComp,
const char *PregenerationComp, Hlong *ComponentModelID,
Hlong *RootRanking )

T_create_trained_component_model ( const Htuple ComponentTrainingID,
const Htuple AngleStart, const Htuple AngleExtent,
const Htuple MinContrastComp, const Htuple MinScoreComp,
const Htuple NumLevelsComp, const Htuple AngleStepComp,
const Htuple OptimizationComp, const Htuple MetricComp,
const Htuple PregenerationComp, Htuple *ComponentModelID,
Htuple *RootRanking )

Prepare a component model for matching based on trained components.


create_trained_component_model prepares the training result, which is passed in
ComponentTrainingID, as a component model for matching. The output parameter ComponentModelID
is a handle for this model, which is used in subsequent calls to find_component_model. In con-
trast to create_component_model, the model components must have been previously trained using
train_model_components before calling create_trained_component_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations of the component
model in an image.
Internally, a separate shape model is built for each model component (see create_shape_model).
Therefore, the parameters MinContrastComp, MinScoreComp, NumLevelsComp, AngleStepComp,
OptimizationComp, MetricComp, and PregenerationComp correspond to the parameters of
create_shape_model, with the following differences: First, the parameter Optimization of
create_shape_model provides the possibility to reduce the number of model points as well as the possibility
to completely pregenerate the shape model. In contrast, the operator create_trained_component_model
uses a separate parameter PregenerationComp in order to decide whether the shape models should be com-
pletely pregenerated or not. A second difference concerning the parameter MinScoreComp should be noted.
When using the shape-based matching, this parameter need not be passed when preparing the shape model us-
ing create_shape_model, but only during the search using find_shape_model. In contrast, when
preparing the component model it is favorable to analyze rotational symmetries of the model components and
similarities between the model components. However, this analysis only leads to meaningful results if the value
for MinScoreComp that is used during the search (see find_component_model) is already approximately
known. After the search with find_component_model the pose parameters of the components in a search
image are returned. Note that the pose parameters refer to the reference points of the components. The reference
point of a component is the center of gravity of its associated region that is returned in ModelComponents of
train_model_components.
The parameters MinContrastComp, NumLevelsComp, AngleStepComp, and OptimizationComp can
be automatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number
of elements as the number of model components contained in ComponentTrainingID, in which case each
parameter element refers to the corresponding component in ComponentTrainingID.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using find_component_model in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
The root component can be passed as an input parameter of find_component_model during the search. To
what extent a model component is suited to act as root component depends on several factors. In principle, a model
component that can be found in the image with a high probability should be chosen. Therefore, a component that
is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as root component.
Additionally, the computation time that is associated with the root component during the search can serve as a
criterion. A ranking of the model components that is based on the latter criterion is returned in RootRanking.
In this parameter the indices of the model components are sorted in descending order according to their associ-
ated computation time, i.e., RootRanking[0] contains the index of the model component that, chosen as root
component, allows the fastest search. Note that the ranking returned in RootRanking represents only a coarse
estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value of the
system parameter ’border_shape_models’ are identical when calling create_trained_component_model
and find_component_model.
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong
Handle of the training result.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the component model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation of the component model.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0


. MinContrastComp (input_control) . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum contrast of the components in the search images.
Default Value : "auto"
Suggested values : MinContrastComp ∈ {"auto", 10, 20, 30, 40}
Restriction : MinContrastComp ≥ 0
. MinScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Minimum score of the instances of the components to be found.
Default Value : 0.5
Suggested values : MinScoreComp ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScoreComp) ∧ (MinScoreComp ≤ 1)
. NumLevelsComp (input_control) . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Maximum number of pyramid levels for the components.
Default Value : "auto"
List of values : NumLevelsComp ∈ {"auto", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStepComp (input_control) . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double / const char *
Step length of the angles (resolution) for the components.
Default Value : "auto"
Suggested values : AngleStepComp ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : AngleStepComp ≥ 0
. OptimizationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Kind of optimization for the components.
Default Value : "auto"
List of values : OptimizationComp ∈ {"auto", "none", "point_reduction_low",
"point_reduction_medium", "point_reduction_high"}
. MetricComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Match metric used for the components.
Default Value : "use_polarity"
List of values : MetricComp ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. PregenerationComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Complete pregeneration of the shape models for the components if equal to ’true’.
Default Value : "false"
List of values : PregenerationComp ∈ {"true", "false"}
. ComponentModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong *
Handle of the component model.
. RootRanking (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Ranking of the model components expressing the suitability to act as the root component.
Example (Syntax: HDevelop)

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
InitialComponentRegions := [InitialComponentRegions,Rectangle2]
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images.
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.65, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.4, ComponentTrainingID)
* Create the component model based on the training result.
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), 10,
0.5, ’auto’, ’auto’, ’none’, ’use_polarity’,
’false’, ComponentModelID, RootRanking)
* Find the component model in a run-time image.
read_image (SearchImage, ’search_image.tif’)
find_component_model (SearchImage, ComponentModelID, RootRanking, -rad(30),
rad(60), 0.5, 0, 0.5, ’stop_search’, ’prune_branch’,
’none’, 0.55, ’least_squares’, 0, 0.9, ModelStart,
ModelEnd, Score, RowComp, ColumnComp, AngleComp,
ScoreComp, ModelComp)

Result
If the parameters are valid, the operator create_trained_component_model returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
create_trained_component_model is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, read_training_components
Possible Successors
find_component_model
Alternatives
create_component_model
See also
create_shape_model, find_shape_model
Module
Matching

find_component_model ( const Hobject Image, Hlong ComponentModelID,
Hlong RootComponent, double AngleStartRoot, double AngleExtentRoot,
double MinScore, Hlong NumMatches, double MaxOverlap,
const char *IfRootNotFound, const char *IfComponentNotFound,
const char *PosePrediction, double MinScoreComp,
const char *SubPixelComp, Hlong NumLevelsComp, double GreedinessComp,
Hlong *ModelStart, Hlong *ModelEnd, double *Score, double *RowComp,
double *ColumnComp, double *AngleComp, double *ScoreComp,
Hlong *ModelComp )

T_find_component_model ( const Hobject Image,
const Htuple ComponentModelID, const Htuple RootComponent,
const Htuple AngleStartRoot, const Htuple AngleExtentRoot,
const Htuple MinScore, const Htuple NumMatches,
const Htuple MaxOverlap, const Htuple IfRootNotFound,
const Htuple IfComponentNotFound, const Htuple PosePrediction,
const Htuple MinScoreComp, const Htuple SubPixelComp,
const Htuple NumLevelsComp, const Htuple GreedinessComp,
Htuple *ModelStart, Htuple *ModelEnd, Htuple *Score, Htuple *RowComp,
Htuple *ColumnComp, Htuple *AngleComp, Htuple *ScoreComp,
Htuple *ModelComp )

Find the best matches of a component model in an image.

The operator find_component_model finds the best NumMatches instances of the compo-
nent model ComponentModelID in the input image Image. The model must have been created
previously by calling create_trained_component_model, create_component_model, or
read_component_model.
The components of the component model ComponentModelID are represented in a tree structure. The com-
ponent that stands at the root of this search tree (root component) is searched within the full search space, i.e., at
all allowed positions and in the allowed range of orientations (see below). In contrast, the remaining components
are searched relative to the pose of their predecessor in the search tree within a restricted search space that is com-
puted from the relations (recursive search). The index of the root component can be passed in RootComponent.
To what extent a model component is suited to act as root component depends on several factors. In principle, a
model component that can be found in the image with a high probability should be chosen. Therefore, a com-
ponent that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as
root component. The behavior of the operator when dealing with a missing or strongly occluded root compo-
nent can be set with IfRootNotFound (see below). Also, the computation time that is associated with the
root component during the search can serve as a criterion. A ranking of the model components that is based on
the latter criterion is returned in RootRanking of the operator create_trained_component_model or
create_component_model, respectively. If the complete ranking is passed in RootComponent, the first
value RootComponent[0] is automatically selected as the root component. The domain of the image Image
determines the search space for the reference point, i.e., the allowed positions, of the root component. The pa-
rameters AngleStartRoot and AngleExtentRoot specify the allowed angle range within which the root
component is searched. If necessary, the range of rotations is clipped to the range given when the component model
was created with create_trained_component_model or create_component_model, respectively.
The angle range for each component can be queried with get_shape_model_params after requesting the
corresponding shape model handles with get_component_model_params.
The position and rotation of the model components of all found component model instances are returned
in RowComp, ColumnComp, and AngleComp. The coordinates RowComp and ColumnComp are the
coordinates of the origin (reference point) of the component in the search image. If the component
model was created with create_trained_component_model by training, the origin of the compo-
nent is the center of gravity of the respective returned contour region in ModelComponents of the op-
erator train_model_components. Otherwise, if the component model was created manually with
create_component_model, the origin of the component is the center of gravity of the corresponding passed
component region ComponentRegion of the operator create_component_model. Since the relations be-
tween the components in ComponentModelID refer to this reference point, the origin of the components must
not be modified by using set_shape_model_origin.
Additionally, the score of each found component instance is returned in ScoreComp. The score is a number
between 0 and 1, and is an approximate measure of how much of the component is visible in the image. If,
for example, half of the component is occluded, the score cannot exceed 0.5. While ScoreComp represents
the score of the instances of the single components, Score contains the score of the instances of the entire
component model. More precisely, Score contains the weighted mean of the associated values of ScoreComp.
The weighting is performed according to the number of model points within the respective component.
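Written out (a hedged formalization of the statement above, where n_i denotes the number of model points of component i and the sums run over the components of the found instance): Score = (Σ_i n_i · ScoreComp_i) / (Σ_i n_i).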
In order to assign the values in RowComp, ColumnComp, AngleComp, and ScoreComp to the as-
sociated model component, the index of the model component (see create_component_model and
train_model_components, respectively) is returned in ModelComp. Furthermore, for each found instance
of the component model its associated component matches are given in ModelStart and ModelEnd. Thus,
the matches of the components that correspond to the first found instance of the component model are given
by the interval of indices [ModelStart[0],ModelEnd[0]]. The indices refer to the parameters RowComp,
ColumnComp, AngleComp, ScoreComp, and ModelComp. Assume, for example, that two instances of the
component model, which consists of three components, are found in the image, where for one instance only two
components (component 0 and component 2) could be found. Then the returned parameters could, for exam-
ple, have the following elements: RowComp = [100,200,300,150,250], ColumnComp = [200,210,220,400,425],
AngleComp = [0,0.1,-0.2,0.1,0.2,0], ScoreComp = [1,1,1,1,1], ModelComp = [0,1,2,0,2], ModelStart =
[0,3], ModelEnd = [2,4], Score = [1,1]. The operator get_found_component_model can be used to
visualize the result of the search and to extract the component matches of a certain component model instance.
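For illustration, the following hedged HDevelop sketch (not part of the original manual examples; it assumes the output tuple names used above) iterates over all found instances and accesses the pose of each component match directly via ModelStart and ModelEnd:

* Loop over all found instances of the component model.
for i := 0 to |ModelStart|-1 by 1
  * Loop over the component matches belonging to instance i.
  for j := ModelStart[i] to ModelEnd[i] by 1
    * Pose and score of the match of component ModelComp[j].
    RowMatch := RowComp[j]
    ColumnMatch := ColumnComp[j]
    AngleMatch := AngleComp[j]
    ScoreMatch := ScoreComp[j]
  endfor
endfor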
By default, the components are searched at image positions where the components lie completely within the im-
age. This means that the components will not be found if they extend beyond the borders of the image, even
if they would achieve a score greater than MinScoreComp (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause components that extend beyond the
image border to be found if they achieve a score greater than MinScoreComp. Here, points lying outside the
image are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search
will increase in this mode.
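For example, this mode can be enabled before calling find_component_model (a minimal sketch using the set_system call quoted above):

set_system (’border_shape_models’, ’true’)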
The parameter MinScore determines what score a potential match of the component model must at least have to
be regarded as an instance of the component model in the image. If the component model can be expected never
to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If a missing or strongly occluded
root component must be assumed, and hence IfRootNotFound is set to ’select_new_root’ (see below), the
search is faster the larger MinScore is chosen. Otherwise, the value of this parameter only slightly influences the
computation time.
The maximum number of model instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches. If all model instances exceeding MinScore in the image
should be found, NumMatches must be set to 0.
In some cases, found instances only differ in the pose of one or a few components. The parameter MaxOverlap
determines by what fraction (i.e., a number between 0 and 1) two instances may at most overlap in order to
consider them as different instances, and hence to return them separately. If two instances overlap each other by
more than MaxOverlap only the best instance is returned. The calculation of the overlap is based on the smallest
enclosing rectangles of arbitrary orientation (see smallest_rectangle2) of the found component instances.
If MaxOverlap = 0, the found instances may not overlap at all, while for MaxOverlap = 1 no check for
overlap is performed, and hence all instances are returned.
The parameter IfRootNotFound specifies the behavior of the operator when dealing with a missing or
strongly occluded root component. This parameter strongly influences the computation time of the operator. If
IfRootNotFound is set to ’stop_search’, it is assumed that the root component is always found in the image.
Consequently, for instances for which the root component could not be found the search for the remaining compo-
nents is not continued. If IfRootNotFound is set to ’select_new_root’, different components are successively
chosen as the root component and searched within the full search space. The order in which the selection of the
root component is performed corresponds to the order passed in RootRanking. The poses of the found in-
stances of all root components are then used to start the recursive search for the remaining components. Hence,
it is possible to find instances even if the original root component is not found. However, the computation time
of the search increases significantly in comparison to the search when choosing ’stop_search’. The number of
root components to search depends on the value specified for MinScore. The higher the value for MinScore
is chosen the fewer root components must be searched, and thus the faster the search is performed. If the number
of elements in RootComponent is less than the number of required root components during the search, the root
components are completed by the automatically computed order (see create_trained_component_model
or create_component_model).
The parameter IfComponentNotFound specifies the behavior of the operator when dealing with missing or
strongly occluded components other than the root component. Here, it can be stated in which way components
that must be searched relative to the pose of another (predecessor) component should be treated if the predecessor
component was not found. If IfComponentNotFound is set to ’prune_branch’, such components are not
searched at all and are also treated as ’not found’. If IfComponentNotFound is set to ’search_from_upper’,
such components are searched relative to the pose of the predecessor component of the predecessor component. If
IfComponentNotFound is set to ’search_from_best’, such components are searched relative to the pose of the
already found component from which the relative search can be performed with minimum computational effort.
The parameter PosePrediction determines whether the pose of components that could not be found should
be estimated. If PosePrediction is set to ’none’, only the poses of the found components are returned. In
contrast, if PosePrediction is set to ’from_neighbors’ or ’from_all’, the poses of components that could not
be found are estimated and returned with a score of ScoreComp = 0.0. The estimation of the poses is then either
based on the poses of the found neighboring components in the search tree (’from_neighbors’) or on the poses of
all found components (’from_all’).
Internally, the shape-based matching is used for the component-based matching in order to search the individ-
ual components (see find_shape_model). Therefore, the parameters MinScoreComp, SubPixelComp,
NumLevelsComp, and GreedinessComp have the same meaning as the corresponding parameters in
find_shape_model. These parameters must either contain one element, in which case the parameter is used
for all components, or must contain the same number of elements as model components in ComponentModelID,
in which case each parameter element refers to the corresponding component in ComponentModelID.
NumLevelsComp may also contain two elements or twice the number of elements as model components. The
first value determines the number of pyramid levels to use. The second value determines the lowest pyramid level
to which the found matches are tracked. If different values should be used for different components, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevelsComp. If, for ex-
ample, two components are contained in ComponentModelID, and the number of pyramid levels is 5 for the
first component and 4 for the second component, and the lowest pyramid level is 2 for the first component and 1
for the second component, NumLevelsComp = [5,2,4,1] must be selected. Further details can be found in the
documentation of find_shape_models.
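The following hedged sketch (hypothetical values; the remaining arguments follow the example given for create_trained_component_model above) illustrates how per-component tuples could be passed for a model with two components, using 5/4 pyramid levels and lowest levels 2/1:

MinScoreComp := [0.6, 0.5]
NumLevelsComp := [5, 2, 4, 1]
find_component_model (SearchImage, ComponentModelID, RootRanking, -rad(30),
                      rad(60), 0.5, 0, 0.5, ’stop_search’, ’prune_branch’,
                      ’none’, MinScoreComp, ’least_squares’, NumLevelsComp,
                      0.9, ModelStart, ModelEnd, Score, RowComp, ColumnComp,
                      AngleComp, ScoreComp, ModelComp)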
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the component model should be found.
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong
Handle of the component model.
. RootComponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Index of the root component.
Suggested values : RootComponent ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
. AngleStartRoot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Smallest rotation of the root component.
Default Value : -0.39
Suggested values : AngleStartRoot ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtentRoot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Extent of the rotation of the root component.
Default Value : 0.78
Suggested values : AngleExtentRoot ∈ {6.28, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtentRoot ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Minimum score of the instances of the component model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScore) ∧ (MinScore ≤ 1)
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of instances of the component model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum overlap of the instances of the component models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxOverlap) ∧ (MaxOverlap ≤ 1)
. IfRootNotFound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Behavior if the root component is missing.
Default Value : "stop_search"
List of values : IfRootNotFound ∈ {"stop_search", "select_new_root"}
. IfComponentNotFound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Behavior if a component is missing.
Default Value : "prune_branch"
List of values : IfComponentNotFound ∈ {"prune_branch", "search_from_upper", "search_from_best"}
. PosePrediction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Pose prediction of components that are not found.
Default Value : "none"
List of values : PosePrediction ∈ {"none", "from_neighbors", "from_all"}
. MinScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Minimum score of the instances of the components to be found.
Default Value : 0.5
Suggested values : MinScoreComp ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScoreComp) ∧ (MinScoreComp ≤ 1)
. SubPixelComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Subpixel accuracy of the component poses if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixelComp ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevelsComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if
|NumLevelsComp| = 2n).
Default Value : 0
List of values : NumLevelsComp ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. GreedinessComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
“Greediness” of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : GreedinessComp ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ GreedinessComp) ∧ (GreedinessComp ≤ 1)
. ModelStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Start index of each found instance of the component model in the tuples describing the component matches.
. ModelEnd (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
End index of each found instance of the component model in the tuples describing the component matches.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Score of the found instances of the component model.
. RowComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinate of the found component matches.
. ColumnComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinate of the found component matches.
. AngleComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Rotation angle of the found component matches.
. ScoreComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Score of the found component matches.
. ModelComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Index of the found components.
Result
If the parameter values are correct, the operator find_component_model returns the value H_MSG_TRUE.
If the input is empty (no input image available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_component_model is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model, read_component_model
Possible Successors
get_found_component_model
Alternatives
find_shape_models
See also
find_shape_model, find_shape_models, get_shape_model_params,
get_component_model_params, train_model_components, set_shape_model_origin,
smallest_rectangle2

Module
Matching

gen_initial_components ( const Hobject ModelImage,
Hobject *InitialComponents, Hlong ContrastLow, Hlong ContrastHigh,
Hlong MinSize, const char *Mode, const char *GenericName,
double GenericValue )

T_gen_initial_components ( const Hobject ModelImage,
Hobject *InitialComponents, const Htuple ContrastLow,
const Htuple ContrastHigh, const Htuple MinSize, const Htuple Mode,
const Htuple GenericName, const Htuple GenericValue )

Extract the initial components of a component model.


In general, there are two possibilities to use gen_initial_components. The first possibility should be
chosen if the components of the component model are not known. Then gen_initial_components
automatically extracts the initial components of a component model from a model image. The second
possibility can be chosen if the components of the component model are approximately known. Then
gen_initial_components can be used to find suitable parameter values for the model feature extraction
in train_model_components and create_component_model. Hence, the second possibility is com-
parable to the function of inspect_shape_model within the shape-based matching.
When using the first possibility, gen_initial_components extracts the initial components of a component
model from a model image ModelImage. As already mentioned, this is especially useful if the components of the
component model are not known. In this case, the resulting initial components can be used to automatically train
the component model with train_model_components, which extracts the (final) model components and the
relations between them. gen_initial_components returns the initial components in a region object tuple
InitialComponents that contains a representation for each initial component in form of contour regions.
For the automatic determination of the initial components, the domain of the model image ModelImage must
contain the entire compound object including all components. Mode specifies the method used for the auto-
matic computation. Currently, only the mode ’connection’ is available. In this mode the automatic computa-
tion is performed in two steps: In the first step, features are extracted using the parameters ContrastLow,
ContrastHigh, and MinSize. These three parameters define the contours of which the initial components
should consist and should be chosen such that only the significant features of the model image are contained in the
initial components. ContrastLow and ContrastHigh specify the gray value contrast of the points that should
be contained in the initial components. The contrast is a measure for local gray value differences between the ob-
ject and the background and between different parts of the object. The model image is segmented using a method
similar to the hysteresis threshold method used in edges_image. Here, ContrastLow determines the lower
threshold, while ContrastHigh determines the upper threshold. If the same value is passed for ContrastLow
and ContrastHigh a simple thresholding operation is performed. For more information about the hysteresis
threshold method, see hysteresis_threshold. MinSize can be used to select only significant features
for the initial components based on the size of the connected contour regions, i.e., connected contour regions with
fewer than MinSize points are suppressed.
The resulting connected contour regions are iteratively merged in the second step. For this, two contour regions
are merged if the distance between both regions is smaller than a certain threshold (see below). Finally, the merged
regions are returned in InitialComponents and can be used to train the component model by passing them
to train_model_components.
To control the internal image processing, the parameters GenericName and GenericValue are used. This
is done by passing the names of the control parameters to be changed in GenericName as a list of strings. In
GenericValue the values are passed at the corresponding index positions.
Normally, none of the values needs to be changed. A change should only be applied in case of unsatisfying
results of the automatic determination of the initial components. The two parameters that can be changed are
’merge_distance’ and ’merge_fraction’; both are used during the iterative merging of contour regions (see above).
First, the fraction of contour pixels of one contour region that at most have a distance of ’merge_distance’ from
another contour region is computed. If this fraction exceeds the value that is passed in ’merge_fraction’ the
two contour regions are merged. Consequently, the higher ’merge_distance’ and the lower ’merge_fraction’ are
chosen, the more contour regions are merged. The default value of ’merge_distance’ is 5 and the default value of
’merge_fraction’ is 0.5 (corresponds to 50 percent).
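As an illustration, a hedged sketch (hypothetical values) of how these control parameters might be passed in order to merge contour regions more aggressively:

* Merge regions up to a distance of 10 pixels if at least 40 percent of the
* contour pixels are that close.
gen_initial_components (ModelImage, InitialComponents, ’auto’, ’auto’, ’auto’,
                        ’connection’, [’merge_distance’,’merge_fraction’],
                        [10, 0.4])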
When using the second possibility, i.e., the components of the component model are approximately known,
the training by using train_model_components can be performed without previously executing
gen_initial_components. If this is desired, the initial components can be specified by the user
and directly passed to train_model_components. Furthermore, if the components as well as the
relative movements (relations) of the components are known, gen_initial_components as well as
train_model_components need not be executed. In fact, by immediately passing the components as well
as the relations to create_component_model, the component model can be created without any training.
In both cases, however, gen_initial_components can be used to evaluate the effect of the feature ex-
traction parameters ContrastLow, ContrastHigh, and MinSize of train_model_components and
create_component_model, and hence to find suitable parameter values for a certain application.
For this, the image regions for the (initial) components must be explicitly given, i.e., for each (initial) component
a separate image from which the (initial) component should be created is passed. In this case, ModelImage
contains multiple image objects. The domain of each image object is used as the region of interest for calculating
the corresponding (initial) component. The image matrix of all image objects in the tuple must be identical, i.e.,
ModelImage cannot be constructed in an arbitrary manner using concat_obj, but must be created from the
same image using add_channels or equivalent calls. If this is not the case, an error message is returned. If
the parameters ContrastLow, ContrastHigh, or MinSize only contain one element, this value is applied
to the creation of all (initial) components. In contrast, if different values for different (initial) components should
be used, tuples of values can be passed for these three parameters. In this case, the tuples must have a length
that corresponds to the number of (initial) components, i.e., the number of image objects in ModelImage. The
contour regions of the (initial) components are returned in InitialComponents.
Thus, the second possibility is equivalent to the function of inspect_shape_model within the shape-based
matching. However, in contrast to inspect_shape_model, gen_initial_components does not return
the contour regions on multiple image pyramid levels. Therefore, if the number of pyramid levels to be used
should be chosen manually, preferably inspect_shape_model should be called individually for each (initial)
component.
For both described possibilities the parameters ContrastLow, ContrastHigh, and MinSize can be au-
tomatically determined. If both hysteresis thresholds should be automatically determined, both ContrastLow
and ContrastHigh must be set to ’auto’. In contrast, if only one threshold value should be determined,
ContrastLow must be set to ’auto’ while ContrastHigh must be set to an arbitrary value different from
’auto’.
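For example (a hedged sketch with hypothetical values), following the rule above so that a single threshold is determined automatically:

gen_initial_components (ModelImage, InitialComponents, ’auto’, 40, 20,
                        ’connection’, [], [])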
If the input image ModelImage has one channel the representation of the model is created with the method
that is used in create_component_model or create_trained_component_model for the metrics
’use_polarity’, ’ignore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than one chan-
nel the representation is created with the method that is used for the metric ’ignore_color_polarity’.
Parameter
. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image from which the initial components should be extracted.
. InitialComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Contour regions of initial components.
. ContrastLow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Lower hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastLow ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : ContrastLow > 0
. ContrastHigh (input_control) . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Upper hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastHigh ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : (ContrastHigh > 0) ∧ (ContrastHigh ≥ ContrastLow)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum size of the initial components.
Default Value : "auto"
Suggested values : MinSize ∈ {"auto", 0, 5, 10, 20, 30, 40}
Restriction : MinSize ≥ 0
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of automatic segmentation.
Default Value : "connection"
List of values : Mode ∈ {"connection"}
. GenericName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Names of optional control parameters.
Default Value : []
List of values : GenericName ∈ {"merge_distance", "merge_fraction"}
. GenericValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Values of optional control parameters.
Default Value : []
Example (Syntax: HDevelop)

* First example that shows the use of gen_initial_components to automatically
* extract the initial components from a model image.

* Get the model image.
read_image (Image, ’model_image.tif’)
* Define the entire model region.
gen_rectangle1 (ModelRegion, 119, 106, 330, 537)
reduce_domain (Image, ModelRegion, ModelImage)
* Automatically generate the initial components.
gen_initial_components (ModelImage, InitialComponents, 40, 40, 20,
’connection’, [], [])
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponents, TrainingImages,
ModelComponents, 40, 40, 20, 0.85, 0, 0, rad(15),
’speed’, ’rigidity’, 0.2, 0.5, ComponentTrainingID)
* Create the component model based on the training result.
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), 10,
0.8, ’auto’, ’auto’, ’none’, ’use_polarity’,
’false’, ComponentModelID, RootRanking)

* Second example that shows the use of gen_initial_components to evaluate
* the effect of the feature extraction parameters.

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the components.
gen_rectangle2 (ComponentRegions, 318, 109, -1.62, 34, 19)
gen_rectangle2 (Rectangle2, 342, 238, -1.63, 32, 17)
gen_rectangle2 (Rectangle3, 355, 505, 1.41, 25, 17)
ComponentRegions := [ComponentRegions,Rectangle2]
ComponentRegions := [ComponentRegions,Rectangle3]
add_channels (ComponentRegions, ModelImage, ModelImageReduced)
gen_initial_components (ModelImageReduced, InitialComponents, 15, 40, 15,
’connection’, [], [])
* Create the component model by explicitly specifying the relations.
create_component_model (ModelImage, ComponentRegions, 20, 20, rad(25), 0,
rad(360), 15, 40, 15, 10, 0.8, ’auto’, ’auto’,
’none’, ’use_polarity’, ’false’, ComponentModelID,
RootRanking)

Result
If the parameter values are correct, the operator gen_initial_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.

Parallelization Information
gen_initial_components is reentrant and processed without parallelization.
Possible Predecessors
draw_region, add_channels, reduce_domain
Possible Successors
train_model_components
Alternatives
inspect_shape_model
Module
Matching

get_component_model_params ( Hlong ComponentModelID,
double *MinScoreComp, Hlong *RootRanking, Hlong *ShapeModelIDs )

T_get_component_model_params ( const Htuple ComponentModelID,
Htuple *MinScoreComp, Htuple *RootRanking, Htuple *ShapeModelIDs )

Return the parameters of a component model.


The operator get_component_model_params returns the parameters of the component model
ComponentModelID. In particular, this output can be used to check the parameters RootRanking and
MinScoreComp after reading the component model with read_component_model. Additionally, the oper-
ator returns the shape model IDs ShapeModelIDs of the model components. The order of the returned shape
model IDs corresponds to the indices of the components within the component model ComponentModelID.
The IDs can be used to query their shape model parameters with get_shape_model_params.
Parameter
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong
Handle of the component model.
. MinScoreComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Minimum score of the instances of the components to be found.
. RootRanking (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Ranking of the model components expressing their suitability to act as root component.
. ShapeModelIDs (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model(-array) ; (Htuple .) Hlong *
Handles of the shape models of the individual model components.
Example (Syntax: HDevelop)

read_component_model (’pliers.cpm’, ComponentModelID)
get_component_model_params (ComponentModelID, MinScoreComp, RootRanking,
ShapeModelIDs)
for i := 0 to |ShapeModelIDs|-1 by 1
get_shape_model_params (ShapeModelIDs[i], NumLevels, AngleStart,
AngleExtent, AngleStep, ScaleMin, ScaleMax,
ScaleStep, Metric, MinContrast)
endfor

Result
If the handle of the component model is valid, the operator get_component_model_params returns the
value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
get_component_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
get_shape_model_params

Module
Matching

get_component_model_tree ( Hobject *Tree, Hobject *Relations,
Hlong ComponentModelID, Hlong RootComponent, const char *Image,
Hlong *StartNode, Hlong *EndNode, double *Row, double *Column,
double *Phi, double *Length1, double *Length2, double *AngleStart,
double *AngleExtent )

T_get_component_model_tree ( Hobject *Tree, Hobject *Relations,
const Htuple ComponentModelID, const Htuple RootComponent,
const Htuple Image, Htuple *StartNode, Htuple *EndNode, Htuple *Row,
Htuple *Column, Htuple *Phi, Htuple *Length1, Htuple *Length2,
Htuple *AngleStart, Htuple *AngleExtent )

Return the search tree of a component model.


get_component_model_tree returns the search tree Tree and the associated relations Relations
of the component model that is passed in ComponentModelID in form of regions as well as in numer-
ical form. get_component_model_tree is particularly useful in order to visualize the search or-
der of the components, which was automatically computed in create_trained_component_model or
create_component_model.
Because the search tree depends on the selected root component, the root component must be passed in
RootComponent. The nodes in the tree Tree represent the model components, the connecting lines between
the nodes indicate which components are searched relative to each other. The position of the nodes corresponds to
the position of the components in the model image (if Image = ’model_image’ or Image = 0) or in a training
image (if Image ≥ 1). In the latter case, the component model must have been created based on a component
training result with create_trained_component_model.
Let n be the number of components in ComponentModelID. The region object tuple Relations of length
n is designed as follows: For each component a separate region is returned. The positions of all components in
the image are represented by circles with a radius of 3 pixels. For each component other than the root compo-
nent RootComponent, additionally the position relation and the orientation relation relative to the predecessor
component in the search tree are represented. The position relation is represented by a rectangle, the orientation
relation is represented by a circle sector with a radius of 30 pixels. The center of the circle is placed at the mean
relative position of the component. The rectangle describes the movement of the reference point of the respective
component relative to the pose of its predecessor component, the circle sector describes the variation of the relative
orientation. A relative orientation of 0 corresponds to the relative orientation of both components in the model
image.
In addition to the regions, the search tree as well as the associated relations are also returned in numerical form.
The search tree is described by the two tuples StartNode and EndNode, both of length n, which contain the
start and the end node of all arcs in the tree. The nodes contain the indices of the components. This means that
during the search the component that is described by the end node is searched relative to the pose of the component
that is described by the start node (predecessor component). Since the root component is not searched relative to
any other component, and hence does not have a predecessor component, the associated start node is set to -1. The
relations are returned in Row, Column, Phi, Length1, Length2, AngleStart, and AngleExtent. These
parameters are tuples of length n, and contain the relations of all components relative to their associated predeces-
sor component, where the order of the values within the tuples is determined by the index of the corresponding
component. The position relation is described by the parameters of the corresponding rectangle Row, Column,
Phi, Length1, and Length2 (see gen_rectangle2). The orientation relation is described by the starting
angle AngleStart and the angle extent AngleExtent.
For the root component as well as for components that do not have a predecessor in the current image or that
have not been found in the current image, an empty region is returned and the corresponding values of the seven
parameters are set to 0.
Parameter

. Tree (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Search tree.
. Relations (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Relations of components that are connected in the search tree.
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong
Handle of the component model.
. RootComponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Index of the root component.
Suggested values : RootComponent ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
. Image (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Image for which the tree is to be returned.
Default Value : "model_image"
Suggested values : Image ∈ {"model_image", 0, 1, 2, 3, 4, 5, 6, 7, 8}
. StartNode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Component index of the start node of an arc in the search tree.
. EndNode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Component index of the end node of an arc in the search tree.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double *
Row coordinate of the center of the rectangle representing the relation.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double *
Column index of the center of the rectangle representing the relation.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double *
Orientation of the rectangle representing the relation (radians).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.width(-array) ; (Htuple .) double *
First radius (half length) of the rectangle representing the relation.
Assertion : Length1 ≥ 0.0
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.height(-array) ; (Htuple .) double *
Second radius (half width) of the rectangle representing the relation.
Assertion : (Length2 ≥ 0.0) ∧ (Length2 ≤ Length1)
. AngleStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Smallest relative orientation angle.
. AngleExtent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Extent of the relative orientation angle.
Example (Syntax: HDevelop)

* Read the model image.
read_image (ModelImage, ’model_image.tif’)
* Describe the model components.
gen_rectangle2 (ComponentRegions, 318, 109, -1.62, 34, 19)
gen_rectangle2 (Rectangle2, 342, 238, -1.63, 32, 17)
gen_rectangle2 (Rectangle3, 355, 505, 1.41, 25, 17)
ComponentRegions := [ComponentRegions,Rectangle2]
ComponentRegions := [ComponentRegions,Rectangle3]
* Create the component model.
create_component_model (ModelImage, ComponentRegions, 20, 20, rad(25), 0,
rad(360), 15, 40, 15, 10, 0.8, 0, 0, ’none’,
’use_polarity’, ’true’, ComponentModelID,
RootRanking)
* Get the component model tree.
get_component_model_tree (Tree, Relations, ComponentModelID, RootRanking,
’model_image’, StartNode, EndNode, Row, Column,
Phi, Length1, Length2, AngleStart, AngleExtent)
dev_set_colored (12)
dev_display (ModelImage)
dev_display (Tree)
dev_display (Relations)

Result
If the parameters are valid, the operator get_component_model_tree returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
get_component_model_tree is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
train_model_components
Module
Matching

get_component_relations ( Hobject *Relations,
Hlong ComponentTrainingID, Hlong ReferenceComponent,
const char *Image, double *Row, double *Column, double *Phi,
double *Length1, double *Length2, double *AngleStart,
double *AngleExtent )

T_get_component_relations ( Hobject *Relations,
const Htuple ComponentTrainingID, const Htuple ReferenceComponent,
const Htuple Image, Htuple *Row, Htuple *Column, Htuple *Phi,
Htuple *Length1, Htuple *Length2, Htuple *AngleStart,
Htuple *AngleExtent )

Return the relations between the model components that are contained in a training result.
get_component_relations returns the relations between model components after training them with
train_model_components. With the parameter ReferenceComponent, you can select a reference com-
ponent. get_component_relations then returns the relations between the reference component and
all other components in the model image (if Image = ’model_image’ or Image = 0) or in a training image
(if Image ≥ 1). In order to obtain the relations in the ith training image, Image must be set to i. The re-
sult of the training returned by train_model_components must be passed in ComponentTrainingID.
ReferenceComponent describes the index of the reference component and must be within the range of 0 and
n-1, if n is the number of model components (see train_model_components).
The relations are returned in form of regions in Relations as well as in form of numerical values in Row,
Column, Phi, Length1, Length2, AngleStart, and AngleExtent.
The region object tuple Relations is designed as follows. For each component a separate region is returned.
Consequently, Relations contains n regions, where the order of the regions within the tuple is determined by the
index of the corresponding components. The positions of all components in the image are represented by circles
with a radius of 3 pixels. For each component other than the reference component ReferenceComponent, ad-
ditionally the position relation and the orientation relation relative to the reference component are represented.
The position relation is represented by a rectangle and the orientation relation is represented by a circle sec-
tor with a radius of 30 pixels. The center of the circle is placed at the mean relative position of the compo-
nent. The rectangle describes the movement of the reference point of the respective component relative to the
pose of the reference component, while the circle sector describes the variation of the relative orientation (see
train_model_components). A relative orientation of 0 corresponds to the relative orientation of both com-
ponents in the model image. If both components appear in the same relative orientation in all images, the circle
sector consequently degenerates to a straight line.
In addition to the region object tuple Relations, the relations are also returned in form of numerical values in
Row, Column, Phi, Length1, Length2, AngleStart, and AngleExtent. These parameters are tuples
of length n and contain the relations of all components relative to the reference component, where the order of
the values within the tuples is determined by the index of the corresponding component. The position relation is
described by the parameters of the corresponding rectangle Row, Column, Phi, Length1, and Length2 (see
gen_rectangle2). The orientation relation is described by the starting angle AngleStart and the angle
extent AngleExtent. For the reference component only the position within the image is returned in Row and
Column. All other values are set to 0.

If the reference component has not been found in the current image, an array of empty regions is returned and the
corresponding parameter values are set to 0.
The operator get_component_relations is particularly useful in order to visualize the result of the train-
ing that was performed with train_model_components. With this, it is possible to evaluate the varia-
tions that are contained in the training images. Sometimes it might be reasonable to restart the training with
train_model_components while using a different set of training images.
Parameter

. Relations (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region representation of the relations.
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong
Handle of the training result.
. ReferenceComponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Index of reference component.
Restriction : ReferenceComponent ≥ 0
. Image (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Image for which the component relations are to be returned.
Default Value : "model_image"
Suggested values : Image ∈ {"model_image", 0, 1, 2, 3, 4, 5, 6, 7, 8}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double *
Row coordinate of the center of the rectangle representing the relation.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double *
Column index of the center of the rectangle representing the relation.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double *
Orientation of the rectangle representing the relation (radians).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.width(-array) ; (Htuple .) double *
First radius (half length) of the rectangle representing the relation.
Assertion : Length1 ≥ 0.0
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.height(-array) ; (Htuple .) double *
Second radius (half width) of the rectangle representing the relation.
Assertion : (Length2 ≥ 0.0) ∧ (Length2 ≤ Length1)
. AngleStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Smallest relative orientation angle.
. AngleExtent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Extent of the relative orientation angles.
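A minimal HDevelop sketch (not an original manual example; it assumes a training handle ComponentTrainingID returned by train_model_components and the model image in ModelImage) that visualizes the relations relative to component 0 in the model image:

* Query the relations with respect to component 0.
get_component_relations (Relations, ComponentTrainingID, 0, ’model_image’,
                         Row, Column, Phi, Length1, Length2, AngleStart,
                         AngleExtent)
dev_display (ModelImage)
dev_display (Relations)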
Result
If the handle of the training result is valid, the operator get_component_relations returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
get_component_relations is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
train_model_components
See also
gen_rectangle2
Module
Matching

get_found_component_model ( Hobject *FoundComponents,
Hlong ComponentModelID, Hlong ModelStart, Hlong ModelEnd,
double RowComp, double ColumnComp, double AngleComp, double ScoreComp,
Hlong ModelComp, Hlong ModelMatch, const char *MarkOrientation,
double *RowCompInst, double *ColumnCompInst, double *AngleCompInst,
double *ScoreCompInst )

T_get_found_component_model ( Hobject *FoundComponents,
const Htuple ComponentModelID, const Htuple ModelStart,
const Htuple ModelEnd, const Htuple RowComp, const Htuple ColumnComp,
const Htuple AngleComp, const Htuple ScoreComp,
const Htuple ModelComp, const Htuple ModelMatch,
const Htuple MarkOrientation, Htuple *RowCompInst,
Htuple *ColumnCompInst, Htuple *AngleCompInst, Htuple *ScoreCompInst )

Return the components of a found instance of a component model.


get_found_component_model returns the components of a found instance of the component model
ComponentModelID in form of contour regions in FoundComponents as well as in numerical form.
The operator get_found_component_model is particularly useful in order to visualize the matches that
have been obtained by find_component_model.
The pose of the returned components corresponds to their pose in the search image as returned by
find_component_model. Hence, the parameters ModelStart, ModelEnd, RowComp, ColumnComp,
AngleComp, ScoreComp, and ModelComp must be passed to get_found_component_model as they
have been returned by find_component_model. In ModelMatch the index of the found instance of the
component model must be passed. Consequently, ModelMatch must lie within the range between 0 and m-1,
where m is the number of elements in ModelStart and ModelEnd, and hence corresponds to the number of
found model instances. For example, if the best match should be returned, ModelMatch should be set to 0.
When dealing with rotationally symmetric components, one may wish to mark the current orientation of the found
component. This can be achieved by setting MarkOrientation to ’true’. In this case, the contour region
of each component is complemented by an arrow at its reference point that points in the reference direction.
The reference direction of a component is based on the orientation of the component in the model image (see
train_model_components or create_component_model) and is represented by an arrow that starts
at the reference point and points to the right in the horizontal direction.
For convenience, the pose parameters as well as the score of each component of the found model instance
are additionally returned in numerical form in RowCompInst, ColumnCompInst, AngleCompInst, and
ScoreCompInst. The four tuples are always of length n, where n is the number of components in the com-
ponent model ComponentModelID. If a component could not be found during the search, an empty region
is passed in the corresponding element of FoundComponents and the value of the corresponding element in
RowCompInst, ColumnCompInst, AngleCompInst, and ScoreCompInst is set to 0.
Parameter
. FoundComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Found components of the selected component model instance.
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; (Htuple .) Hlong
Handle of the component model.
. ModelStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Start index of each found instance of the component model in the tuples describing the component matches.
. ModelEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
End index of each found instance of the component model to the tuples describing the component matches.
. RowComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double
Row coordinate of the found component matches.
. ColumnComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double
Column coordinate of the found component matches.
. AngleComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Rotation angle of the found component matches.
. ScoreComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Score of the found component matches.

. ModelComp (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Index of the found components.
. ModelMatch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Index of the found instance of the component model to be returned.
. MarkOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Mark the orientation of the components.
Default Value : "false"
List of values : MarkOrientation ∈ {"true", "false"}
. RowCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinate of all components of the selected model instance.
. ColumnCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinate of all components of the selected model instance.
. AngleCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Rotation angle of all components of the selected model instance.
. ScoreCompInst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Score of all components of the selected model instance.
Example (Syntax: HDevelop)

* Read a component model from file.
read_component_model (’pliers.cpm’, ComponentModelID)
* Find the component model in a run-time image.
read_image (SearchImage, ’search_image.tif’)
find_component_model (SearchImage, ComponentModelID, RootRanking, 0,
rad(360), 0.5, 0, 0.5, ’stop_search’, ’prune_branch’,
’none’, 0.8, ’least_squares’, 0, 0.8, ModelStart,
ModelEnd, Score, RowComp, ColumnComp, AngleComp,
ScoreComp, ModelComp)
* Visualize the found instances.
for i := 0 to |ModelStart|-1 by 1
get_found_component_model (FoundComponents, ComponentModelID,
ModelStart, ModelEnd, RowComp, ColumnComp,
AngleComp, ScoreComp, ModelComp, i, ’false’,
RowCompInst, ColumnCompInst, AngleCompInst,
ScoreCompInst)
dev_display (FoundComponents)
endfor

Result
If the parameters are valid, the operator get_found_component_model returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_found_component_model is reentrant and processed without parallelization.
Possible Predecessors
find_component_model
See also
train_model_components, create_component_model
Module
Matching

get_training_components ( Hobject *TrainingComponents,
Hlong ComponentTrainingID, const char *Components, const char *Image,
const char *MarkOrientation, double *Row, double *Column,
double *Angle, double *Score )

T_get_training_components ( Hobject *TrainingComponents,
const Htuple ComponentTrainingID, const Htuple Components,
const Htuple Image, const Htuple MarkOrientation, Htuple *Row,
Htuple *Column, Htuple *Angle, Htuple *Score )

Return the initial or model components in a certain image.


get_training_components returns all initial components (if Components = ’initial_components’) or all
model components (if Components = ’model_components’) in TrainingComponents in the form of contour
regions as well as in numerical form. Alternatively, by directly passing the index of an initial component, all found
poses of that initial component (i.e., the poses before solving the ambiguities in train_model_components)
are returned.
The pose of the returned components corresponds to their pose in the model image (if Image = ’model_image’
or Image = 0) or in a training image (if Image ≥ 1). In order to obtain the components in the pose at which they
were found in the ith training image, Image must be set to i. Furthermore, when dealing with rotationally sym-
metric components, one may wish to mark the current orientation of the found component. This can be achieved
by setting MarkOrientation to ’true’. In this case, the contour region of each component is complemented by
an arrow at its reference point pointing in the reference direction. The reference direction of a component is based
on the orientation of the component in the model image and is represented by an arrow that starts at the reference
point and points to the right in the horizontal direction.
In addition to the contour regions, the pose and the score of all found components is returned in
Row, Column, Angle, and Score (see find_shape_model). If Components was set to ’ini-
tial_components’ or ’model_components’, the tuples TrainingComponents, Row, Column, Angle, and
Score always contain the same number of elements as initial components or model components contained in
ComponentTrainingID, respectively. If one component was not found in the image, an empty region is re-
turned in the corresponding element of TrainingComponents and the elements of the four output control
parameters are set to the value 0. In contrast, if the index of an initial component is passed in Components, these
tuples contain as many elements as matches of the corresponding initial component were found in the image.
The operator get_training_components is particularly useful in order to visualize the result of the
training ComponentTrainingID, which was performed with train_model_components. With this,
it is possible to evaluate the suitability of the training images or to inspect the influence of the param-
eters of train_model_components. Sometimes it might be reasonable to restart the training with
train_model_components using a different set of training images or after adjusting the parameters.
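In C, the poses of the model components in a particular training image can be queried with the tuple interface roughly as follows (a hedged sketch, not taken from the manual: ComponentTrainingID is assumed to be the training handle wrapped in an Htuple, and the tuple helpers create_tuple, set_i, set_s, and destroy_tuple as well as clear_obj are assumed as provided by the HALCON/C interface).

Hobject TrainingComponents;
Htuple  Components, Image, MarkOrientation;
Htuple  Row, Column, Angle, Score;

create_tuple(&Components, 1);
set_s(Components, "model_components", 0);   /* or "initial_components", or an index */
create_tuple(&Image, 1);
set_i(Image, 1, 0);                         /* poses in the first training image     */
create_tuple(&MarkOrientation, 1);
set_s(MarkOrientation, "false", 0);
T_get_training_components(&TrainingComponents, ComponentTrainingID,
                          Components, Image, MarkOrientation,
                          &Row, &Column, &Angle, &Score);
/* ... display TrainingComponents, inspect Row, Column, Angle, Score ... */
clear_obj(TrainingComponents);
destroy_tuple(Components);
destroy_tuple(Image);
destroy_tuple(MarkOrientation);
destroy_tuple(Row);
destroy_tuple(Column);
destroy_tuple(Angle);
destroy_tuple(Score);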
Parameter

. TrainingComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Contour regions of the initial components or of the model components.
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong
Handle of the training result.
. Components (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Type of returned components or index of an initial component.
Default Value : "model_components"
Suggested values : Components ∈ {"model_components", "initial_components", 0, 1, 2, 3, 4, 5}
. Image (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Image for which the components are to be returned.
Default Value : "model_image"
Suggested values : Image ∈ {"model_image", 0, 1, 2, 3, 4, 5, 6, 7, 8}
. MarkOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Mark the orientation of the components.
Default Value : "false"
List of values : MarkOrientation ∈ {"true", "false"}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinate of the found instances of all initial components or model components.

. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinate of the found instances of all initial components or model components.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Rotation angle of the found instances of all components.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Score of the found instances of all components.
Example (Syntax: HDevelop)

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
InitialComponentRegions := [InitialComponentRegions,Rectangle2]
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images.
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.6, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.4, ComponentTrainingID)
* Visualize the result of the training.
NumInitComp := |InitialComponentRegions|
NumTrainings := |TrainingImages|
for i := 1 to NumTrainings by 1
TrainingImage := TrainingImages[i]
for j := 0 to NumInitComp-1 by 1
* Visualize the ambiguous poses of each initial component.
get_training_components (TrainingComponents, ComponentTrainingID,
j, i, ’false’, Row, Column, Angle, Score)
endfor
* Visualize the final poses of the initial components.
get_training_components (TrainingComponents, ComponentTrainingID,
’initial_components’, i, ’false’,
Row, Column, Angle, Score)
* Visualize the final poses of the model components.
get_training_components (TrainingComponents, ComponentTrainingID,
’model_components’, i, ’false’,
Row, Column, Angle, Score)
endfor

Result
If the handle of the training result is valid, the operator get_training_components returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
get_training_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
train_model_components

See also
find_shape_model
Module
Matching

inspect_clustered_components ( Hobject *ModelComponents,
Hlong ComponentTrainingID, const char *AmbiguityCriterion,
double MaxContourOverlap, double ClusterThreshold )

T_inspect_clustered_components ( Hobject *ModelComponents,
const Htuple ComponentTrainingID, const Htuple AmbiguityCriterion,
const Htuple MaxContourOverlap, const Htuple ClusterThreshold )

Inspect the rigid model components obtained from the training.


inspect_clustered_components creates a representation of the rigid model components based on the
training result ComponentTrainingID in the form of contour regions. The resulting rigid model components
are computed depending on the criterion that is used to solve the ambiguities AmbiguityCriterion, the
maximum allowable contour overlap MaxContourOverlap, and the cluster threshold ClusterThreshold
(see train_model_components). The cluster threshold, for example, influences the merging of the initial
components. The greater the threshold is chosen, the fewer initial components are merged. The determined rigid
model components are returned in ModelComponents.
Hence, after the components have been trained once by using train_model_components,
inspect_clustered_components can be used to estimate the effect of different values for the parameters
AmbiguityCriterion, MaxContourOverlap, and ClusterThreshold without performing the com-
plete training procedure several times. Once the desired parameter values have been found, they can be efficiently
adopted into the training result by using cluster_model_components.
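In C, several candidate values can be tried out quickly, for example as in the following hypothetical fragment (ComponentTrainingID is assumed to be a valid handle obtained from train_model_components, and clear_obj is assumed for releasing the returned iconic object):

Hobject ModelComponents;
Hlong   k;

for (k = 1; k <= 9; k += 2)
{
  /* Try ClusterThreshold = 0.1, 0.3, 0.5, 0.7, 0.9 with fixed values
   * for AmbiguityCriterion and MaxContourOverlap.                    */
  inspect_clustered_components(&ModelComponents, ComponentTrainingID,
                               "rigidity", 0.2, k / 10.0);
  /* ... display ModelComponents and judge the resulting clustering ... */
  clear_obj(ModelComponents);
}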
Parameter
. ModelComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Contour regions of rigid model components.
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; Hlong
Handle of the training result.
. AmbiguityCriterion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Criterion for solving the ambiguities.
Default Value : "rigidity"
List of values : AmbiguityCriterion ∈ {"distance", "orientation", "distance_orientation", "rigidity"}
. MaxContourOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum contour overlap of the found initial components.
Default Value : 0.2
Suggested values : MaxContourOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxContourOverlap) ∧ (MaxContourOverlap ≤ 1)
. ClusterThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Threshold for clustering the initial components.
Default Value : 0.5
Suggested values : ClusterThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (0 ≤ ClusterThreshold) ∧ (ClusterThreshold ≤ 1)
Example (Syntax: HDevelop)

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
InitialComponentRegions := [InitialComponentRegions,Rectangle2]
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i$’02’+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.65, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.5, ComponentTrainingID)
* Find the best value for the parameter ClusterThreshold.
inspect_clustered_components (ModelComponents, ComponentTrainingID,
’rigidity’, 0.2, 0.4)
* Adopt the ClusterThreshold into the training result.
cluster_model_components (ModelComponents, ModelComponents,
ComponentTrainingID, ’rigidity’, 0.2, 0.4)
* Create the component model based on the training result.
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), 10,
0.5, ’auto’, ’auto’, ’none’, ’use_polarity’,
’false’, ComponentModelID, RootRanking)

Result
If the handle of the training result is valid, the operator inspect_clustered_components returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
inspect_clustered_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
cluster_model_components
Module
Matching

modify_component_relations ( Hlong ComponentTrainingID,
const char *ReferenceComponent, const char *ToleranceComponent,
double PositionTolerance, double AngleTolerance )

T_modify_component_relations ( const Htuple ComponentTrainingID,
const Htuple ReferenceComponent, const Htuple ToleranceComponent,
const Htuple PositionTolerance, const Htuple AngleTolerance )

Modify the relations within a training result.


modify_component_relations modifies the relations between the model components within the com-
ponent training result ComponentTrainingID. The selection of the relation(s) that should be changed
is performed by setting ReferenceComponent and ToleranceComponent, respectively. This
means that the relative movement of component ToleranceComponent with respect to the component
ReferenceComponent is modified.
The size of the change is specified as follows: By specifying a position tolerance PositionTolerance, the
semi-axes of the rectangle that describes the reference point’s movement (see train_model_components)
are enlarged by PositionTolerance pixels. Accordingly, by specifying an orientation toler-
ance AngleTolerance, the angle range that describes the variation of the relative orientation (see
train_model_components) is enlarged by AngleTolerance to both sides. Consequently, negative tolerance
values lead to a decreased size of the relations. The operator modify_component_relations is
particularly useful when the training images that were used during the training do not cover the entire spectrum of
the relative movements.
In order to select the relations that should be modified, values for ReferenceComponent and
ToleranceComponent can be passed in one of the following ways: For each of both parameters either one
value, several values, or the string ’all’ can be passed. The following table summarizes the different possibilities
by giving the affected relations for different combinations of parameter values. Here, four model components are
assumed (0, 1, 2, and 3). If, for example, ReferenceComponent is set to 0 and ToleranceComponent
is set to 1, then the relation (0,1), which corresponds to the relative movement of component 1 with respect to
component 0, will be modified.
ReferenceComponent    ToleranceComponent    Affected Relation(s)
’all’                 ’all’                 (0,1) (0,2) (0,3)
                                            (1,0) (1,2) (1,3)
                                            (2,0) (2,1) (2,3)
                                            (3,0) (3,1) (3,2)
’all’                 [1,2]                 (0,1) (0,2)
                                            (1,2)
                                            (2,1)
                                            (3,1) (3,2)
[0,1]                 ’all’                 (0,1) (0,2) (0,3)
                                            (1,0) (1,2) (1,3)
0                     1                     (0,1)
0                     [1,2]                 (0,1) (0,2)
[0,1]                 2                     (0,2) (1,2)
[0,1,2]               [1,2,3]               (0,1) (1,2) (2,3)
The number of tolerance values passed in PositionTolerance and AngleTolerance must be either 1 or
be equal to the number of affected relations. In the former case all affected relations are modified by the same
value, whereas in the latter case each relation can be modified individually by passing different values within a
tuple.
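For instance, the following hedged C fragment (not part of the original manual) enlarges all relations of a training result by 10 pixels in position and by 0.1 rad (approximately 5.7 degrees) in orientation; ComponentTrainingID is assumed to be a valid handle from train_model_components or read_training_components:

Hlong ComponentTrainingID;

/* ... obtain ComponentTrainingID from the training or from a file ... */
modify_component_relations(ComponentTrainingID, "all", "all", 10.0, 0.1);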
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong
Handle of the training result.
. ReferenceComponent (input_control) . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong
Model component(s) relative to which the movement(s) should be modified.
Default Value : "all"
Suggested values : ReferenceComponent ∈ {"all", 0, 1, 2, 3, 4, 5, 6}
. ToleranceComponent (input_control) . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong
Model component(s) of which the relative movement(s) should be modified.
Default Value : "all"
Suggested values : ToleranceComponent ∈ {"all", 0, 1, 2, 3, 4, 5, 6}
. PositionTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Change of the position relation in pixels.
Suggested values : PositionTolerance ∈ {1, 2, 3, 4, 5, 10, 20, 30}
. AngleTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Change of the orientation relation in radians.
Suggested values : AngleTolerance ∈ {0.017, 0.035, 0.052, 0.070, 0.087, 0.175, 0.349}
Result
If the handle of the training result is valid, the operator modify_component_relations returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
modify_component_relations is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components
Possible Successors
create_trained_component_model

Module
Matching

read_component_model ( const char *FileName, Hlong *ComponentModelID )


T_read_component_model ( const Htuple FileName,
Htuple *ComponentModelID )

Read a component model from a file.


The operator read_component_model reads a component model, which has been written with
write_component_model, from the file FileName and returns it in ComponentModelID.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ComponentModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; Hlong *
Handle of the component model.
Result
If the file name is valid, the operator read_component_model returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
read_component_model is processed completely exclusively without parallelization.
Possible Successors
find_component_model
Module
Matching

read_training_components ( const char *FileName,
Hlong *ComponentTrainingID )

T_read_training_components ( const Htuple FileName,
Htuple *ComponentTrainingID )

Read a component training result from a file.


The operator read_training_components reads a component training result, which has been written with
write_training_components, from the file FileName and returns it in ComponentTrainingID.
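A typical call in C might look like the following sketch (the file name and extension are purely illustrative):

Hlong ComponentTrainingID;

read_training_components("pliers_training.ctr", &ComponentTrainingID);
/* ... e.g., inspect the result with get_training_components or create a
 * component model from it with create_trained_component_model ...       */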
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ComponentTrainingID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; Hlong *
Handle of the training result.
Result
If the file name is valid, the operator read_training_components returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
read_training_components is processed completely exclusively without parallelization.
Possible Successors
create_trained_component_model
See also
train_model_components, clear_training_components
Module
Matching

train_model_components ( const Hobject ModelImage,
const Hobject InitialComponents, const Hobject TrainingImages,
Hobject *ModelComponents, Hlong ContrastLow, Hlong ContrastHigh,
Hlong MinSize, double MinScore, Hlong SearchRowTol,
Hlong SearchColumnTol, double SearchAngleTol,
const char *TrainingEmphasis, const char *AmbiguityCriterion,
double MaxContourOverlap, double ClusterThreshold,
Hlong *ComponentTrainingID )

T_train_model_components ( const Hobject ModelImage,
const Hobject InitialComponents, const Hobject TrainingImages,
Hobject *ModelComponents, const Htuple ContrastLow,
const Htuple ContrastHigh, const Htuple MinSize,
const Htuple MinScore, const Htuple SearchRowTol,
const Htuple SearchColumnTol, const Htuple SearchAngleTol,
const Htuple TrainingEmphasis, const Htuple AmbiguityCriterion,
const Htuple MaxContourOverlap, const Htuple ClusterThreshold,
Htuple *ComponentTrainingID )

Train components and relations for the component-based matching.


train_model_components extracts the final (rigid) model components and trains their mutual relations, i.e.,
their relative movements, on the basis of the initial components by considering several training images. The result
of the training is returned in the handle ComponentTrainingID. The training result can be subsequently used
to create the actual component model using create_trained_component_model.
train_model_components should be used in cases where the relations of the components are not known
and should be trained automatically. In contrast, if the relations are known no training needs to be per-
formed with train_model_components. Instead, the component model can be directly created with
create_component_model.
If the initial components have been automatically created by using gen_initial_components,
InitialComponents contains the contour regions of the initial components. In contrast, if the initial com-
ponents should be defined by the user, they can be directly passed in InitialComponents. However, in-
stead of the contour regions for each initial component, its enclosing region must be passed in the tuple. The
(contour) regions refer to the model image ModelImage. If the initial components have been obtained using
gen_initial_components, the model image should be the same as in gen_initial_components.
Please note that each initial component is part of at most one rigid model component. This is because during the
training initial components can be merged into rigid model components if required (see below). However, they
cannot be split and distributed to several rigid model components.
train_model_components uses the following approach to perform the training: In the first step, the initial
components are searched in all training images. In some cases, one initial component may be found in a training
image more than once. Thus, in the second step, the resulting ambiguities are solved, i.e., the most probable pose
of each initial component is found. Consequently, after solving the ambiguities, in all training images at most one
pose of each initial component is available. In the next step the poses are analyzed and those initial components
that do not show any relative movement are clustered to the final rigid model components. Finally, in the last step
the relations between the model components are computed by analyzing their relative poses over the sequence of
training images. The parameters that are associated with the mentioned steps are explained in the following.
The training is performed based on several training images, which are passed in TrainingImages. Each train-
ing image must show at most one instance of the compound object and should contain the full range of allowed
relative movements of the model components. If, for example, the component model of an on/off switch should be
trained, one training image that shows the switch turned off is sufficient if the switch in the model image is turned
on, or vice versa.
The principle of the training is to find the initial components in all training images and to analyze their
poses. For this, for each initial component a shape model is created (see create_shape_model),
which is then used to determine the poses (position and orientation) of the initial components in the train-
ing images (see find_shape_model). Depending on the mode that is set by using set_system
(’pregenerate_shape_models’,...), the shape model is either pregenerated completely or com-
puted online during the search. The mode influences the computation time as well as the robustness of
the training. Furthermore, it should be noted that if single-channel images are used in ModelImage as
well as in TrainingImages the metric ’use_polarity’ is used internally for create_shape_model,
while if multichannel images are used in either ModelImage or TrainingImages the metric ’ig-
nore_color_polarity’ is used. Finally, it should be noted that while the number of channels in ModelImage
and TrainingImages may be different, e.g., to facilitate model generation from synthetically generated im-
ages, the number of channels in all the images in TrainingImages must be identical. For further details
see create_shape_model. The creation of the shape models can be influenced by choosing appropriate
values for the parameters ContrastLow, ContrastHigh, and MinSize. These parameters have the same
meaning as in gen_initial_components and can be automatically determined by passing ’auto’: If both
hysteresis thresholds should be automatically determined, both ContrastLow and ContrastHigh must be set
to ’auto’. In contrast, if only one threshold value should be determined, ContrastLow must be set to ’auto’
while ContrastHigh must be set to an arbitrary value different from ’auto’. If the initial components have been
automatically created by gen_initial_components, the parameters ContrastLow, ContrastHigh,
and MinSize should be set to the same values as in gen_initial_components.
To influence the search for the initial components, the parameters MinScore, SearchRowTol,
SearchColumnTol, SearchAngleTol, and TrainingEmphasis can be set. The parameter MinScore
determines what score a potential match must at least have to be regarded as an instance of the initial component
in the training image. The larger MinScore is chosen, the faster the training is. If the initial components can
be expected never to be occluded in the training images, MinScore may be set as high as 0.8 or even 0.9 (see
find_shape_model).
By default, the components are searched only at points in which the component lies completely within the respec-
tive training image. This means that a component will not be found if it extends beyond the borders of the image,
even if it would achieve a score greater than MinScore. This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause components that extend beyond the image border
to be found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as
being occluded, i.e., they lower the score. It should be noted that the runtime of the training will increase in this
mode.
When dealing with a high number of initial components and many training images, the training may take a long
time (up to several minutes). In order to speed up the training it is possible to restrict the search space for the single
initial components in the training images. For this, the poses of the initial components in the model image are used
as reference pose. The parameters SearchRowTol and SearchColumnTol specify the position tolerance
region relative to the reference position in which the search is performed. Assume, for example, that the position of
an initial component in the model image is (100,200) and SearchRowTol is set to 20 and SearchColumnTol
is set to 10. Then, this initial component is searched in the training images only within the axis-aligned rectangle
that is determined by the upper left corner (80,190) and the lower right corner (120,210). The same holds for
the orientation angle range, which can be restricted by specifying the angle tolerance SearchAngleTol to
the angle range of [-SearchAngleTol,+SearchAngleTol]. Thus, it is possible to considerably reduce the
computational effort during the training by an adequate acquisition of the training images. If one of the three
parameters is set to -1, no restriction of the search space is applied in the corresponding dimension.
The input parameters ContrastLow, ContrastHigh, MinSize, MinScore, SearchRowTol,
SearchColumnTol, and SearchAngleTol must either contain one element, in which case the parameter is
used for all initial components, or must contain the same number of elements as the initial components contained
in InitialComponents, in which case each parameter element refers to the corresponding initial component
in InitialComponents.
The parameter TrainingEmphasis offers another possibility to influence the computation time of the training
and to simultaneously affect its robustness. If TrainingEmphasis is set to ’speed’, on the one hand the training
is comparatively fast, on the other hand it may happen in some cases that some initial components are not found in
the training images or are found at a wrong pose. Consequently, this would lead to an incorrect computation of the
rigid model components and their relations. The poses of the found initial components in the individual training
images can be examined by using get_training_components. If erroneous matches occur the training
should be restarted with TrainingEmphasis set to ’reliability’. This results in a higher robustness at the cost
of a longer computation time.
Furthermore, during the pose determination of the initial components ambiguities may occur if the initial com-
ponents are rotationally symmetric or if several initial components are identical or at least similar to each other.
To solve the ambiguities, the most probable pose is calculated for each initial component in each training im-
age. For this, the individual ambiguous poses are evaluated. The pose of an initial component receives a good
evaluation if the relative pose of the initial component with respect to the other initial components is similar to
the corresponding relative pose in the model image. The method to evaluate this similarity can be chosen with
AmbiguityCriterion. In almost all cases the best results are obtained with ’rigidity’, which assumes the
rigidity of the compound object. The more the rigidity of the compound object is violated by the pose of the initial
component, the worse its evaluation is. In the case of ’distance’, only the distance between the initial components
is considered during the evaluation. Hence, the pose of the initial component receives a good evaluation if its dis-
tances to the other initial components are similar to the corresponding distances in the model image. Accordingly,
when choosing ’orientation’, only the relative orientation is considered during the evaluation. Finally, the simulta-
neous consideration of distance and orientation can be achieved by choosing ’distance_orientation’. In contrast to
’rigidity’, the relative pose of the initial components is not considered when using ’distance_orientation’.
The process of solving the ambiguities can be further influenced by the parameter MaxContourOverlap. This
parameter describes the extent by which the contours of two initial component matches may overlap each other.
Let the letters ’I’ and ’T’, for example, be two initial components that should be searched in a training image
that shows the string ’IT’. Then, the initial component ’T’ should be found at its correct pose. In contrast, the
initial component ’I’ will be found at its correct pose (’I’) but also at the pose of the ’T’ because of the simi-
larity of the two components. To discard the wrong match of the initial component ’I’, an appropriate value for
MaxContourOverlap can be chosen: If overlapping matches should be tolerated, MaxContourOverlap
should be set to 1. If overlapping matches should be completely avoided, MaxContourOverlap should be set
to 0. By choosing a value between 0 and 1, the maximum percentage of overlapping contour pixels can be adjusted.
The decision which initial components can be clustered to rigid model components is made based on the poses
of the initial components in the model image and in the training images. Two initial components are merged
if they do not show any relative movement over all images. Assume that in the case of the above mentioned
switch the training image would show the same switch state as the model image, the algorithm would merge the
respective initial components because it assumes that the entire switch is one rigid model component. The extent
by which initial components are merged can be influenced with the parameter ClusterThreshold. This cluster
threshold is based on the probability that two initial components belong to the same rigid model component. Thus,
ClusterThreshold describes the minimum probability which two initial components must have in order to be
merged. Since the threshold is based on a probability value, it must lie in the interval between 0 and 1. The greater
the threshold is chosen, the smaller the number of initial components that are merged. If a threshold of 0 is chosen,
all initial components are merged into one rigid component, while for a threshold of 1 no merging is performed
and each initial component is adopted as one rigid model component.
The final rigid model components are returned in ModelComponents. Later, the index of a component region
in ModelComponents is used to denote the model component. The poses of the components in the training
images can be examined by using get_training_components.
After the determination of the model components their relative movements are analyzed by determining the move-
ment of one component with respect to a second component for each pair of components. For this, the components
are referred to their reference points. The reference point of a component is the center of gravity of its contour
region, which is returned in ModelComponents. It can be calculated by calling area_center. Finally, the
relative movement is represented by the smallest enclosing rectangle of arbitrary orientation of the reference point
movement and by the smallest enclosing angle interval of the relative orientation of the second component over all
images. The determined relations can be inspected by using get_component_relations.
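Translated to C, the training from the HDevelop example below could be written roughly as in the following sketch. This is only an illustration: the file names are those of the HDevelop example, the angle tolerance of 60 degrees is given directly in radians, error handling and the release of intermediate objects are omitted, and the C signatures of read_image, gen_rectangle2, gen_empty_obj, and concat_obj are assumed to follow the usual HALCON/C convention of passing output objects as pointer arguments.

Hobject ModelImage, InitialComponents, Rectangle;
Hobject TrainingImages, TrainingImage, ModelComponents;
Hlong   ComponentTrainingID;
char    FileName[64];
int     i;

read_image(&ModelImage, "model_image.tif");
/* Enclosing regions of the four initial components. */
gen_rectangle2(&InitialComponents, 212, 233, 0.62, 167, 29);
gen_rectangle2(&Rectangle, 298, 363, 1.17, 162, 34);
concat_obj(InitialComponents, Rectangle, &InitialComponents);
gen_rectangle2(&Rectangle, 63, 444, -0.26, 50, 27);
concat_obj(InitialComponents, Rectangle, &InitialComponents);
gen_rectangle2(&Rectangle, 120, 473, 0, 33, 20);
concat_obj(InitialComponents, Rectangle, &InitialComponents);
/* Collect the training images in one object tuple. */
gen_empty_obj(&TrainingImages);
for (i = 1; i <= 4; i++)
{
  sprintf(FileName, "training_image-%d.tif", i);
  read_image(&TrainingImage, FileName);
  concat_obj(TrainingImages, TrainingImage, &TrainingImages);
}
/* Extract the model components and train the relations
 * (SearchAngleTol = 1.047 rad, i.e., about 60 degrees). */
train_model_components(ModelImage, InitialComponents, TrainingImages,
                       &ModelComponents, 22, 60, 30, 0.6, 0, 0, 1.047,
                       "speed", "rigidity", 0.2, 0.4, &ComponentTrainingID);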
Parameter

. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image from which the shape models of the initial components should be created.
. InitialComponents (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Contour regions or enclosing regions of the initial components.
. TrainingImages (input_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Training images that are used for training the model components.
. ModelComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Contour regions of rigid model components.
. ContrastLow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Lower hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastLow ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : ContrastLow > 0

. ContrastHigh (input_control) . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Upper hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastHigh ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : (ContrastHigh > 0) ∧ (ContrastHigh ≥ ContrastLow)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum size of connected contour regions.
Default Value : "auto"
Suggested values : MinSize ∈ {"auto", 0, 5, 10, 20, 30, 40}
Restriction : MinSize ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Minimum score of the instances of the initial components to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MinScore) ∧ (MinScore ≤ 1)
. SearchRowTol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Search tolerance in row direction.
Default Value : -1
Suggested values : SearchRowTol ∈ {0, 10, 20, 30, 50, 100}
Restriction : (SearchRowTol = -1) ∨ (SearchRowTol ≥ 0)
. SearchColumnTol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Search tolerance in column direction.
Default Value : -1
Suggested values : SearchColumnTol ∈ {0, 10, 20, 30, 50, 100}
Restriction : (SearchColumnTol = -1) ∨ (SearchColumnTol ≥ 0)
. SearchAngleTol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Angle search tolerance.
Default Value : -1
Suggested values : SearchAngleTol ∈ {0.0, 0.17, 0.39, 0.78, 1.57}
Restriction : (SearchAngleTol = -1) ∨ (SearchAngleTol ≥ 0)
. TrainingEmphasis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Decision whether the training emphasis should lie on a fast computation or on a high robustness.
Default Value : "speed"
List of values : TrainingEmphasis ∈ {"speed", "reliability"}
. AmbiguityCriterion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Criterion for solving ambiguous matches of the initial components in the training images.
Default Value : "rigidity"
List of values : AmbiguityCriterion ∈ {"distance", "orientation", "distance_orientation", "rigidity"}
. MaxContourOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum contour overlap of the found initial components in a training image.
Default Value : 0.2
Suggested values : MaxContourOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Minimum Increment : 0.01
Recommended Increment : 0.05
Restriction : (0 ≤ MaxContourOverlap) ∧ (MaxContourOverlap ≤ 1)
. ClusterThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Threshold for clustering the initial components.
Default Value : 0.5
Suggested values : ClusterThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (0 ≤ ClusterThreshold) ∧ (ClusterThreshold ≤ 1)
. ComponentTrainingID (output_control) . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong *
Handle of the training result.
Example (Syntax: HDevelop)

* Get the model image.
read_image (ModelImage, ’model_image.tif’)
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
InitialComponentRegions := [InitialComponentRegions,Rectangle2]
InitialComponentRegions := [InitialComponentRegions,Rectangle3]
InitialComponentRegions := [InitialComponentRegions,Rectangle4]
* Get the training images.
TrainingImages := []
for i := 1 to 4 by 1
read_image (TrainingImage, ’training_image-’+i+’.tif’)
TrainingImages := [TrainingImages,TrainingImage]
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, TrainingImages,
ModelComponents, 22, 60, 30, 0.6, 0, 0, rad(60),
’speed’, ’rigidity’, 0.2, 0.4, ComponentTrainingID)

Result
If the parameter values are correct, the operator train_model_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
train_model_components is processed completely exclusively without parallelization.
Possible Predecessors
gen_initial_components
Possible Successors
inspect_clustered_components, cluster_model_components,
modify_component_relations, write_training_components,
get_training_components, get_component_relations,
create_trained_component_model, clear_training_components,
clear_all_training_components
See also
create_shape_model, find_shape_model
Module
Matching

write_component_model ( Hlong ComponentModelID, const char *FileName )


T_write_component_model ( const Htuple ComponentModelID,
const Htuple FileName )

Write a component model to a file.


The operator write_component_model writes the component model ComponentModelID to the file
FileName. The model can be read again with read_component_model.
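A minimal C usage sketch (the file name is only an illustrative placeholder, and ComponentModelID is assumed to stem from create_component_model or create_trained_component_model):

Hlong ComponentModelID, RestoredModelID;

/* ... create the component model and obtain ComponentModelID ... */
write_component_model(ComponentModelID, "pliers.cpm");
/* Later, e.g., in the run-time part of the application: */
read_component_model("pliers.cpm", &RestoredModelID);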
Parameter
. ComponentModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_model ; Hlong
Handle of the component model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the file name is valid (write permission), the operator write_component_model returns the value
H_MSG_TRUE. If necessary, an exception handling is raised.

Parallelization Information
write_component_model is reentrant and processed without parallelization.
Possible Predecessors
create_component_model, create_trained_component_model
Module
Matching

write_training_components ( Hlong ComponentTrainingID,
const char *FileName )

T_write_training_components ( const Htuple ComponentTrainingID,
const Htuple FileName )

Write a component training result to a file.


The operator write_training_components writes the component training result
ComponentTrainingID to the file FileName. The training result can be read again with
read_training_components.
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . component_training ; Hlong
Handle of the training result.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the file name is valid (write permission), the operator write_training_components returns the value
H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
write_training_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Module
Matching

7.2 Correlation-Based
clear_all_ncc_models ( )
T_clear_all_ncc_models ( )

Free the memory of all NCC models.


The operator clear_all_ncc_models frees the memory of all NCC models that were created by
create_ncc_model. After calling clear_all_ncc_models, no model can be used any longer.
Attention
clear_all_ncc_models exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. clear_all_ncc_models must not be used in any application.
Result
clear_all_ncc_models always returns H_MSG_TRUE.
Parallelization Information
clear_all_ncc_models is processed completely exclusively without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, write_ncc_model

Alternatives
clear_ncc_model
Module
Matching

clear_ncc_model ( Hlong ModelID )


T_clear_ncc_model ( const Htuple ModelID )

Free the memory of an NCC model.


The operator clear_ncc_model frees the memory of an NCC model that was created by
create_ncc_model. After calling clear_ncc_model, the model can no longer be used. The handle
ModelID becomes invalid.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong
Handle of the model.
Result
If the handle of the model is valid, the operator clear_ncc_model returns the value H_MSG_TRUE. If nec-
essary an exception is raised.
Parallelization Information
clear_ncc_model is processed completely exclusively without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, write_ncc_model
See also
clear_all_ncc_models
Module
Matching

create_ncc_model ( const Hobject Template, Hlong NumLevels,
double AngleStart, double AngleExtent, double AngleStep,
const char *Metric, Hlong *ModelID )

T_create_ncc_model ( const Hobject Template, const Htuple NumLevels,
const Htuple AngleStart, const Htuple AngleExtent,
const Htuple AngleStep, const Htuple Metric, Htuple *ModelID )

Prepare an NCC model for matching.


The operator create_ncc_model prepares a template, which is passed in the image Template, as an NCC
model used for matching using the normalized cross correlation (NCC). The ROI of the model is passed as the
domain of Template.
The model is generated using multiple image pyramid levels at multiple rotations on each level and is stored
in memory. The output parameter ModelID is a handle for this model, which is used in subsequent calls to
find_ncc_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as large
as possible because by this the time necessary to find the object is significantly reduced. On the other hand,
NumLevels must be chosen such that the model is still recognizable and contains a sufficient number of points
(at least eight) on the highest pyramid level. This can be checked using the domains of the output images of
gen_gauss_pyramid. If not enough model points are generated, the number of pyramid levels is reduced
internally until enough model points are found on the highest pyramid level. If this procedure would lead to a
model with no pyramid levels, i.e., if the number of model points is already too small on the lowest pyramid level,
create_ncc_model returns an error message. If NumLevels is set to ’auto’ or 0, create_ncc_model
determines the number of pyramid levels automatically. The automatically computed number of pyramid levels
can be queried using get_ncc_model_params. In rare cases, it might happen that create_ncc_model
determines a value for the number of pyramid levels that is too large or too small. If the number of pyramid lev-
els is chosen too large, the model may not be recognized in the image or it may be necessary to select very low
parameters for MinScore in find_ncc_model in order to find the model. If the number of pyramid levels is
chosen too small, the time required to find the model in find_ncc_model may increase. In these cases, the
number of pyramid levels should be selected by inspecting the output of gen_gauss_pyramid. Here, Mode
= ’constant’ and Scale = 0.5 should be used.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which the model
can occur in the image. Note that the model can only be found in this range of angles by find_ncc_model. The
parameter AngleStep determines the step length within the selected range of angles. Hence, if subpixel accuracy
is not specified in find_ncc_model, this parameter specifies the accuracy that is achievable for the angles in
find_ncc_model. AngleStep should be chosen based on the size of the object. Smaller models do not
possess many different discrete rotations in the image, and hence AngleStep should be chosen larger for smaller
models. If AngleExtent is not an integer multiple of AngleStep, AngleStep is modified accordingly.
The model is pre-generated for the selected angle range and stored in memory. The memory required to store the
model is proportional to the number of angle steps and the number of points in the model. Hence, if AngleStep
is too small or AngleExtent too big, it may happen that the model no longer fits into the (virtual) memory. In
this case, either AngleStep must be enlarged or AngleExtent must be reduced. In any case, it is desirable
that the model completely fits into the main memory, because this avoids paging by the operating system, and
hence the time to find the object will be much smaller. Since angles can be determined with subpixel resolution
by find_ncc_model, AngleStep ≥ 1° can be selected for models of a diameter smaller than about 200
pixels. If AngleStep = ’auto’ or 0 is selected, create_ncc_model automatically determines a suitable
angle step length based on the size of the model. The automatically computed angle step length can be queried
using get_ncc_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If Metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of find_ncc_model
will increase slightly in this case.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_ncc_model_origin.
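A typical model generation in C might look like the following sketch (a hypothetical fragment: the file name and ROI coordinates are illustrative, and the C signatures of read_image, gen_rectangle1, and reduce_domain are assumed to follow the usual HALCON/C conventions):

Hobject Image, ROI, Template;
Hlong   ModelID;

read_image(&Image, "model_image.tif");
/* Restrict the template to a rectangular ROI. */
gen_rectangle1(&ROI, 100, 150, 220, 300);
reduce_domain(Image, ROI, &Template);
/* NumLevels = 0 and AngleStep = 0.0 select the automatic settings;
 * the angle range corresponds to the default values -0.39 and 0.79. */
create_ncc_model(Template, 0, -0.39, 0.79, 0.0, "use_polarity", &ModelID);
/* ... use ModelID with find_ncc_model, later free it with clear_ncc_model ... */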
Parameter
. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image whose domain will be used to create the model.
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double / const char *
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0, 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity"}

. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong *
Handle of the model.
Result
If the parameters are valid, the operator create_ncc_model returns the value H_MSG_TRUE. If the parameter
NumLevels is chosen such that the model contains too few points, the error 8510 is raised.
Parallelization Information
create_ncc_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
find_ncc_model, get_ncc_model_params, clear_ncc_model, write_ncc_model,
set_ncc_model_origin
Alternatives
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
create_template_rot
Module
Matching

T_find_ncc_model ( const Hobject Image, const Htuple ModelID,
const Htuple AngleStart, const Htuple AngleExtent,
const Htuple MinScore, const Htuple NumMatches,
const Htuple MaxOverlap, const Htuple SubPixel,
const Htuple NumLevels, Htuple *Row, Htuple *Column, Htuple *Angle,
Htuple *Score )

Find the best matches of an NCC model in an image.


The operator find_ncc_model finds the best NumMatches instances of the NCC model ModelID in
the input image Image. The model must have been created previously by calling create_ncc_model or
read_ncc_model.
The position and rotation of the found instances of the model are returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the NCC model with
create_ncc_model. A different origin can be set with set_ncc_model_origin. Additionally, the score
of each found instance is returned in Score. The score is the normalized cross correlation of the template t(r, c)
and the image i(r, c):

$$\mathrm{ncc}(r,c) = \frac{1}{n} \sum_{(u,v) \in R} \frac{t(u,v) - m_t}{\sqrt{s_t^2}} \cdot \frac{i(r+u,\, c+v) - m_i(r,c)}{\sqrt{s_i^2(r,c)}}$$

Here, $n$ denotes the number of points in the template, $R$ denotes the domain (ROI) of the template, $m_t$ is the
mean gray value of the template

$$m_t = \frac{1}{n} \sum_{(u,v) \in R} t(u,v)$$

$s_t^2$ is the variance of the gray values of the template

$$s_t^2 = \frac{1}{n} \sum_{(u,v) \in R} \bigl(t(u,v) - m_t\bigr)^2$$

$m_i(r,c)$ is the mean gray value of the image at position $(r,c)$ over all points of the template (i.e., the
template points are shifted by $(r,c)$)

$$m_i(r,c) = \frac{1}{n} \sum_{(u,v) \in R} i(r+u,\, c+v)$$

and s_i^2(r, c) is the variance of the gray values of the image at position (r, c) over all points of the template

s_i^2(r,c) = \frac{1}{n} \sum_{(u,v) \in R} \left( i(r+u, c+v) - m_i(r,c) \right)^2

The NCC measures how well the template and image correspond at a particular point (r, c). It assumes values
between −1 and 1. The larger the absolute value of the correlation, the larger the degree of correspondence
between the template and image. A value of 1 means that the gray values in the image are a linear transformation
of the gray values in the template:

i(r + u, c + v) = a \cdot t(u, v) + b

where a > 0. Similarly, a value of −1 means that the gray values in the image are a linear transformation of the
gray values in the template with a < 0. Hence, in this case the template occurs with a reversed polarity in the
image. Because of the above property, the NCC is invariant to linear illumination changes.
The NCC as defined above is used if the NCC model has been created with Metric = ’use_polarity’. If the model
has been created with Metric = ’ignore_global_polarity’, the absolute value of ncc(r, c) is used as the score.
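The following plain C sketch (not part of the HALCON interface) illustrates how such a score could be computed
for a single position (r, c). The array layout, the rectangular template domain, and all names are hypothetical
simplifications of the formulas above; constant (zero-variance) regions would have to be handled separately.

#include <math.h>

/* Illustrative only: NCC score of a th x tw byte template at position (r, c)
   of an iw-pixel-wide byte image, mirroring the formulas above. The real
   operator only uses the points of the template domain R. */
double ncc_score (const unsigned char *tmpl, int th, int tw,
                  const unsigned char *img, int iw, int r, int c,
                  int ignore_global_polarity)
{
    int    u, v, n = th * tw;
    double mt = 0.0, mi = 0.0, st2 = 0.0, si2 = 0.0, cross = 0.0, score;

    /* mean gray value of the template and of the shifted image patch */
    for (u = 0; u < th; u++)
        for (v = 0; v < tw; v++)
        {
            mt += tmpl[u * tw + v];
            mi += img[(r + u) * iw + (c + v)];
        }
    mt /= n;
    mi /= n;

    /* variances s_t^2, s_i^2(r,c) and the cross term */
    for (u = 0; u < th; u++)
        for (v = 0; v < tw; v++)
        {
            double dt = tmpl[u * tw + v] - mt;
            double di = img[(r + u) * iw + (c + v)] - mi;
            st2   += dt * dt;
            si2   += di * di;
            cross += dt * di;
        }
    st2 /= n;
    si2 /= n;

    /* ncc(r,c) = (1/n) * sum dt * di / (sqrt(s_t^2) * sqrt(s_i^2(r,c))) */
    score = cross / (n * sqrt (st2) * sqrt (si2));
    return ignore_global_polarity ? fabs (score) : score;
}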
It should be noted that the NCC is very sensitive to occlusion and clutter as well as to nonlinear illumination
changes in the image. If a model should be found in the presence of occlusion, clutter, or nonlinear illumination
changes the search should be performed using the shape-based matching (see, e.g., create_shape_model).
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_ncc_model. A different origin set with set_ncc_model_origin is not taken into account here.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below).
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_ncc_model. In particular, this means that the angle ranges of the model and the search must truly
overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the re-
mainder of the paragraph are given in degrees, whereas they have to be specified in radians in find_ncc_model.
Hence, if the model, for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the
angle search space in find_ncc_model is, for example, set to AngleStart = 350◦ and AngleExtent =
20◦ , the model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ .
To find the model, in this example it would be necessary to select AngleStart = −10◦ .
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rotations
are found in the image. If the model has repeating structures it may happen that multiple instances with identical
rotations are found at similar positions in the image. The parameter MaxOverlap determines by what fraction
(i.e., a number between 0 and 1) two instances may at most overlap in order to consider them as different instances,
and hence to be returned separately. If two instances overlap each other by more than MaxOverlap only the
best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary
orientation (see smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances
may not overlap at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’false’, the model’s pose is only determined with pixel accuracy and with the angle resolution
that was specified with create_ncc_model. If SubPixel is set to ’true’, the position as well as the rotation
are determined with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This
mode costs almost no computation time and achieves a high accuracy. Hence, SubPixel should usually be set to
’true’.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the num-
ber of levels is clipped to the range given when the shape model was created with create_ncc_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_ncc_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. If the lowest pyramid level to use is chosen too large, it may happen that
the desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is not
specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model.
In this case, the lowest pyramid level to use must be set to a smaller value.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image in which the model should be found.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Htuple . Hlong
Handle of the model.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.8
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of instances of the model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum overlap of the instances of the model to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Subpixel accuracy.
Default Value : "true"
List of values : SubPixel ∈ {"false", "true"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the model.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the model.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the model.
Example (Syntax: HDevelop)

create_ncc_model (TemplateImage, ’auto’, rad(-45), rad(90), ’auto’,
                  ’use_polarity’, ModelID)
find_ncc_model (SearchImage, ModelID, rad(-45), rad(90), 0.7, 1,
0.5, ’true’, 0, Row, Column, Angle, Score)
vector_angle_to_rigid (0, 0, 0, Row, Column, Angle, HomMat2D)
affine_trans_pixel (HomMat2D, 0, 0, RowObject, ColumnObject)
disp_cross (WindowHandle, RowObject, ColumnObject, 10, 0)

Result
If the parameter values are correct, the operator find_ncc_model returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_ncc_model is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, set_ncc_model_origin
Possible Successors
clear_ncc_model
Alternatives
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models,
best_match_rot_mg
Module
Matching

get_ncc_model_origin ( Hlong ModelID, double *Row, double *Column )


T_get_ncc_model_origin ( const Htuple ModelID, Htuple *Row,
Htuple *Column )

Return the origin (reference point) of an NCC model.


The operator get_ncc_model_origin returns the origin (reference point) of the NCC model ModelID. The
origin is specified relative to the center of gravity of the domain (region) of the image that was used to create the
NCC model with create_ncc_model. Hence, an origin of (0,0) means that the center of gravity of the domain
of the model image is used as the origin. An origin of (-20,-40) means that the origin lies to the upper left of the
center of gravity.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong
Handle of the model.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row coordinate of the origin of the NCC model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column coordinate of the origin of the NCC model.


Result
If the handle of the model is valid, the operator get_ncc_model_origin returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_ncc_model_origin is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, set_ncc_model_origin
Possible Successors
find_ncc_model
See also
area_center
Module
Matching

get_ncc_model_params ( Hlong ModelID, Hlong *NumLevels,
                       double *AngleStart, double *AngleExtent, double *AngleStep,
                       char *Metric )

T_get_ncc_model_params ( const Htuple ModelID, Htuple *NumLevels,
                         Htuple *AngleStart, Htuple *AngleExtent, Htuple *AngleStep,
                         Htuple *Metric )

Return the parameters of an NCC model.


The operator get_ncc_model_params returns the parameters of the NCC model ModelID that were used to
create it using create_ncc_model. In particular, this output can be used to check the parameters NumLevels
and AngleStep if they were determined automatically during the model creation with create_ncc_model.
Furthermore, this output can be used to check the parameters if the model was read with read_ncc_model.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong
Handle of the model.
. NumLevels (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of pyramid levels.
. AngleStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double *
Smallest rotation of the pattern.
. AngleExtent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double *
Extent of the rotation angles.
Assertion : AngleExtent ≥ 0
. AngleStep (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double *
Step length of the angles (resolution).
Assertion : AngleStep ≥ 0
. Metric (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Match metric.
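A minimal C-syntax usage sketch (not taken from the original manual); the header name "HalconC.h" and the size
of the Metric buffer are assumptions and should be checked against your installation.

/* Sketch: inspect the values that were determined automatically (’auto’)
   when the model was created with create_ncc_model. */
#include <stdio.h>
#include "HalconC.h"          /* assumed HALCON/C header name */

void print_ncc_model_params (Hlong ModelID)
{
    Hlong  NumLevels;
    double AngleStart, AngleExtent, AngleStep;
    char   Metric[256];       /* buffer size chosen as an assumption */

    get_ncc_model_params (ModelID, &NumLevels, &AngleStart, &AngleExtent,
                          &AngleStep, Metric);
    printf ("NumLevels=%ld AngleStep=%g Metric=%s\n",
            (long) NumLevels, AngleStep, Metric);
}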
Result
If the handle of the model is valid, the operator get_ncc_model_params returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_ncc_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model
See also
find_ncc_model
Module
Matching


read_ncc_model ( const char *FileName, Hlong *ModelID )


T_read_ncc_model ( const Htuple FileName, Htuple *ModelID )

Read an NCC model from a file.


The operator read_ncc_model reads an NCC model, which has been written with write_ncc_model,
from the file FileName.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong *
Handle of the model.
Result
If the file name is valid, the operator read_ncc_model returns the value H_MSG_TRUE. If necessary an
exception is raised.
Parallelization Information
read_ncc_model is processed completely exclusively without parallelization.
Possible Successors
find_ncc_model
See also
create_ncc_model, clear_ncc_model
Module
Matching

set_ncc_model_origin ( Hlong ModelID, double Row, double Column )


T_set_ncc_model_origin ( const Htuple ModelID, const Htuple Row,
const Htuple Column )

Set the origin (reference point) of an NCC model.


The operator set_ncc_model_origin sets the origin (reference point) of the NCC model ModelID to a new
value. The origin is specified relative to the center of gravity of the domain (region) of the image that was used to
create the NCC model with create_ncc_model. Hence, an origin of (0,0) means that the center of gravity of
the domain of the model image is used as the origin. An origin of (-20,-40) means that the origin lies to the upper
left of the center of gravity.
Parameter

. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong
Handle of the model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double
Row coordinate of the origin of the NCC model.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double
Column coordinate of the origin of the NCC model.
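A short C-syntax sketch (not from the original manual) that moves the origin 20 rows up and 40 columns to the left
of the center of gravity and reads it back:

double Row, Column;

set_ncc_model_origin (ModelID, -20.0, -40.0);
get_ncc_model_origin (ModelID, &Row, &Column);   /* Row = -20, Column = -40 */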
Result
If the handle of the model is valid, the operator set_ncc_model_origin returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
set_ncc_model_origin is processed completely exclusively without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model
Possible Successors
find_ncc_model, get_ncc_model_origin


See also
area_center
Module
Matching

write_ncc_model ( Hlong ModelID, const char *FileName )


T_write_ncc_model ( const Htuple ModelID, const Htuple FileName )

Write an NCC model to a file.


The operator write_ncc_model writes an NCC model to the file FileName. The model can be read again
with read_ncc_model.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ncc_model ; Hlong
Handle of the model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
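A short C-syntax sketch (not from the original manual) of a write/read round trip; the file name is a placeholder:

Hlong RestoredID;

write_ncc_model (ModelID, "part_model.ncc");
read_ncc_model ("part_model.ncc", &RestoredID);
/* RestoredID can now be used with find_ncc_model like the original model. */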
Result
If the file name is valid (write permission), the operator write_ncc_model returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
write_ncc_model is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model
Module
Matching

7.3 Gray-Value-Based

adapt_template ( const Hobject Image, Hlong TemplateID )


T_adapt_template ( const Hobject Image, const Htuple TemplateID )

Adapting a template to the size of an image.


The operator adapt_template serves to adapt a template which has been created by create_template
to the size of an image. The operator adapt_template can be called before the template is used with images
of another size, or if the image used to create the template had another size. If it is not called explicitly it will be
called internally each time another image size is used. The contents of the image are irrelevant; only the
width of Image will be considered.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image which determines the size of the later matching.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
Result
If the parameter values are correct, the operator adapt_template returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
adapt_template is reentrant and processed without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template


Possible Successors
set_reference_template, best_match, fast_match, fast_match_mg,
set_offset_template, best_match_mg, best_match_pre_mg, best_match_rot,
best_match_rot_mg
Module
Matching

best_match ( const Hobject Image, Hlong TemplateID, double MaxError,
             const char *SubPixel, double *Row, double *Column, double *Error )

T_best_match ( const Hobject Image, const Htuple TemplateID,
               const Htuple MaxError, const Htuple SubPixel, Htuple *Row,
               Htuple *Column, Htuple *Error )

Searching the best matching of a template and an image.


The operator best_match performs a matching of the template of TemplateID and Image. Hereby the
template is moved over the points of Image so that it always lies completely inside Image. best_match
works similarly to fast_match, with the exception that each time a better match is found, the value of MaxError
is internally updated to a lower value to reduce the runtime.
If the parameter SubPixel is set to ’true’, the position is returned with subpixel accuracy. The matching
criterion (“displaced frame difference”) is defined as follows:
error[row, col] = \frac{\sum_{u,v} |Image[row - u, col - v] - TemplateID[u, v]|}{area(TemplateID)}

The runtime of the operator depends on the size of the domain of Image. Therefore it is important to restrict the
domain as far as possible, i.e. to apply the operator only in a very confined “region of interest”. The parameter
MaxError determines the maximum error which the searched position is allowed to have. The lower this
value is, the faster the operator runs.
Row and Column return the position of the best match, whereby Error indicates the average difference of the
grayvalues. If no position with an error below MaxError was found the position (0, 0) and a matching result of
255 for Error are returned. In this case MaxError has to be set larger.
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
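The following plain C sketch (not part of the HALCON interface) mirrors the matching criterion above for one
position. The rectangular template and all names are hypothetical simplifications; the real operator only uses the
points of the template domain and only positions where the template lies completely inside Image.

#include <math.h>

/* Illustrative only: displaced frame difference of a th x tw byte template
   at position (row, col) of an iw-pixel-wide byte image. */
double displaced_frame_difference (const unsigned char *img, int iw,
                                   const unsigned char *tmpl, int th, int tw,
                                   int row, int col)
{
    int    u, v;
    double sum = 0.0;

    for (u = 0; u < th; u++)
        for (v = 0; v < tw; v++)
            sum += fabs ((double) img[(row - u) * iw + (col - v)]
                         - (double) tmpl[u * tw + v]);

    return sum / (th * tw);   /* average gray value difference */
}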
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; (Htuple .) Hlong
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum average difference of the grayvalues.
Default Value : 20
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Subpixel accuracy in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Average divergence of the grayvalues of the best match.
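A C-syntax sketch of a typical call sequence (not taken from the original manual). The file names and the threshold
limits are placeholders, the model image is assumed to be a byte image, and the HALCON/C header is assumed to
be included.

Hobject ModelImage, Region, ReducedImage, SearchImage;
Hlong   TemplateID;
double  Row, Column, Error;

read_image (&ModelImage, "model_image");
threshold (ModelImage, &Region, 100.0, 255.0);        /* placeholder limits */
reduce_domain (ModelImage, Region, &ReducedImage);
create_template (ReducedImage, 255, 4, "sort", "original", &TemplateID);

read_image (&SearchImage, "search_image");
best_match (SearchImage, TemplateID, 20.0, "true", &Row, &Column, &Error);
clear_template (TemplateID);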


Result
If the parameter values are correct, the operator best_match returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
best_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, set_offset_template, set_reference_template,
adapt_template, draw_region, draw_rectangle1, reduce_domain
Alternatives
fast_match, fast_match_mg, best_match_mg, best_match_pre_mg, best_match_rot,
best_match_rot_mg, exhaustive_match, exhaustive_match_mg
Module
Matching

best_match_mg ( const Hobject Image, Hlong TemplateID, double MaxError,
                const char *SubPixel, Hlong NumLevels, Hlong WhichLevels,
                double *Row, double *Column, double *Error )

T_best_match_mg ( const Hobject Image, const Htuple TemplateID,
                  const Htuple MaxError, const Htuple SubPixel, const Htuple NumLevels,
                  const Htuple WhichLevels, Htuple *Row, Htuple *Column, Htuple *Error )

Searching the best grayvalue matches in a pyramid.


best_match_mg applies gray value matching using an image pyramid. best_match_mg works analogously
to best_match, but it is faster due to the use of a pyramid. Input is an image with an optionally reduced domain.
The parameter MaxError specifies the maximum error for the template matching. Using smaller values results in
a reduced runtime, but the pattern might be missed. The value of MaxError has to be set larger than for
best_match, because the error is often larger at higher levels of the pyramid.
SubPixel specifies whether the result is calculated with subpixel accuracy or not. NumLevels specifies the
number of resolution levels: a value of 1 results in an operator similar to best_match, i.e., only the original gray
values are used. For values larger than 1, the algorithm starts at the lowest resolution and searches for the position
with the lowest matching error. At the next higher resolution this position is refined. This is continued up to the
maximum resolution (WhichLevels = ’all’). As an alternative, the mode ’original’ can be used for WhichLevels.
In this case not only the position with the lowest error but all points below MaxError are analysed further at the
next higher resolution. This method is slower, but it is more stable, and the possibility of missing the correct
position is very low. In this case it is often possible to start with a lower resolution (a higher level in the pyramid,
i.e., a larger value for NumLevels), which leads to a reduced runtime. Besides the values ’all’ and ’original’ for
WhichLevels, you can also specify explicitly the pyramid level at which to switch between “match all” and
“best match”. Here 0 corresponds to ’original’ and NumLevels - 1 is equivalent to ’all’. A value in between is in
most cases a good compromise between speed and a stable detection. A larger value for WhichLevels results in
a reduced runtime, a smaller value results in a more stable detection. The value of NumLevels has to be equal to
or smaller than the value used to create the template.
The position of the found match is returned in Row and Column. The corresponding error is given in Error. If
no point below MaxError is found, a value of 255 for Error and 0 for Row and Column are returned. If the
desired object is missed (no object found or wrong position), you have to set MaxError higher or WhichLevels
lower. Check also whether the illumination has changed (see set_offset_template).
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.


. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Exactness in subpixels in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of the used resolution levels.
Default Value : 4
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. WhichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Resolution level up to which the method “best match” is used.
Default Value : 2
Suggested values : WhichLevels ∈ {"all", "original", 0, 1, 2, 3, 4, 5, 6}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Average divergence of the grayvalues in the best match.
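A short C-syntax sketch (not from the original manual), assuming TemplateID was created with at least 4 pyramid
levels; WhichLevels = 2 switches from “match all” to “best match” at pyramid level 2.

double Row, Column, Error;

best_match_mg (SearchImage, TemplateID, 30.0, "false", 4, 2,
               &Row, &Column, &Error);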
Result
If the parameter values are correct, the operator best_match_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
best_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain, set_reference_template, set_offset_template
Alternatives
fast_match, fast_match_mg, best_match, best_match_pre_mg, best_match_rot,
best_match_rot_mg, exhaustive_match, exhaustive_match_mg
Module
Matching

best_match_pre_mg ( const Hobject ImagePyramid, Hlong TemplateID,
                    double MaxError, const char *SubPixel, Hlong NumLevels,
                    Hlong WhichLevels, double *Row, double *Column, double *Error )

T_best_match_pre_mg ( const Hobject ImagePyramid,
                      const Htuple TemplateID, const Htuple MaxError, const Htuple SubPixel,
                      const Htuple NumLevels, const Htuple WhichLevels, Htuple *Row,
                      Htuple *Column, Htuple *Error )

Searching the best grayvalue matches in a pre generated pyramid.


best_match_pre_mg applies gray value matching using an image pyramid. best_match_pre_mg works
analogously to best_match_mg, but it makes use of a pre-calculated pyramid, which has to be generated
beforehand using gen_gauss_pyramid. This reduces the runtime if more than one match has to be performed
or if the pyramid is used for other purposes as well. The pyramid has to be generated using the zooming factor 0.5
and the mode ’constant’.


Parameter
. ImagePyramid (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; Hobject : byte
Image pyramid inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Exactness in subpixels in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of the used resolution levels.
Default Value : 3
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. WhichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Resolution level up to which the method “best match” is used.
Default Value : "original"
Suggested values : WhichLevels ∈ {"all", "original", 0, 1, 2, 3, 4, 5, 6}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Average divergence of the grayvalues in the best match.
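A C-syntax sketch (not from the original manual): the pyramid is generated once with the required mode ’constant’
and zooming factor 0.5 and then reused. The parameter order assumed for gen_gauss_pyramid should be checked
in its own reference entry.

Hobject ImagePyramid;
double  Row, Column, Error;

/* assumed parameter order: Image, ImagePyramid, Mode, Scale */
gen_gauss_pyramid (SearchImage, &ImagePyramid, "constant", 0.5);
best_match_pre_mg (ImagePyramid, TemplateID, 30.0, "false", 3, 0,
                   &Row, &Column, &Error);    /* WhichLevels = 0 : ’original’ */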
Result
If the parameter values are correct, the operator best_match_pre_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
best_match_pre_mg is reentrant and processed without parallelization.
Possible Predecessors
gen_gauss_pyramid, create_template, read_template, adapt_template, draw_region,
draw_rectangle1, reduce_domain, set_reference_template
Alternatives
fast_match, fast_match_mg, exhaustive_match, exhaustive_match_mg
Module
Matching

best_match_rot ( const Hobject Image, Hlong TemplateID,
                 double AngleStart, double AngleExtend, double MaxError,
                 const char *SubPixel, double *Row, double *Column, double *Angle,
                 double *Error )

T_best_match_rot ( const Hobject Image, const Htuple TemplateID,
                   const Htuple AngleStart, const Htuple AngleExtend,
                   const Htuple MaxError, const Htuple SubPixel, Htuple *Row,
                   Htuple *Column, Htuple *Angle, Htuple *Error )

Searching the best matching of a template and an image with rotation.


The operator best_match_rot performs a matching of the template of TemplateID and Image. It works
similarly to best_match with the extension that the pattern can be rotated. The parameters AngleStart
and AngleExtend define the maximum rotation of the pattern: AngleStart specifies the maximum
counterclockwise rotation and AngleExtend the maximum clockwise rotation relative to this angle. Both values
have to be smaller than or equal to the values used for the creation of the pattern (see create_template_rot).
In addition to the results of best_match, best_match_rot returns the rotation angle of the pattern in Angle
(in radians). The accuracy of this angle depends on the parameter AngleStep of create_template_rot. In
the case of SubPixel = ’true’ the position and the angle are calculated with subpixel accuracy.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; (Htuple .) Hlong
Template number.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest Rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtend (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Maximum positive Extension of AngleStart.
Default Value : 0.79
Suggested values : AngleExtend ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtend > 0
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Subpixel accuracy in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column position of the best match.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Rotation angle of pattern.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Average divergence of the grayvalues of the best match.
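A C-syntax sketch of a typical call sequence (not from the original manual); ReducedImage and SearchImage are
assumed to be prepared as described for create_template, and the angle values are placeholders.

Hlong  TemplateID;
double Row, Column, Angle, Error;

create_template_rot (ReducedImage, 4, -0.39, 0.79, 0.0982,
                     "sort", "original", &TemplateID);
best_match_rot (SearchImage, TemplateID, -0.39, 0.79, 30.0, "true",
                &Row, &Column, &Angle, &Error);
clear_template (TemplateID);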
Result
If the parameter values are correct, the operator best_match_rot returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
best_match_rot is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template_rot, read_template, set_offset_template,
set_reference_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Alternatives
best_match_rot_mg
See also
best_match, best_match_mg


Module
Matching

best_match_rot_mg ( const Hobject Image, Hlong TemplateID,
                    double AngleStart, double AngleExtend, double MaxError,
                    const char *SubPixel, Hlong NumLevels, double *Row, double *Column,
                    double *Angle, double *Error )

T_best_match_rot_mg ( const Hobject Image, const Htuple TemplateID,
                      const Htuple AngleStart, const Htuple AngleExtend,
                      const Htuple MaxError, const Htuple SubPixel, const Htuple NumLevels,
                      Htuple *Row, Htuple *Column, Htuple *Angle, Htuple *Error )

Searching the best matching of a template and a pyramid with rotation.


The operator best_match_rot_mg performs a matching of the template of TemplateID and Image.
It works similarly to best_match_mg with the extension that the pattern can be rotated, analogously to
best_match_rot. The parameters AngleStart and AngleExtend define the maximum rotation of the
pattern: AngleStart specifies the maximum counterclockwise rotation and AngleExtend the maximum
clockwise rotation relative to this angle. Both values have to be smaller than or equal to the values used for the
creation of the pattern (see create_template_rot). In addition to the results of best_match_mg,
best_match_rot_mg returns the rotation angle of the pattern in Angle (in radians).
The value of MaxError must be set larger than for the operator best_match_rot, because the error is often
larger at higher levels of the pyramid.
In the case of SubPixel = ’true’ the position and the angle are calculated with subpixel accuracy.
The value of NumLevels has to be equal to or smaller than the value used to create the template.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; (Htuple .) Hlong
Template number.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest Rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtend (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Maximum positive Extension of AngleStart.
Default Value : 0.79
Suggested values : AngleExtend ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtend > 0
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum average difference of the grayvalues.
Default Value : 40
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 1
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Subpixel accuracy in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of the used resolution levels.
Default Value : 3
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column position of the best match.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Rotation angle of pattern.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Average divergence of the grayvalues of the best match.
Result
If the parameter values are correct, the operator best_match_rot_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
best_match_rot_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template_rot, set_reference_template, set_offset_template,
adapt_template, draw_region, draw_rectangle1, reduce_domain
Alternatives
best_match_rot, best_match_mg
See also
fast_match
Module
Matching

clear_all_templates ( )
T_clear_all_templates ( )

Deallocation of the memory of all templates.


The operator clear_all_templates deallocates the memory of all templates that were created by
create_template or create_template_rot. After calling clear_all_templates, no template
can be used any longer.
Attention
clear_all_templates exists solely for the purpose of implementing the “reset program” functionality in
HDevelop. clear_all_templates must not be used in any application.
Result
clear_all_templates always returns H_MSG_TRUE.
Parallelization Information
clear_all_templates is processed completely exclusively without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template, write_template
Alternatives
clear_template
Module
Matching

clear_template ( Hlong TemplateID )


T_clear_template ( const Htuple TemplateID )

Deallocation of the memory of a template.


The operator clear_template deallocates the memory of a template which has been created by
create_template or create_template_rot. After execution of the operator clear_template
the template can no longer be used. The value of TemplateID is not valid. However, the number can be used
again by further calls of create_template or create_template_rot.
Parameter
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
Result
If the number of the template is valid, the operator clear_template returns the value H_MSG_TRUE. If
necessary an exception handling will be raised.
Parallelization Information
clear_template is processed completely exclusively without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template, write_template
See also
clear_all_templates
Module
Matching

create_template ( const Hobject Template, Hlong FirstError,
                  Hlong NumLevel, const char *Optimize, const char *GrayValues,
                  Hlong *TemplateID )

T_create_template ( const Hobject Template, const Htuple FirstError,
                    const Htuple NumLevel, const Htuple Optimize, const Htuple GrayValues,
                    Htuple *TemplateID )

Preparing a pattern for template matching.


The operator create_template preprocesses a pattern (Template), which is passed as an image, for the
template matching. After the transformation, a number (TemplateID) is assigned to the template for being used
in the further process. The shape and the size of Template can be chosen arbitrarily. You have to be aware that
the matching is only applied to that part of an image where Template fits completely into the image.
The template has to be chosen such that it contains no pixels of the (changing) background. Here you can make
use of the arbitrary shape of a template, which is not restricted to a rectangle. To create a template you can use
segmentation operators like threshold. In the case of subpixel-accurate matching, Template has in addition
to be one pixel smaller than the pattern (i.e., a one-pixel border to the changing background). This can be done,
e.g., by applying the operator erosion_circle.
The parameter NumLevel specifies the number of pyramid levels (NumLevel = 1 means only original gray
values) which can be used for matching. The number of levels used later for matching will be below or equal to
this value. If the pattern becomes too small due to zooming, the maximum number of pyramid levels is
automatically reduced (without error message).
The parameter GrayValues defines whether the original gray values (’original’, ’normalized’) or the edge
amplitude (’gradient’, ’sobel’) is used. With ’original’ the sum of the differences is used as a feature, which is very
stable and fast if there is no change in illumination. ’normalized’ is used if the illumination changes. This method
is a bit slower and not quite as stable. If there is no change in illumination the mode ’original’ should be used. The
edge amplitude is another method to be invariant to changes in illumination. The disadvantage is the increased
execution time and the higher sensitivity to changes in the shape of the pattern. The mode ’gradient’ is slightly
faster but more sensitive to noise.
The maximum error for matching typically has to be chosen higher when using the edge amplitude. The mode
chosen by GrayValues automatically leads to calling the appropriate filter during matching, if necessary. As an
As an alternative to the gradient approach the operator set_offset_template can be used, if the change in
illumination is known.
The parameter Optimize specifies whether the pattern has to be optimized for runtime. This optimization results
in a longer time to create the template but reduces the time for matching. In addition, the optimization leads to a
more stable matching, i.e., the possibility of missing good matches is reduced. The optimization process selects the most
stable and significant gray values to be tested first during the matching process. Using this technique a wrong
match can be eliminated very early.
The reference position for the template is its center of gravity. I.e. if you apply the template to the orig-
inal image the center of gravity is returned. This default reference can be adapted using the operator
set_reference_template.
In subpixel mode a special position correction is calculated, which is added after each matching: the template is
applied to the original image and the difference between the found position and the center of gravity is used as a
correction vector. This is important for patterns in a textured context or for asymmetric patterns. For most
templates this correction vector is close to zero.
If the pattern is no longer used, it has to be freed by the operator clear_template in order to deallocate the
memory.
Before the use of the template, which is stored independently of the image size, it can be adapted explicitly to a
specific image size by using adapt_template.
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Input image whose domain will be processed for the pattern matching.
. FirstError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Not yet in use.
Default Value : 255
List of values : FirstError ∈ {255}
. NumLevel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximal number of pyramid levels.
Default Value : 4
List of values : NumLevel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Optimize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Kind of optimizing.
Default Value : "sort"
List of values : Optimize ∈ {"none", "sort"}
. GrayValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Kind of grayvalues.
Default Value : "original"
List of values : GrayValues ∈ {"original", "normalized", "gradient", "sobel"}
. TemplateID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong *
Template number.
Result
If the parameters are valid, the operator create_template returns the value H_MSG_TRUE. If necessary an
exception handling will be raised.
Parallelization Information
create_template is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
adapt_template, set_reference_template, clear_template, write_template,
set_offset_template, best_match, best_match_mg, fast_match, fast_match_mg
Alternatives
create_template_rot, read_template
Module
Matching


create_template_rot ( const Hobject Template, Hlong NumLevel,
                      double AngleStart, double AngleExtend, double AngleStep,
                      const char *Optimize, const char *GrayValues, Hlong *TemplateID )

T_create_template_rot ( const Hobject Template, const Htuple NumLevel,
                        const Htuple AngleStart, const Htuple AngleExtend,
                        const Htuple AngleStep, const Htuple Optimize,
                        const Htuple GrayValues, Htuple *TemplateID )

Preparing a pattern for template matching with rotation.


The operator create_template_rot preprocesses a pattern, which is passed as an image, for the template
matching. As an extension to create_template, the matching can be applied to rotated patterns. The
parameters AngleStart and AngleExtend define the maximum rotation of the pattern: AngleStart
specifies the maximum counterclockwise rotation and AngleExtend the maximum clockwise rotation relative
to this angle. Therefore AngleExtend has to be smaller than 2π. With the parameter AngleStep the maximum
angle resolution (on the highest resolution level) can be specified.
You have to be aware that all possible rotations are calculated beforehand to reduce the runtime during matching.
This leads to a higher execution time for create_template_rot and high memory requirements for the
template. The amount of memory depends on the parameters AngleExtend and AngleStep. The number of
pyramid levels can be neglected. If A is the number of pixels of Template, the memory M needed for the
template in bytes is approximately

M = \frac{A \cdot 12 \cdot AngleExtend}{AngleStep}
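For example, a template with A = 10000 pixels (100 × 100), AngleExtend = 0.79 and AngleStep = 0.0982
requires roughly M = 10000 · 12 · 0.79 / 0.0982 ≈ 9.7 · 10^5 bytes, i.e., about 1 MB.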

After the transformation, a number (TemplateID) is assigned to the template for being used in the further
process.
A description of the other parameters can be found at the operator create_template.
Attention
You have to be aware that, depending on the resolution, a large number of precalculated patterns has to be
created, which might result in a large amount of memory being needed.
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Input image whose domain will be processed for the pattern matching.
. NumLevel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximal number of pyramid levels.
Default Value : 4
List of values : NumLevel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Smallest Rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtend (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Maximum positive Extension of AngleStart.
Default Value : 0.79
Suggested values : AngleExtend ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtend > 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Step rate (angle precision) of matching.
Default Value : 0.0982
Suggested values : AngleStep ∈ {0.3927, 0.1963, 0.0982, 0.0491, 0.0245}
Restriction : AngleStep > 0
. Optimize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Kind of optimizing.
Default Value : "sort"
List of values : Optimize ∈ {"none", "sort"}


. GrayValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Kind of grayvalues.
Default Value : "original"
List of values : GrayValues ∈ {"original", "normalized", "gradient", "sobel"}
. TemplateID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong *
Template number.
Result
If the parameters are valid, the operator create_template_rot returns the value H_MSG_TRUE. If neces-
sary an exception handling will be raised.
Parallelization Information
create_template_rot is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
best_match_rot, best_match_rot_mg, adapt_template, set_reference_template,
clear_template, set_offset_template, write_template
Alternatives
create_template
Module
Matching

fast_match ( const Hobject Image, Hobject *Matches, Hlong TemplateID,
             double MaxError )

T_fast_match ( const Hobject Image, Hobject *Matches,
               const Htuple TemplateID, const Htuple MaxError )

Searching all good matches of a template and an image.


The operator fast_match performs a matching of the template of TemplateID and Image. Hereby the
template will be moved over the points of Image so that the template always lies completely inside of Image.
The matching criterion (“displaced frame difference”) is defined as follows:
error[row, col] = \frac{\sum_{u,v} |Image[row - u, col - v] - TemplateID[u, v]|}{area(TemplateID)}

The difference between fast_match and exhaustive_match is that the matching for one position is
stopped if the error is too high. This leads to a reduced runtime, but one might miss correct matches. The runtime of
the operator depends mainly on the size of the domain of Image. Therefore it is important to restrict the domain
as far as possible, i.e. to apply the operator only in a very confined “region of interest”. The parameter MaxError
determines the maximal error which the searched position is allowed to show. The lower this value is, the faster
the operator runs.
All points which show a matching error smaller than MaxError will be returned in the output region Matches.
This region can be used for further processing. For example by using a connection and best_match to find all
the matching objects. If no point has a match error below MaxError the empty region (i.e no points) is returned.
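A minimal C call sequence might, for example, look as follows. This sketch is not part of the original operator
description: the image file "board", the ROI coordinates, the template file "pattern.tpl", and the signatures of
read_image, gen_rectangle1, reduce_domain, and connection are assumptions used only for illustration.

#include "HalconC.h"

void find_pattern_candidates (void)
{
  Hobject image, roi, search_img, matches, candidates;
  Hlong   template_id;

  /* template created earlier and stored with write_template (assumed file) */
  read_template ("pattern.tpl", &template_id);
  read_image (&image, "board");                        /* assumed image file */

  /* restrict the search to a region of interest to keep the runtime low */
  gen_rectangle1 (&roi, 50.0, 50.0, 400.0, 500.0);
  reduce_domain (image, roi, &search_img);

  /* all positions with an average gray value error below 20 */
  fast_match (search_img, &matches, template_id, 20.0);

  /* one connected component per match candidate for further processing */
  connection (matches, &candidates);
}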
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte


Input image inside of which the pattern has to be found.
. Matches (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
All points whose error lies below a certain threshold.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.


. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double


Maximal average difference of the gray values.
Default Value : 20
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Result
If the parameter values are correct, the operator fast_match returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
fast_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Possible Successors
connection, best_match
Alternatives
best_match, best_match_mg, fast_match_mg, exhaustive_match, exhaustive_match_mg
Module
Matching

fast_match_mg ( const Hobject Image, Hobject *Matches,


Hlong TemplateID, double MaxError, Hlong NumLevel )

T_fast_match_mg ( const Hobject Image, Hobject *Matches,


const Htuple TemplateID, const Htuple MaxError,
const Htuple NumLevel )

Searching all good grayvalue matches in a pyramid.


The operator fast_match_mg, like the operator fast_match, performs a matching of the template of
TemplateID and Image. In contrast to fast_match, however, the search for good matches starts in scaled-
down images (pyramid). The number of levels of the pyramid is determined by NumLevel. The value 1
indicates that no pyramid is used; in this case the operator fast_match_mg is equivalent to the operator
fast_match. The value 2 triggers the search in the image with half the frame size. All points showing a
small enough error in the scaled-down image (error smaller than MaxError) are refined at the corresponding
positions in the original image (Image).
The runtime of the matching depends on the parameter MaxError: the larger the value, the longer the processing
time, because more points of the pattern have to be tested. If MaxError is too low, the pattern will not be found.
The value therefore has to be optimized for every application.
NumLevel indicates the number of levels of the pyramid, including the original image. Optionally, a second value
can be given. This value specifies the number (0..n) of the lowest level that is used for the matching. The region
found up to this level is then zoomed to the size of the original level. This can be used to reduce the runtime in
cases where the accuracy does not have to be as high.
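For illustration only, a hedged sketch of the single-value call follows; the template file, the image file, and the
signatures of read_image and connection are assumptions, and passing the optional second value of NumLevel
would require the tuple version T_fast_match_mg.

#include "HalconC.h"

void pyramid_match (void)
{
  Hobject image, matches, candidates;
  Hlong   template_id;

  read_template ("pattern.tpl", &template_id);   /* assumed template file */
  read_image (&image, "board");                  /* assumed image file    */

  /* 3 pyramid levels: candidates found on the coarsest level are
     refined down to the resolution of the original image           */
  fast_match_mg (image, &matches, template_id, 30.0, 3);

  connection (matches, &candidates);
}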
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte


Input image inside of which the pattern has to be found.
. Matches (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
All points which have an error below a certain threshold.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; (Htuple .) Hlong
Template number.


. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double


Maximal average difference of the gray values.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. NumLevel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of levels in the pyramid.
Default Value : 3
List of values : NumLevel ∈ {1, 2, 3, 4, 5, 6, 7, 8}
Result
If the parameter values are correct, the operator fast_match_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
fast_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, adapt_template, draw_region, draw_rectangle1,
reduce_domain
Alternatives
best_match, best_match_mg, fast_match, exhaustive_match, exhaustive_match_mg
Module
Matching

read_template ( const char *FileName, Hlong *TemplateID )


T_read_template ( const Htuple FileName, Htuple *TemplateID )

Reading a template from file.


The operator read_template reads a matching template from file which has been written with
write_template.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *


File name.
. TemplateID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong *
Template number.
Result
If the file name is valid, the operator read_template returns the value H_MSG_TRUE. If necessary an excep-
tion handling will be raised.
Parallelization Information
read_template is processed completely exclusively without parallelization.
Possible Successors
adapt_template, set_reference_template, set_offset_template, best_match,
fast_match, best_match_rot
Module
Matching


set_offset_template ( Hlong TemplateID, Hlong GrayOffset )


T_set_offset_template ( const Htuple TemplateID,
const Htuple GrayOffset )

Gray value offset for template.


set_offset_template adds a gray value offset to the template to compensate for gray value changes in the
image. The parameter GrayOffset specifies a difference relative to the gray values of the pattern when it was
created with create_template (not relative to the last call of set_offset_template). The value of
GrayOffset has to be chosen according to the gray values of the image: a brighter image requires a positive
value, a darker image a negative value. set_offset_template has to be called each time the gray values of
the image change. The gray values can be measured in a reference area using intensity or
min_max_gray.
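A hedged sketch of this workflow follows; the reference rectangle, the signatures of gen_rectangle1 and
intensity, and the stored mean gray value at creation time are assumptions used only for illustration.

#include "HalconC.h"

void compensate_brightness (Hobject image, Hlong template_id,
                            double mean_at_creation)
{
  Hobject ref_area;
  double  mean, deviation;

  /* reference area whose brightness follows the global illumination */
  gen_rectangle1 (&ref_area, 10.0, 10.0, 60.0, 60.0);
  intensity (ref_area, image, &mean, &deviation);

  /* offset relative to the gray values at template creation time */
  set_offset_template (template_id, (Hlong) (mean - mean_at_creation));
}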
Parameter

. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong


Template number.
. GrayOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Offset of gray values.
Default Value : 0
Suggested values : GrayOffset ∈ {-10, -5, -2, -1, 0, 1, 2, 5, 10}
Typical range of values : -255 ≤ GrayOffset ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Result
If the parameter values are correct, the operator set_offset_template returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
set_offset_template is reentrant and processed without parallelization.
Possible Predecessors
create_template, adapt_template, read_template
Possible Successors
best_match, best_match_mg, best_match_rot, fast_match, fast_match_mg
Module
Matching

set_reference_template ( Hlong TemplateID, double Row, double Column )


T_set_reference_template ( const Htuple TemplateID, const Htuple Row,
const Htuple Column )

Define reference position for a matching template.


set_reference_template allows a new reference position to be defined for a template. By default, after
calling create_template or create_template_rot, the center of gravity of the template is used. Using
set_reference_template, the reference position can be redefined. If the center of gravity is used as the
reference, the vector (0, 0) is returned after matching for a null translation of the pattern relative to the image.
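For illustration, a hedged fragment follows; the coordinates are assumptions and might, e.g., correspond to the
upper left corner of the ROI used to create the template.

#include "HalconC.h"

void redefine_reference_point (Hlong template_id)
{
  /* by default the center of gravity of the template region is the
     reference; here it is moved to an assumed corner point          */
  set_reference_template (template_id, 100.0, 150.0);

  /* positions returned by subsequent calls to best_match or
     fast_match now refer to this point                        */
}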
Parameter

. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong


Template number.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double
Reference position of template (row).
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double
Reference position of template (column).


Result
If the parameter values are correct, the operator set_reference_template returns the value
H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
set_reference_template is reentrant and processed without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template, adapt_template
Possible Successors
best_match, best_match_mg, best_match_rot, fast_match, fast_match_mg
Module
Matching

write_template ( Hlong TemplateID, const char *FileName )


T_write_template ( const Htuple TemplateID, const Htuple FileName )

Writing a template to file.


The operator write_template writes a matching template to file which can be read again with
read_template.
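For illustration, a hedged fragment storing and later restoring a template; the file name is an assumption.

#include "HalconC.h"

void store_and_restore_template (Hlong template_id)
{
  Hlong restored_id;

  write_template (template_id, "pattern.tpl");      /* assumed file name */

  /* e.g., at the start of a later run of the inspection program */
  read_template ("pattern.tpl", &restored_id);
}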
Parameter
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the file name is valid (permission to write), the operator write_template returns the value H_MSG_TRUE.
If necessary an exception handling will be raised.
Parallelization Information
write_template is reentrant and processed without parallelization.
Possible Predecessors
create_template, create_template_rot
Module
Matching

7.4 Shape-Based

clear_all_shape_models ( )
T_clear_all_shape_models ( )

Free the memory of all shape models.


The operator clear_all_shape_models frees the memory of all shape models that were created by
create_shape_model, create_scaled_shape_model, or create_aniso_shape_model. Af-
ter calling clear_all_shape_models, no model can be used any longer.
Attention
clear_all_shape_models exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. clear_all_shape_models must not be used in any application.
Result
clear_all_shape_models always returns H_MSG_TRUE.
Parallelization Information
clear_all_shape_models is processed completely exclusively without parallelization.


Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model, write_shape_model
Alternatives
clear_shape_model
Module
Matching

clear_shape_model ( Hlong ModelID )


T_clear_shape_model ( const Htuple ModelID )

Free the memory of a shape model.


The operator clear_shape_model frees the memory of a shape model that was created by
create_shape_model, create_scaled_shape_model, or create_aniso_shape_model. Af-
ter calling clear_shape_model, the model can no longer be used. The handle ModelID becomes invalid.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong
Handle of the model.
Result
If the handle of the model is valid, the operator clear_shape_model returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
clear_shape_model is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model, write_shape_model
See also
clear_all_shape_models
Module
Matching

create_aniso_shape_model ( const Hobject Template, Hlong NumLevels,


double AngleStart, double AngleExtent, double AngleStep,
double ScaleRMin, double ScaleRMax, double ScaleRStep,
double ScaleCMin, double ScaleCMax, double ScaleCStep,
const char *Optimization, const char *Metric, Hlong Contrast,
Hlong MinContrast, Hlong *ModelID )

T_create_aniso_shape_model ( const Hobject Template,


const Htuple NumLevels, const Htuple AngleStart,
const Htuple AngleExtent, const Htuple AngleStep,
const Htuple ScaleRMin, const Htuple ScaleRMax,
const Htuple ScaleRStep, const Htuple ScaleCMin,
const Htuple ScaleCMax, const Htuple ScaleCStep,
const Htuple Optimization, const Htuple Metric, const Htuple Contrast,
const Htuple MinContrast, Htuple *ModelID )

Prepare a shape model for anisotropic scale invariant matching.


The operator create_aniso_shape_model prepares a template, which is passed in the image Template,
as a shape model used for anisotropic scale invariant matching. The ROI of the model is passed as the domain of
Template.


The model is generated using multiple image pyramid levels and is stored in memory. If a complete pregeneration
of the model is selected (see below), the model is generated at multiple rotations and anisotropic scales (i.e.,
independent scales in the row and column direction) on each level. The output parameter ModelID is a handle
for this model, which is used in subsequent calls to find_aniso_shape_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as
large as possible because by this the time necessary to find the object is significantly reduced. On the
other hand, NumLevels must be chosen such that the model is still recognizable and contains a sufficient
number of points (at least four) on the highest pyramid level. This can be checked using the output of
inspect_shape_model. If not enough model points are generated, the number of pyramid levels is reduced
internally until enough model points are found on the highest pyramid level. If this procedure would lead to a
model with no pyramid levels, i.e., if the number of model points is already too small on the lowest pyramid level,
create_aniso_shape_model returns with an error message. If NumLevels is set to ’auto’ (or 0 for back-
wards compatibility), create_aniso_shape_model determines the number of pyramid levels automatically.
The automatically computed number of pyramid levels can be queried using get_shape_model_params. In
rare cases, it might happen that create_aniso_shape_model determines a value for the number of pyra-
mid levels that is too large or too small. If the number of pyramid levels is chosen too large, the model may not
be recognized in the image or it may be necessary to select very low parameters for MinScore or Greediness in
find_aniso_shape_model in order to find the model. If the number of pyramid levels is chosen too small,
the time required to find the model in find_aniso_shape_model may increase. In these cases, the number
of pyramid levels should be selected using the output of inspect_shape_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
find_aniso_shape_model. The parameter AngleStep determines the step length within the selected
range of angles. Hence, if subpixel accuracy is not specified in find_aniso_shape_model, this param-
eter specifies the accuracy that is achievable for the angles in find_aniso_shape_model. AngleStep
should be chosen based on the size of the object. Smaller models do not have many different discrete rotations
in the image, and hence AngleStep should be chosen larger for smaller models. If AngleExtent is not an
integer multiple of AngleStep, AngleStep is modified accordingly.
The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of possible
anisotropic scales of the model in the row and column direction. A scale of 1 in both scale factors corresponds to
the original size of the model. The parameters ScaleRStep and ScaleCStep determine the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in find_aniso_shape_model,
these parameters specify the accuracy that is achievable for the scales in find_aniso_shape_model. Like
AngleStep, ScaleRStep and ScaleCStep should be chosen based on the size of the object. If the respective
range of scales is not an integer multiple of ScaleRStep and ScaleCStep, ScaleRStep and ScaleCStep
are modified accordingly.
Note that the transformations are treated internally such that the scalings are applied first, followed by the rotation.
Therefore, the model should usually be aligned such that it appears horizontally or vertically in the model image.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle and scale range and stored in memory. The memory required to store the model is proportional to
the number of angle steps, the number of scale steps, and the number of points in the model. Hence, if
AngleStep, ScaleRStep, or ScaleCStep are too small or AngleExtent or the range of scales are
too big, it may happen that the model no longer fits into the (virtual) memory. In this case, AngleStep,
ScaleRStep, or ScaleCStep must be enlarged or AngleExtent or the range of scales must be re-
duced. In any case, it is desirable that the model completely fits into the main memory, because this avoids
paging by the operating system, and hence the time to find the object will be much smaller. Since an-
gles can be determined with subpixel resolution by find_aniso_shape_model, AngleStep ≥ 1◦ and
ScaleRStep, ScaleCStep ≥ 0.02 can be selected for models of a diameter smaller than about 200 pixels.
If AngleStep = ’auto’ or ScaleRStep, ScaleCStep = ’auto’ (or 0 for backwards compatibility in both
cases) is selected, create_aniso_shape_model automatically determines a suitable angle or scale step
length, respectively, based on the size of the model. The automatically computed angle and scale step lengths can
be queried using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_aniso_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_aniso_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_aniso_shape_model automatically determines the reduction of
the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_aniso_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
nevertheless three values must be specified in Contrast. In this case, the first two values can simply be set
to identical values. The effect of this parameter can be checked in advance with inspect_shape_model.
If Contrast is set to ’auto’, create_aniso_shape_model determines the three above described values
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfying. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_aniso_shape_model.
With MinContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by find_aniso_shape_model. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter Metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine MinContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image MinContrast should be set to 17. Obviously,
MinContrast must be smaller than Contrast. If the model should be recognized in very low contrast im-
ages, MinContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, MinContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
find_aniso_shape_model. If MinContrast is set to ’auto’, the minimum contrast is determined auto-
matically based on the noise in the model image. Consequently, an automatic determination only makes sense if
the image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If Metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime
of find_aniso_shape_model will increase slightly in this case. If Metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of find_aniso_shape_model increases significantly, it is usually better to create several models
that reflect the possible contrast variations of the object with create_aniso_shape_model, and to match
them simultaneously with find_aniso_shape_models. The above three metrics can only be applied to
single-channel images. If a multichannel image is used as the model image or as the search image only the first
channel will be used (and no error message will be returned). If Metric = ’ignore_color_polarity’, the model
is found even if the color contrast changes locally. This is, for example, the case if parts of the object can change
their color, e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels
the object is visible. In this mode, the runtime of find_aniso_shape_model can also increase significantly.
The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for Metric =
’ignore_color_polarity’ the number of channels in the model creation with create_aniso_shape_model
and in the search with find_aniso_shape_model can be different. This can, for example, be used to create
a model from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do
not need to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also
contain images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
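For illustration, a hedged call sequence following the possible predecessors listed further below (the image file,
the threshold values, and the contrast values are assumptions; NumLevels and the step length parameters are
passed as 0, the backwards-compatible form of ’auto’):

#include "HalconC.h"

void create_aniso_model (void)
{
  Hobject model_img, model_region, model_reduced;
  Hlong   model_id;

  read_image (&model_img, "model_image");             /* assumed file       */
  threshold (model_img, &model_region, 100.0, 255.0); /* assumed thresholds */
  reduce_domain (model_img, model_region, &model_reduced);

  /* rotations of roughly -22.5 to +22.5 degrees, independent row and
     column scaling of +/- 10 %; step lengths and NumLevels automatic  */
  create_aniso_shape_model (model_reduced, 0, -0.39, 0.79, 0.0,
                            0.9, 1.1, 0.0, 0.9, 1.1, 0.0,
                            "auto", "use_polarity", 30, 10, &model_id);

  /* ... calls to find_aniso_shape_model would follow here ... */
  clear_shape_model (model_id);
}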
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image whose domain will be used to create the model.
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong / const char *
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double / const char *
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. ScaleRMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Minimum scale of the pattern in the row direction.
Default Value : 0.9
Suggested values : ScaleRMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleRMin > 0


. ScaleRMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double


Maximum scale of the pattern in the row direction.
Default Value : 1.1
Suggested values : ScaleRMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleRMax ≥ ScaleRMin
. ScaleRStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / const char *
Scale step length (resolution) in the row direction.
Default Value : "auto"
Suggested values : ScaleRStep ∈ {"auto", 0.01, 0.02, 0.05, 0.1, 0.15, 0.2}
Restriction : ScaleRStep ≥ 0
. ScaleCMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Minimum scale of the pattern in the column direction.
Default Value : 0.9
Suggested values : ScaleCMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleCMin > 0
. ScaleCMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Maximum scale of the pattern in the column direction.
Default Value : 1.1
Suggested values : ScaleCMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleCMax ≥ ScaleCMin
. ScaleCStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / const char *
Scale step length (resolution) in the column direction.
Default Value : "auto"
Suggested values : ScaleCStep ∈ {"auto", 0.01, 0.02, 0.05, 0.1, 0.15, 0.2}
Restriction : ScaleCStep ≥ 0
. Optimization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Kind of optimization and optionally method used for generating the model.
Default Value : "auto"
List of values : Optimization ∈ {"auto", "none", "point_reduction_low", "point_reduction_medium",
"point_reduction_high", "pregeneration", "no_pregeneration"}
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) Hlong / const char *
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum
size of the object parts.
Default Value : "auto"
Suggested values : Contrast ∈ {"auto", "auto_contrast", "auto_contrast_hyst", "auto_min_size", 10, 20,
30, 40, 60, 80, 100, 120, 140, 160}
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) Hlong / const char *
Minimum contrast of the objects in the search images.
Default Value : "auto"
Suggested values : MinContrast ∈ {"auto", 1, 2, 3, 5, 7, 10, 20, 30, 40}
Restriction : MinContrast < Contrast
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; (Htuple .) Hlong *
Handle of the model.
Result
If the parameters are valid, the operator create_aniso_shape_model returns the value H_MSG_TRUE. If
necessary an exception is raised. If the parameters NumLevels and Contrast are chosen such that the model
contains too few points, the error 8510 is raised.
Parallelization Information
create_aniso_shape_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold


Possible Successors
find_aniso_shape_model, find_aniso_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_scaled_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching

create_scaled_shape_model ( const Hobject Template, Hlong NumLevels,


double AngleStart, double AngleExtent, double AngleStep,
double ScaleMin, double ScaleMax, double ScaleStep,
const char *Optimization, const char *Metric, Hlong Contrast,
Hlong MinContrast, Hlong *ModelID )

T_create_scaled_shape_model ( const Hobject Template,


const Htuple NumLevels, const Htuple AngleStart,
const Htuple AngleExtent, const Htuple AngleStep,
const Htuple ScaleMin, const Htuple ScaleMax, const Htuple ScaleStep,
const Htuple Optimization, const Htuple Metric, const Htuple Contrast,
const Htuple MinContrast, Htuple *ModelID )

Prepare a shape model for scale invariant matching.


The operator create_scaled_shape_model prepares a template, which is passed in the image Template,
as a shape model used for scale invariant matching. The ROI of the model is passed as the domain of Template.
The model is generated using multiple image pyramid levels and is stored in memory. If a complete pre-
generation of the model is selected (see below), the model is generated at multiple rotations and scales on
each level. The output parameter ModelID is a handle for this model, which is used in subsequent calls to
find_scaled_shape_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as
large as possible because by this the time necessary to find the object is significantly reduced. On the
other hand, NumLevels must be chosen such that the model is still recognizable and contains a sufficient
number of points (at least four) on the highest pyramid level. This can be checked using the output of
inspect_shape_model. If not enough model points are generated, the number of pyramid levels is re-
duced internally until enough model points are found on the highest pyramid level. If this procedure would
lead to a model with no pyramid levels, i.e., if the number of model points is already too small on the low-
est pyramid level, create_scaled_shape_model returns with an error message. If NumLevels is set
to ’auto’ (or 0 for backwards compatibility), create_scaled_shape_model determines the number of
pyramid levels automatically. The automatically computed number of pyramid levels can be queried using
get_shape_model_params. In rare cases, it might happen that create_scaled_shape_model deter-
mines a value for the number of pyramid levels that is too large or too small. If the number of pyramid levels is cho-
sen too large, the model may not be recognized in the image or it may be necessary to select very low parameters for
MinScore or Greediness in find_scaled_shape_model in order to find the model. If the number of pyramid
levels is chosen too small, the time required to find the model in find_scaled_shape_model may increase.
In these cases, the number of pyramid levels should be selected using the output of inspect_shape_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
find_scaled_shape_model. The parameter AngleStep determines the step length within the selected
range of angles. Hence, if subpixel accuracy is not specified in find_scaled_shape_model, this parameter
specifies the accuracy that is achievable for the angles in find_scaled_shape_model. AngleStep should
be chosen based on the size of the object. Smaller models do not have many different discrete rotations in the
image, and hence AngleStep should be chosen larger for smaller models. If AngleExtent is not an integer
multiple of AngleStep, AngleStep is modified accordingly.
The parameters ScaleMin and ScaleMax determine the range of possible scales (sizes) of the model. A scale
of 1 corresponds to the original size of the model. The parameter ScaleStep determines the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in find_scaled_shape_model,
this parameter specifies the accuracy that is achievable for the scales in find_scaled_shape_model. Like
AngleStep, ScaleStep should be chosen based on the size of the object. If the range of scales is not an integer
multiple of ScaleStep, ScaleStep is modified accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected angle
and scale range and stored in memory. The memory required to store the model is proportional to the number
of angle steps, the number of scale steps, and the number of points in the model. Hence, if AngleStep or
ScaleStep are too small or AngleExtent or the range of scales are too big, it may happen that the model
no longer fits into the (virtual) memory. In this case, either AngleStep or ScaleStep must be enlarged or
AngleExtent or the range of scales must be reduced. In any case, it is desirable that the model completely fits
into the main memory, because this avoids paging by the operating system, and hence the time to find the object will
be much smaller. Since angles can be determined with subpixel resolution by find_scaled_shape_model,
AngleStep ≥ 1◦ and ScaleStep ≥ 0.02 can be selected for models of a diameter smaller than about 200
pixels. If AngleStep = ’auto’ or ScaleStep = ’auto’ (or 0 for backwards compatibility in both cases)
is selected, create_scaled_shape_model automatically determines a suitable angle or scale step length,
respectively, based on the size of the model. The automatically computed angle and scale step lengths can be
queried using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_scaled_shape_model. Because of this, the recognition of the model might require slightly more
time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_scaled_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_scaled_shape_model automatically determines the reduction
of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_scaled_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
nevertheless three values must be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_scaled_shape_model determines the three above described values
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfying. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_scaled_shape_model.
With MinContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by find_scaled_shape_model. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter Metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine MinContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image MinContrast should be set to 17. Obviously,
MinContrast must be smaller than Contrast. If the model should be recognized in very low contrast im-
ages, MinContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, MinContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
find_scaled_shape_model. If MinContrast is set to ’auto’, the minimum contrast is determined auto-
matically based on the noise in the model image. Consequently, an automatic determination only makes sense if
the image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If Metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime of
find_scaled_shape_model will increase slightly in this case. If Metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of find_scaled_shape_model increases significantly, it is usually better to create several models
that reflect the possible contrast variations of the object with create_scaled_shape_model, and to match
them simultaneously with find_scaled_shape_models. The above three metrics can only be applied to
single-channel images. If a multichannel image is used as the model image or as the search image only the first
channel will be used (and no error message will be returned). If Metric = ’ignore_color_polarity’, the model is
found even if the color contrast changes locally. This is, for example, the case if parts of the object can change their
color, e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels the
object is visible. In this mode, the runtime of find_scaled_shape_model can also increase significantly.
The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for Metric =
’ignore_color_polarity’ the number of channels in the model creation with create_scaled_shape_model
and in the search with find_scaled_shape_model can be different. This can, for example, be used to create
a model from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do
not need to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also
contain images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
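For illustration, a hedged call sequence analogous to the one shown for create_aniso_shape_model; the
image file, the ROI coordinates, the contrast values, and the signature of write_shape_model are assumptions.

#include "HalconC.h"

void create_scaled_model (void)
{
  Hobject model_img, roi, model_reduced;
  Hlong   model_id;

  read_image (&model_img, "model_image");             /* assumed file */
  gen_rectangle1 (&roi, 100.0, 150.0, 300.0, 420.0);  /* assumed ROI  */
  reduce_domain (model_img, roi, &model_reduced);

  /* rotations of roughly -22.5 to +22.5 degrees, scales 0.8 to 1.2;
     0 selects the automatic number of levels and step lengths        */
  create_scaled_shape_model (model_reduced, 0, -0.39, 0.79, 0.0,
                             0.8, 1.2, 0.0, "auto", "use_polarity",
                             30, 10, &model_id);

  write_shape_model (model_id, "scaled.shm");         /* signature assumed */
  clear_shape_model (model_id);
}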
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image whose domain will be used to create the model.


. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong / const char *


Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double / const char *
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. ScaleMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Minimum scale of the pattern.
Default Value : 0.9
Suggested values : ScaleMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleMin > 0
. ScaleMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Maximum scale of the pattern.
Default Value : 1.1
Suggested values : ScaleMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleMax ≥ ScaleMin
. ScaleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / const char *
Scale step length (resolution).
Default Value : "auto"
Suggested values : ScaleStep ∈ {"auto", 0.01, 0.02, 0.05, 0.1, 0.15, 0.2}
Restriction : ScaleStep ≥ 0
. Optimization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Kind of optimization and optionally method used for generating the model.
Default Value : "auto"
List of values : Optimization ∈ {"auto", "none", "point_reduction_low", "point_reduction_medium",
"point_reduction_high", "pregeneration", "no_pregeneration"}
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) Hlong / const char *
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum
size of the object parts.
Default Value : "auto"
Suggested values : Contrast ∈ {"auto", "auto_contrast", "auto_contrast_hyst", "auto_min_size", 10, 20,
30, 40, 60, 80, 100, 120, 140, 160}
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) Hlong / const char *
Minimum contrast of the objects in the search images.
Default Value : "auto"
Suggested values : MinContrast ∈ {"auto", 1, 2, 3, 5, 7, 10, 20, 30, 40}
Restriction : MinContrast < Contrast
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; (Htuple .) Hlong *
Handle of the model.
Result
If the parameters are valid, the operator create_scaled_shape_model returns the value H_MSG_TRUE.
If necessary an exception is raised. If the parameters NumLevels and Contrast are chosen such that the model
contains too few points, the error 8510 is raised.
Parallelization Information
create_scaled_shape_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
find_scaled_shape_model, find_scaled_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_aniso_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching

create_shape_model ( const Hobject Template, Hlong NumLevels,


double AngleStart, double AngleExtent, double AngleStep,
const char *Optimization, const char *Metric, Hlong Contrast,
Hlong MinContrast, Hlong *ModelID )

T_create_shape_model ( const Hobject Template, const Htuple NumLevels,


const Htuple AngleStart, const Htuple AngleExtent,
const Htuple AngleStep, const Htuple Optimization,
const Htuple Metric, const Htuple Contrast, const Htuple MinContrast,
Htuple *ModelID )

Prepare a shape model for matching.


The operator create_shape_model prepares a template, which is passed in the image Template, as a shape
model used for matching. The ROI of the model is passed as the domain of Template.
The model is generated using multiple image pyramid levels and is stored in memory. If a complete pregeneration
of the model is selected (see below), the model is generated at multiple rotations on each level. The output
parameter ModelID is a handle for this model, which is used in subsequent calls to find_shape_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as large as pos-
sible because by this the time necessary to find the object is significantly reduced. On the other hand, NumLevels
must be chosen such that the model is still recognizable and contains a sufficient number of points (at least four)
on the highest pyramid level. This can be checked using the output of inspect_shape_model. If not enough
model points are generated, the number of pyramid levels is reduced internally until enough model points are found
on the highest pyramid level. If this procedure would lead to a model with no pyramid levels, i.e., if the number
of model points is already too small on the lowest pyramid level, create_shape_model returns with an error
message. If NumLevels is set to ’auto’ (or 0 for backwards compatibility), create_shape_model deter-
mines the number of pyramid levels automatically. The automatically computed number of pyramid levels can be
queried using get_shape_model_params. In rare cases, it might happen that create_shape_model
determines a value for the number of pyramid levels that is too large or too small. If the number of pyramid levels
is chosen too large, the model may not be recognized in the image or it may be necessary to select very low param-
eters for MinScore or Greediness in find_shape_model in order to find the model. If the number of pyramid
levels is chosen too small, the time required to find the model in find_shape_model may increase. In these
cases, the number of pyramid levels should be selected using the output of inspect_shape_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which the model
can occur in the image. Note that the model can only be found in this range of angles by find_shape_model.
The parameter AngleStep determines the step length within the selected range of angles. Hence, if subpixel
accuracy is not specified in find_shape_model, this parameter specifies the accuracy that is achievable for
the angles in find_shape_model. AngleStep should be chosen based on the size of the object. Smaller
models do not possess many different discrete rotations in the image, and hence AngleStep should be chosen


larger for smaller models. If AngleExtent is not an integer multiple of AngleStep, AngleStep is modified
accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle range and stored in memory. The memory required to store the model is proportional to the number of
angle steps and the number of points in the model. Hence, if AngleStep is too small or AngleExtent too
big, it may happen that the model no longer fits into the (virtual) memory. In this case, either AngleStep
must be enlarged or AngleExtent must be reduced. In any case, it is desirable that the model completely
fits into the main memory, because this avoids paging by the operating system, and hence the time to find the
object will be much smaller. Since angles can be determined with subpixel resolution by find_shape_model,
AngleStep ≥ 1◦ can be selected for models of a diameter smaller than about 200 pixels. If AngleStep = ’auto’
(or 0 for backwards compatibility) is selected, create_shape_model automatically determines a suitable
angle step length based on the size of the model. The automatically computed angle step length can be queried
using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases, the
number of points is reduced according to the value of Optimization. If the number of points is reduced, it may
be necessary in find_shape_model to set the parameter Greediness to a smaller value, e.g., 0.7 or 0.8.
For small models, the reduction of the number of model points does not result in a speed-up of the search because
in this case usually significantly more potential instances of the model must be examined. If Optimization is
set to ’auto’, create_shape_model automatically determines the reduction of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value
(’pregenerate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_shape_model typically
returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a completely
pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two modes. If
maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
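As an illustration of the two ways of requesting a complete pregeneration described above, consider the following sketch (HDevelop syntax, analogous to the examples in this manual); the variable names are hypothetical and ImageReduced is assumed to be a domain-reduced model image:
* Variant 1: request complete pregeneration via the second value of Optimization.
create_shape_model (ImageReduced, 'auto', -0.39, 0.79, 'auto', ['auto','pregeneration'], 'use_polarity', 'auto', 'auto', ModelID1)
* Variant 2: set the system default and pass only the optimization method.
set_system ('pregenerate_shape_models', 'true')
create_shape_model (ImageReduced, 'auto', -0.39, 0.79, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID2)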
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
nevertheless three values must be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_shape_model determines the three above described values automati-
cally. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or the
minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not determined
automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If, for
example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfying. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the


object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_shape_model.
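For illustration, the following calls (HDevelop syntax; the remaining parameters use the documented default values and ImageReduced is a hypothetical domain-reduced model image) show the tuple forms of Contrast described above:
* Hysteresis thresholds 20/30 and a minimum component size of 5 points.
create_shape_model (ImageReduced, 'auto', -0.39, 0.79, 'auto', 'auto', 'use_polarity', [20,30,5], 'auto', ModelID)
* Minimum size determined automatically, hysteresis thresholds given explicitly.
create_shape_model (ImageReduced, 'auto', -0.39, 0.79, 'auto', 'auto', 'use_polarity', ['auto_min_size',20,30], 'auto', ModelID)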
With MinContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by find_shape_model. In other words, this parameter separates the model from the noise in the image.
Therefore, a good choice is the range of gray value changes caused by the noise in the image. If, for example, the
gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If multichannel images
are used for the model and the search images, and if the parameter Metric is set to ’ignore_color_polarity’ (see
below) the noise in one channel must be multiplied by the square root of the number of channels to determine
MinContrast. If, for example, the gray values fluctuate within a range of 10 gray levels in a single channel
and the image is a three-channel image MinContrast should be set to 17. Obviously, MinContrast must
be smaller than Contrast. If the model should be recognized in very low contrast images, MinContrast
must be set to a correspondingly small value. If the model should be recognized even if it is severely occluded,
MinContrast should be slightly larger than the range of gray value fluctuations created by noise in order to en-
sure that the position and rotation of the model are extracted robustly and accurately by find_shape_model. If
MinContrast is set to ’auto’, the minimum contrast is determined automatically based on the noise in the model
image. Consequently, an automatic determination only makes sense if the image noise during the recognition is
similar to the noise in the model image. Furthermore, in some cases it is advisable to increase the automatically
determined value in order to increase the robustness against occlusions (see above). The automatically computed
minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If Metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of find_shape_model
will increase slightly in this case. If Metric = ’ignore_local_polarity’, the model is found even if the contrast
changes locally. This mode can, for example, be useful if the object consists of a part with medium gray value,
within which either darker or brighter sub-objects lie. Since in this case the runtime of find_shape_model
increases significantly, it is usually better to create several models that reflect the possible contrast variations of
the object with create_shape_model, and to match them simultaneously with find_shape_models.
The above three metrics can only be applied to single-channel images. If a multichannel image is used as the
model image or as the search image only the first channel will be used (and no error message will be returned).
If Metric = ’ignore_color_polarity’, the model is found even if the color contrast changes locally. This is,
for example, the case if parts of the object can change their color, e.g., from red to green. In particular, this
mode is useful if it is not known in advance in which channels the object is visible. In this mode, the runtime
of find_shape_model can also increase significantly. The metric ’ignore_color_polarity’ can be used for
images with an arbitrary number of channels. If it is used for single-channel images it has the same effect as
’ignore_local_polarity’. It should be noted that for Metric = ’ignore_color_polarity’ the number of channels
in the model creation with create_shape_model and in the search with find_shape_model can be
different. This can, for example, be used to create a model from a synthetically generated single-channel image.
Furthermore, it should be noted that the channels do not need to contain a spectral subdivision of the light (like
in an RGB image). The channels can, for example, also contain images of the same object that were obtained by
illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image whose domain will be used to create the model.
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong / const char *
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}


. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double


Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double / const char *
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. Optimization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Kind of optimization and optionally method used for generating the model.
Default Value : "auto"
List of values : Optimization ∈ {"auto", "none", "point_reduction_low", "point_reduction_medium",
"point_reduction_high", "pregeneration", "no_pregeneration"}
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) Hlong / const char *
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum
size of the object parts.
Default Value : "auto"
Suggested values : Contrast ∈ {"auto", "auto_contrast", "auto_contrast_hyst", "auto_min_size", 10, 20,
30, 40, 60, 80, 100, 120, 140, 160}
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) Hlong / const char *
Minimum contrast of the objects in the search images.
Default Value : "auto"
Suggested values : MinContrast ∈ {"auto", 1, 2, 3, 5, 7, 10, 20, 30, 40}
Restriction : MinContrast < Contrast
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; (Htuple .) Hlong *
Handle of the model.
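The following minimal workflow sketch (HDevelop syntax, in the style of the examples in this manual) shows a typical use of create_shape_model together with find_shape_model; the file names and ROI coordinates are hypothetical, and the search parameters used here are described in the reference entry of find_shape_model:

* Create the model from an ROI of the model image.
read_image (ModelImage, 'model_image')
gen_rectangle1 (ROI, 100, 100, 300, 300)
reduce_domain (ModelImage, ROI, ImageReduced)
create_shape_model (ImageReduced, 'auto', -0.39, 0.79, 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
* Search for one instance of the model in a search image.
read_image (SearchImage, 'search_image')
find_shape_model (SearchImage, ModelID, -0.39, 0.79, 0.5, 1, 0.5, 'least_squares', 0, 0.9, Row, Column, Angle, Score)
* Free the model when it is no longer needed.
clear_shape_model (ModelID)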
Result
If the parameters are valid, the operator create_shape_model returns the value H_MSG_TRUE. If necessary
an exception is raised. If the parameters NumLevels and Contrast are chosen such that the model contains
too few points, the error 8510 is raised.
Parallelization Information
create_shape_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
find_shape_model, find_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_scaled_shape_model, create_aniso_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching


T_determine_shape_model_params ( const Hobject Template,
    const Htuple NumLevels, const Htuple AngleStart,
    const Htuple AngleExtent, const Htuple ScaleMin,
    const Htuple ScaleMax, const Htuple Optimization, const Htuple Metric,
    const Htuple Contrast, const Htuple MinContrast,
    const Htuple Parameters, Htuple *ParameterName,
    Htuple *ParameterValue )

Determine the parameters of a shape model.


determine_shape_model_params determines certain parameters of a shape model automatically from
the model image Template. The parameters to be determined can be specified with Parameters.
determine_shape_model_params can be used to determine the same parameters that are determined auto-
matically when the respective parameter in create_shape_model, create_scaled_shape_model,
or create_aniso_shape_model is set to ’auto’: the number of pyramid levels (Parameters =
’num_levels’), the angle step length (Parameters = ’angle_step’), the scale step length (Parameters =
’scale_step’ for isotropic scaling and ’scale_r_step’ and/or ’scale_c_step’ for anisotropic scaling), the kind of op-
timization (Parameters = ’optimization’), the threshold (Parameters = ’contrast’) or the hysteresis thresh-
olds (Parameters = ’contrast_hyst’) for the contrast, the minimum size of the object parts (Parameters =
’min_size’), and the minimum contrast (Parameters = ’min_contrast’). By passing a tuple of the above values
in Parameters, an arbitrary combination of these parameters can be determined. If all of the above parameters
should be determined, the value ’all’ can be passed. In this case both hysteresis thresholds are determined, i.e., the
operator behaves like passing ’contrast_hyst’ instead of ’contrast’.
determine_shape_model_params is mainly useful to determine the above parameters before creating the
model, e.g., in an interactive system, which makes suggestions for these parameters to the user, but enables the
user to modify the suggested values.
The automatically determined parameters are returned as a name-value pair in ParameterName and
ParameterValue. The parameter names in ParameterName are identical to the names in Parameters,
where, of course, the value ’all’ is replaced by the actual parameter names. An exception is the parameter
’contrast_hyst’, for which the two values ’contrast_low’ and ’contrast_high’ are returned.
The remaining parameters (NumLevels, AngleStart, AngleExtent, ScaleMin, ScaleMax,
Optimization, Metric, Contrast, and MinContrast) have the same meaning as in
create_shape_model, create_scaled_shape_model, and create_aniso_shape_model.
The description of these parameters can be looked up with these operators. These parameters are used by
determine_shape_model_params to calculate the parameters to be determined in the same manner as
in create_shape_model, create_scaled_shape_model, and create_aniso_shape_model.
It should be noted that if the parameters of a shape model with isotropic scaling should be determined, i.e.,
if Parameters contains ’scale_step’ either explicitly or implicitly via ’all’, the parameters ScaleMin and
ScaleMax must contain one value each. If the parameters of a shape model with anisotropic scaling should
be determined, i.e., if Parameters contains ’scale_r_step’ or ’scale_c_step’ either explicitly or implicitly, the
parameters ScaleMin and ScaleMax must contain two values each. In this case, the first value of the respective
parameter refers to the scaling in the row direction, while the second value refers to the scaling in the column
direction.
Note that in determine_shape_model_params some parameters appear that can also be determined au-
tomatically (NumLevels, Optimization, Contrast, MinContrast). If these parameters should not be
determined automatically, i.e., their name is not passed in ParameterName, the corresponding parameters must
contain valid values and must not be set to ’auto’. In contrast, if these parameters are to be determined au-
tomatically, their values are treated in the following way: If the optimization or the (hysteresis) contrast is to be
determined automatically, i.e., ParameterName contains the value ’optimization’ or ’contrast’ (’contrast_hyst’),
the values passed in Optimization and Contrast are ignored. In contrast, if the maximum number of pyra-
mid levels or the minimum contrast is to be determined automatically, i.e., ParameterName contains the value
’num_levels’ or ’min_contrast’, you can let HALCON determine suitable values and at the same time specify an
upper or lower boundary, respectively:
If the maximum number of pyramid levels should be specified in advance, NumLevels can be set to the particular
value. If in this case Parameters contains the value ’num_levels’, the computed number of pyramid levels is
at most NumLevels. If NumLevels is set to ’auto’ (or 0 for backwards compatibility), the number of pyramid
levels is determined without restrictions as large as possible.
If the minimum contrast should be specified in advance, MinContrast can be set to the particular value. If in this


case Parameters contains the value ’min_contrast’, the computed minimum contrast is at least MinContrast.
If MinContrast is set to ’auto’, the minimum contrast is determined without restrictions.
Parameter

. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image whose domain will be used to create the model.
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong / const char *
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. ScaleMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Minimum scale of the model.
Default Value : 0.9
Suggested values : ScaleMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleMin > 0
. ScaleMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Maximum scale of the model.
Default Value : 1.1
Suggested values : ScaleMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleMax ≥ ScaleMin
. Optimization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Kind of optimization.
Default Value : "auto"
List of values : Optimization ∈ {"auto", "none", "point_reduction_low", "point_reduction_medium",
"point_reduction_high"}
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity", "ignore_local_polarity",
"ignore_color_polarity"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . Hlong / const char *
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum
size of the object parts.
Default Value : "auto"
Suggested values : Contrast ∈ {"auto", "auto_contrast", "auto_contrast_hyst", "auto_min_size", 10, 20,
30, 40, 60, 80, 100, 120, 140, 160}
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Minimum contrast of the objects in the search images.
Default Value : "auto"
Suggested values : MinContrast ∈ {"auto", 1, 2, 3, 5, 7, 10, 20, 30, 40}
Restriction : MinContrast < Contrast
. Parameters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Parameters to be determined automatically.
Default Value : "all"
List of values : Parameters ∈ {"all", "num_levels", "angle_step", "scale_step", "scale_r_step",
"scale_c_step", "optimization", "contrast", "contrast_hyst", "min_size", "min_contrast"}
. ParameterName (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Name of the automatically determined parameter.


. ParameterValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *


Value of the automatically determined parameter.
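A minimal usage sketch (HDevelop syntax; the file name and threshold values are hypothetical) that determines all parameters with the documented default values; the returned name/value pairs can then be verified, e.g., with inspect_shape_model, before calling create_shape_model:

read_image (ModelImage, 'model_image')
threshold (ModelImage, Region, 128, 255)
reduce_domain (ModelImage, Region, ImageReduced)
determine_shape_model_params (ImageReduced, 'auto', -0.39, 0.79, 0.9, 1.1, 'auto', 'use_polarity', 'auto', 'auto', 'all', ParameterName, ParameterValue)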
Result
If the parameters are valid, the operator determine_shape_model_params returns the value
H_MSG_TRUE. If necessary an exception is raised. If the parameters NumLevels and Contrast are chosen
such that the model contains too few points, or the input image does not contain a sufficient number of significant
features, the error 8510 is raised.
Parallelization Information
determine_shape_model_params is reentrant and processed without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model
See also
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
Module
Matching

T_find_aniso_shape_model ( const Hobject Image, const Htuple ModelID,
    const Htuple AngleStart, const Htuple AngleExtent,
    const Htuple ScaleRMin, const Htuple ScaleRMax,
    const Htuple ScaleCMin, const Htuple ScaleCMax, const Htuple MinScore,
    const Htuple NumMatches, const Htuple MaxOverlap,
    const Htuple SubPixel, const Htuple NumLevels,
    const Htuple Greediness, Htuple *Row, Htuple *Column, Htuple *Angle,
    Htuple *ScaleR, Htuple *ScaleC, Htuple *Score )

Find the best matches of an anisotropic scale invariant shape model in an image.
The operator find_aniso_shape_model finds the best NumMatches instances of the anisotropic scale
invariant shape model ModelID in the input image Image. The model must have been created previously by
calling create_aniso_shape_model or read_shape_model.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.


The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_model. Hence, if the model,
for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the angle search space in
find_aniso_shape_model is, for example, set to AngleStart = 350◦ and AngleExtent = 20◦, the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ . To find
the model, in this example it would be necessary to select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.


The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the model should be found.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Htuple . Hlong
Handle of the model.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. ScaleRMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum scale of the model in the row direction.
Default Value : 0.9
Suggested values : ScaleRMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleRMin > 0
. ScaleRMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum scale of the model in the row direction.
Default Value : 1.1
Suggested values : ScaleRMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleRMax ≥ ScaleRMin
. ScaleCMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum scale of the model in the column direction.
Default Value : 0.9
Suggested values : ScaleCMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleCMin > 0
. ScaleCMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum scale of the model in the column direction.
Default Value : 1.1
Suggested values : ScaleCMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleCMax ≥ ScaleCMin
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of instances of the model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum overlap of the instances of the model to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05


. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the model.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the model.
. ScaleR (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the model in the row direction.
. ScaleC (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the model in the column direction.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the model.
Example (Syntax: HDevelop)

create_aniso_shape_model (ImageReduced, 0, rad(-15), rad(30), 0,
                          0.9, 1.1, 0, 0.9, 1.1, 0, ’none’,
                          ’use_polarity’, 30, 10, ModelID)
get_shape_model_contours (ModelXLD, ModelID, 1)
find_aniso_shape_model (SearchImage, ModelID, rad(-15), rad(30),
0.9, 1.1, 0.9, 1.1, 0.5, 1, 0.5, ’interpolation’,
0, 0, Row, Column, Angle, ScaleR, ScaleC, Score)
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, ScaleR, ScaleC, 0, 0, HomMat2DScale)
hom_mat2d_rotate (HomMat2DScale, Angle, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, Row, Column, HomMat2DObject)
affine_trans_contour_xld (ModelXLD, ObjectXLD, HomMat2DObject)
affine_trans_pixel (HomMat2DObject, 0, 0, RowObject, ColObject)

Result
If the parameter values are correct, the operator find_aniso_shape_model returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_aniso_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_aniso_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_scaled_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg


See also
set_system, get_system
Module
Matching

T_find_aniso_shape_models ( const Hobject Image,
    const Htuple ModelIDs, const Htuple AngleStart,
    const Htuple AngleExtent, const Htuple ScaleRMin,
    const Htuple ScaleRMax, const Htuple ScaleCMin,
    const Htuple ScaleCMax, const Htuple MinScore,
    const Htuple NumMatches, const Htuple MaxOverlap,
    const Htuple SubPixel, const Htuple NumLevels,
    const Htuple Greediness, Htuple *Row, Htuple *Column, Htuple *Angle,
    Htuple *ScaleR, Htuple *ScaleC, Htuple *Score, Htuple *Model )

Find the best matches of multiple anisotropic scale invariant shape models.
The operator find_aniso_shape_models finds the best NumMatches instances of the anisotropic scale
invariant shape models that are passed in ModelIDs in the input image Image. The models must have been
created previously by calling create_aniso_shape_model or read_shape_model.
Hence, in contrast to find_aniso_shape_model, multiple models can be searched in the same image in
one call. This changes the semantics of all input parameters to some extent. All input parameters must either
contain one element, in which case the parameter is used for all models, or must contain the same number of ele-
ments as ModelIDs, in which case each parameter element refers to the corresponding element in ModelIDs.
(NumLevels may also contain either two or twice the number of elements as ModelIDs; see below.) As usual,
the domain of the input image Image is used to restrict the search space for the reference point of the models
ModelIDs. Consistent with the above semantics, the input image Image can therefore contain a single image
object or an image object tuple containing multiple image objects. If Image contains a single image object, its
domain is used as the region of interest for all models in ModelIDs. If Image contains multiple image objects,
each domain is used as the region of interest for the corresponding model in ModelIDs. In this case, the im-
age matrix of all image objects in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary
manner using concat_obj, but must be created from the same image using add_channels or equivalent
calls. If this is not the case, an error message is returned. The above semantics also hold for the input con-
trol parameters. Hence, for example, MinScore can contain a single value or the same number of values as
ModelIDs. In the first case, the value of MinScore is used for all models in ModelIDs, while in the second
case the respective value of the elements in MinScore is used for the corresponding model in ModelIDs. An
extension to these semantics holds for NumMatches and MaxOverlap. If NumMatches contains one ele-
ment, find_aniso_shape_models returns the best NumMatches instances of the model irrespective of the
type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected, it
can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in NumMatches. If,
for example, NumMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of NumMatches, see below. A similar extension
of the semantics holds for MaxOverlap. If a single value is passed for MaxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e.,
only instances of the same model that overlap too much are eliminated. In this mode, models of different types
may overlap completely. For a detailed description of the semantics of MaxOverlap, see below. Hence, a call to
find_aniso_shape_models with multiple values for ModelIDs, NumMatches and MaxOverlap has
the same effect as multiple independent calls to find_aniso_shape_model with the respective parameters.
However, a single call to find_aniso_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.


The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_aniso_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_models. Hence, if the model,
for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the angle search space in
find_aniso_shape_models is, for example, set to AngleStart = 350◦ and AngleExtent = 20◦ , the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ . To find
the model, in this example it would be necessary to select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined


with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
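The following sketch (HDevelop syntax) illustrates the per-model semantics described above for a call with two models; ModelID1 and ModelID2 are assumed to be handles created beforehand with create_aniso_shape_model, and the numeric values are only examples:
* One instance per model; 5 pyramid levels (tracked down to level 2) for the first
* model, 4 pyramid levels (tracked down to level 1) for the second model.
find_aniso_shape_models (SearchImage, [ModelID1,ModelID2], rad(-15), rad(30), 0.9, 1.1, 0.9, 1.1, 0.5, [1,1], [0.5,0.5], 'least_squares', [5,2,4,1], 0.9, Row, Column, Angle, ScaleR, ScaleC, Score, Model)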
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Input image in which the models should be found.
. ModelIDs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model(-array) ; Htuple . Hlong
Handle of the models.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Smallest rotation of the models.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. ScaleRMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Minimum scale of the models in the row direction.
Default Value : 0.9
Suggested values : ScaleRMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleRMin > 0

. ScaleRMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Maximum scale of the models in the row direction.
Default Value : 1.1
Suggested values : ScaleRMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleRMax ≥ ScaleRMin
. ScaleCMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Minimum scale of the models in the column direction.
Default Value : 0.9
Suggested values : ScaleCMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleCMin > 0
. ScaleCMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Maximum scale of the models in the column direction.
Default Value : 1.1
Suggested values : ScaleCMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleCMax ≥ ScaleCMin
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Minimum score of the instances of the models to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of instances of the models to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Maximum overlap of the instances of the models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the models.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the models.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the models.
. ScaleR (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the models in the row direction.

. ScaleC (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the models in the column direction.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the models.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Index of the found instances of the models.
Result
If the parameter values are correct, the operator find_aniso_shape_models returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_aniso_shape_models is reentrant and processed without parallelization.
Possible Predecessors
add_channels, create_aniso_shape_model, read_shape_model,
set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_models, find_scaled_shape_models, find_shape_model,
find_scaled_shape_model, find_aniso_shape_model, best_match_rot_mg
See also
set_system, get_system
Module
Matching

T_find_scaled_shape_model ( const Hobject Image,
                             const Htuple ModelID, const Htuple AngleStart,
                             const Htuple AngleExtent, const Htuple ScaleMin,
                             const Htuple ScaleMax, const Htuple MinScore, const Htuple NumMatches,
                             const Htuple MaxOverlap, const Htuple SubPixel,
                             const Htuple NumLevels, const Htuple Greediness, Htuple *Row,
                             Htuple *Column, Htuple *Angle, Htuple *Scale, Htuple *Score )

Find the best matches of a scale invariant shape model in an image.


The operator find_scaled_shape_model finds the best NumMatches instances of the scale invariant
shape model ModelID in the input image Image. The model must have been created previously by calling
create_scaled_shape_model or read_shape_model.
The position, rotation, and scale of the found instances of the model are returned in Row, Column, Angle,
and Scale. The coordinates Row and Column are the coordinates of the origin of the shape model in the
search image. By default, the origin is the center of gravity of the domain (region) of the image that was
used to create the shape model with create_scaled_shape_model. A different origin can be set with
set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_scaled_shape_model. A different origin set with set_shape_model_origin is not taken
into account. The model is searched within those points of the domain of the image, in which the model lies
completely within the image. This means that the model will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than MinScore (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause models that extend beyond the im-
age border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are
regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase
in this mode.
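For example, to also find instances that extend beyond the image border, the search can be wrapped as follows (a
sketch in HDevelop syntax; SearchImage, ModelID, and the numeric values are assumed, and the system parameter
is reset to its default ’false’ afterwards):

    set_system (’border_shape_models’, ’true’)
    find_scaled_shape_model (SearchImage, ModelID, rad(-45), rad(180),
                             0.9, 1.1, 0.5, 1, 0.5, ’interpolation’,
                             0, 0, Row, Column, Angle, Scale, Score)
    set_system (’border_shape_models’, ’false’)
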
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
find_scaled_shape_model. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_scaled_shape_model is, for example, set
to AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
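In HDevelop syntax, the corrected angle range of this example would thus be passed in radians, for instance (a
sketch; SearchImage, ModelID, and the remaining parameter values are assumed):

    find_scaled_shape_model (SearchImage, ModelID, rad(-10), rad(20),
                             0.9, 1.1, 0.5, 1, 0.5, ’interpolation’,
                             0, 0, Row, Column, Angle, Scale, Score)
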
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
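For example, a call that starts the matching on pyramid level 4, tracks the matches only down to level 2, and
compensates the reduced accuracy by a least-squares adjustment might look as follows (a sketch in HDevelop
syntax with assumed values for the remaining parameters):

    find_scaled_shape_model (SearchImage, ModelID, rad(-45), rad(180),
                             0.9, 1.1, 0.5, 1, 0.5, ’least_squares’,
                             [4,2], 0.9, Row, Column, Angle, Scale, Score)
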
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the model should be found.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Htuple . Hlong
Handle of the model.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. ScaleMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum scale of the model.
Default Value : 0.9
Suggested values : ScaleMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleMin > 0
. ScaleMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum scale of the model.
Default Value : 1.1
Suggested values : ScaleMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleMax ≥ ScaleMin
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of instances of the model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum overlap of the instances of the model to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05

. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the model.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the model.
. Scale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the model.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the model.
Example (Syntax: HDevelop)

create_scaled_shape_model (ImageReduced, 0, rad(-45), rad(180), 0,
                           0.9, 1.1, 0, ’none’, ’use_polarity’,
                           30, 10, ModelID)
get_shape_model_contours (ModelXLD, ModelID, 1)
find_scaled_shape_model (SearchImage, ModelID, rad(-45), rad(180),
                         0.9, 1.1, 0.5, 1, 0.5, ’interpolation’,
                         0, 0, Row, Column, Angle, Scale, Score)
vector_angle_to_rigid (0, 0, 0, Row, Column, Angle, HomMat2DTmp)
hom_mat2d_scale (HomMat2DTmp, Scale, Scale, Row, Column, HomMat2DObject)
affine_trans_contour_xld (ModelXLD, ObjectXLD, HomMat2DObject)
affine_trans_pixel (HomMat2DObject, 0, 0, RowObject, ColObject)

Result
If the parameter values are correct, the operator find_scaled_shape_model returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_scaled_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_scaled_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_aniso_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching


T_find_scaled_shape_models ( const Hobject Image,
                              const Htuple ModelIDs, const Htuple AngleStart,
                              const Htuple AngleExtent, const Htuple ScaleMin,
                              const Htuple ScaleMax, const Htuple MinScore, const Htuple NumMatches,
                              const Htuple MaxOverlap, const Htuple SubPixel,
                              const Htuple NumLevels, const Htuple Greediness, Htuple *Row,
                              Htuple *Column, Htuple *Angle, Htuple *Scale, Htuple *Score,
                              Htuple *Model )

Find the best matches of multiple scale invariant shape models.


The operator find_scaled_shape_models finds the best NumMatches instances of the scale invariant
shape models that are passed in ModelIDs in the input image Image. The models must have been created
previously by calling create_scaled_shape_model or read_shape_model.
Hence, in contrast to find_scaled_shape_model, multiple models can be searched in the same image in
one call. This changes the semantics of all input parameters to some extent. All input parameters must either
contain one element, in which case the parameter is used for all models, or must contain the same number of ele-
ments as ModelIDs, in which case each parameter element refers to the corresponding element in ModelIDs.
(NumLevels may also contain either two or twice the number of elements as ModelIDs; see below.) As usual,
the domain of the input image Image is used to restrict the search space for the reference point of the models
ModelIDs. Consistent with the above semantics, the input image Image can therefore contain a single image
object or an image object tuple containing multiple image objects. If Image contains a single image object, its
domain is used as the region of interest for all models in ModelIDs. If Image contains multiple image objects,
each domain is used as the region of interest for the corresponding model in ModelIDs. In this case, the im-
age matrix of all image objects in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary
manner using concat_obj, but must be created from the same image using add_channels or equivalent
calls. If this is not the case, an error message is returned. The above semantics also hold for the input con-
trol parameters. Hence, for example, MinScore can contain a single value or the same number of values as
ModelIDs. In the first case, the value of MinScore is used for all models in ModelIDs, while in the second
case the respective value of the elements in MinScore is used for the corresponding model in ModelIDs. An
extension to these semantics holds for NumMatches and MaxOverlap. If NumMatches contains one ele-
ment, find_scaled_shape_models returns the best NumMatches instances of the model irrespective of
the type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected,
it can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in NumMatches. If,
for example, NumMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of NumMatches, see below. A similar extension
of the semantics holds for MaxOverlap. If a single value is passed for MaxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e.,
only instances of the same model that overlap too much are eliminated. In this mode, models of different types
may overlap completely. For a detailed description of the semantics of MaxOverlap, see below. Hence, a call to
find_scaled_shape_models with multiple values for ModelIDs, NumMatches and MaxOverlap has
the same effect as multiple independent calls to find_scaled_shape_model with the respective parameters.
However, a single call to find_scaled_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
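The following sketch (HDevelop syntax; the handles ModelID1 and ModelID2 and the concrete values are only
assumed) searches for one instance of each of two models with individual minimum scores; Model then contains 0
for instances of the first model and 1 for instances of the second model:

    find_scaled_shape_models (SearchImage, [ModelID1,ModelID2], rad(-45), rad(45),
                              0.8, 1.2, [0.6,0.8], [1,1], 0.5, ’interpolation’,
                              0, 0.9, Row, Column, Angle, Scale, Score, Model)
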
The position, rotation, and scale of the found instances of the model are returned in Row, Column, Angle,
and Scale. The coordinates Row and Column are the coordinates of the origin of the shape model in the
search image. By default, the origin is the center of gravity of the domain (region) of the image that was
used to create the shape model with create_scaled_shape_model. A different origin can be set with
set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_scaled_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_scaled_shape_model. A different origin set with set_shape_model_origin is not taken
into account. The model is searched within those points of the domain of the image, in which the model lies
completely within the image. This means that the model will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than MinScore (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause models that extend beyond the im-
age border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are
regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase
in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
find_scaled_shape_models. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_scaled_shape_models is, for example, set
to AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5 , 2 , 4 , 1 ]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image in which the models should be found.
. ModelIDs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model(-array) ; Htuple . Hlong
Handle of the models.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Smallest rotation of the models.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. ScaleMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Minimum scale of the models.
Default Value : 0.9
Suggested values : ScaleMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleMin > 0
. ScaleMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Maximum scale of the models.
Default Value : 1.1
Suggested values : ScaleMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleMax ≥ ScaleMin

. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Minimum score of the instances of the models to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of instances of the models to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Maximum overlap of the instances of the models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the models.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the models.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the models.
. Scale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Scale of the found instances of the models.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the models.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Index of the found instances of the models.
Result
If the parameter values are correct, the operator find_scaled_shape_models returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_scaled_shape_models is reentrant and processed without parallelization.
Possible Predecessors
add_channels, create_scaled_shape_model, read_shape_model,
set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_models, find_aniso_shape_models, find_shape_model,
find_scaled_shape_model, find_aniso_shape_model, best_match_rot_mg
See also
set_system, get_system
Module
Matching

T_find_shape_model ( const Hobject Image, const Htuple ModelID,
                      const Htuple AngleStart, const Htuple AngleExtent,
                      const Htuple MinScore, const Htuple NumMatches,
                      const Htuple MaxOverlap, const Htuple SubPixel,
                      const Htuple NumLevels, const Htuple Greediness, Htuple *Row,
                      Htuple *Column, Htuple *Angle, Htuple *Score )

Find the best matches of a shape model in an image.


The operator find_shape_model finds the best NumMatches instances of the shape model ModelID in
the input image Image. The model must have been created previously by calling create_shape_model or
read_shape_model.
The position and rotation of the found instances of the model is returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin set with set_shape_model_origin is not taken into account.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_shape_model. In particular, this means that the angle ranges of the model and the search must
truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in find_shape_model. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_shape_model is, for example, set to
AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
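If, for example, a least-squares adjustment is desired, the call might look as follows (a sketch in HDevelop syntax;
SearchImage, ModelID, and the numeric values are assumed):

    find_shape_model (SearchImage, ModelID, rad(-45), rad(180),
                      0.7, 1, 0.5, ’least_squares’,
                      0, 0.9, Row, Column, Angle, Score)
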
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the model should be found.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Htuple . Hlong
Handle of the model.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}

. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of instances of the model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum overlap of the instances of the model to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the model.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the model.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the model.
Example (Syntax: HDevelop)

create_shape_model (ImageReduced, 0, rad(-45), rad(180), 0,
                    ’none’, ’use_polarity’, 30, 10, ModelID)
get_shape_model_contours (ModelXLD, ModelID, 1)
find_shape_model (SearchImage, ModelID, rad(-45), rad(180),
                  0.5, 1, 0.5, ’interpolation’,
                  0, 0, Row, Column, Angle, Score)
vector_angle_to_rigid (0, 0, 0, Row, Column, Angle, HomMat2DObject)
affine_trans_contour_xld (ModelXLD, ObjectXLD, HomMat2DObject)
affine_trans_pixel (HomMat2DObject, 0, 0, RowObject, ColObject)


Result
If the parameter values are correct, the operator find_shape_model returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_scaled_shape_model, find_aniso_shape_model, find_scaled_shape_models,
find_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching

T_find_shape_models ( const Hobject Image, const Htuple ModelIDs,
                       const Htuple AngleStart, const Htuple AngleExtent,
                       const Htuple MinScore, const Htuple NumMatches,
                       const Htuple MaxOverlap, const Htuple SubPixel,
                       const Htuple NumLevels, const Htuple Greediness, Htuple *Row,
                       Htuple *Column, Htuple *Angle, Htuple *Score, Htuple *Model )

Find the best matches of multiple shape models.


The operator find_shape_models finds the best NumMatches instances of the shape models that are passed
in the tuple ModelIDs in the input image Image. The models must have been created previously by calling
create_shape_model or read_shape_model.
Hence, in contrast to find_shape_model, multiple models can be searched in the same image in one call. This
changes the semantics of all input parameters to some extent. All input parameters must either contain one element,
in which case the parameter is used for all models, or must contain the same number of elements as ModelIDs,
in which case each parameter element refers to the corresponding element in ModelIDs. (NumLevels may also
contain either two or twice the number of elements as ModelIDs; see below.) As usual, the domain of the input
image Image is used to restrict the search space for the reference point of the models ModelIDs. Consistent
with the above semantics, the input image Image can therefore contain a single image object or an image object
tuple containing multiple image objects. If Image contains a single image object, its domain is used as the region
of interest for all models in ModelIDs. If Image contains multiple image objects, each domain is used as the
region of interest for the corresponding model in ModelIDs. In this case, the image matrix of all image objects
in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary manner using concat_obj,
but must be created from the same image using add_channels or equivalent calls. If this is not the case, an
error message is returned. The above semantics also hold for the input control parameters. Hence, for example,
MinScore can contain a single value or the same number of values as ModelIDs. In the first case, the value
of MinScore is used for all models in ModelIDs, while in the second case the respective value of the elements
in MinScore is used for the corresponding model in ModelIDs. An extension to these semantics holds for
NumMatches and MaxOverlap. If NumMatches contains one element, find_shape_models returns the
best NumMatches instances of the model irrespective of the type of the model. If, for example, two models are
passed in ModelIDs and NumMatches = 2 is selected, it can happen that two instances of the first model and no
instances of the second model, one instance of the first model and one instance of the second model, or no instances
of the first model and two instances of the second model are returned. If, on the other hand, NumMatches contains
multiple values, the number of instances returned of the different models corresponds to the number specified in
the respective entry in NumMatches. If, for example, NumMatches = [1, 1] is selected, one instance of the
first model and one instance of the second model is returned. For a detailed description of the semantics of
NumMatches, see below. A similar extension of the semantics holds for MaxOverlap. If a single value is
passed for MaxOverlap, the overlap is computed for all found instances of the different models, irrespective of
the model type, i.e., instances of the same or of different models that overlap too much are eliminated. If, on the
other hand, multiple values are passed in MaxOverlap, the overlap is only computed for found instances of the
model that have the same model type, i.e., only instances of the same model that overlap too much are eliminated.
In this mode, models of different types may overlap completely. For a detailed description of the semantics
of MaxOverlap, see below. Hence, a call to find_shape_models with multiple values for ModelIDs,
NumMatches and MaxOverlap has the same effect as multiple independent calls to find_shape_model
with the respective parameters. However, a single call to find_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
The position and rotation of the found instances of the model is returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_shape_model shows how to create this matrix and use it to display the model at
the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin set with set_shape_model_origin is not taken into account.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_shape_model. In particular, this means that the angle ranges of the model and the search must
truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in find_shape_models. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_shape_models is, for example, set to
AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness =
0.9.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Input image in which the models should be found.
. ModelIDs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model(-array) ; Htuple . Hlong
Handle of the models.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Smallest rotation of the models.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}


. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double


Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Minimum score of the instances of the models to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of instances of the models to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Maximum overlap of the instances of the models to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Subpixel accuracy if not equal to ’none’.
Default Value : "least_squares"
List of values : SubPixel ∈ {"none", "interpolation", "least_squares", "least_squares_high",
"least_squares_very_high"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the found instances of the models.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the found instances of the models.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad-array ; Htuple . double *
Rotation angle of the found instances of the models.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the models.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Index of the found instances of the models.
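Example (Syntax: C)
A minimal sketch of a call with two models: the file names are placeholders, and the tuple helpers
create_tuple and set_i/set_d/set_s are assumed to behave as described in the HALCON/C User's Manual.

/* Search two previously created shape models in one image (sketch). */
Hobject  image;
Hlong    id1, id2;
Htuple   model_ids, angle_start, angle_extent, min_score, num_matches;
Htuple   max_overlap, sub_pixel, num_levels, greediness;
Htuple   row, column, angle, score, model;

read_shape_model ("model_1.shm", &id1);            /* placeholder file names */
read_shape_model ("model_2.shm", &id2);
read_image (&image, "search_image");

create_tuple (&model_ids, 2);
set_i (model_ids, id1, 0);
set_i (model_ids, id2, 1);
create_tuple (&angle_start, 1);   set_d (angle_start, -0.39, 0);
create_tuple (&angle_extent, 1);  set_d (angle_extent, 0.78, 0);
create_tuple (&min_score, 1);     set_d (min_score, 0.5, 0);
/* One instance of each model. */
create_tuple (&num_matches, 2);   set_i (num_matches, 1, 0);  set_i (num_matches, 1, 1);
create_tuple (&max_overlap, 1);   set_d (max_overlap, 0.5, 0);
create_tuple (&sub_pixel, 1);     set_s (sub_pixel, "least_squares", 0);
/* Interleaved NumLevels: 5 levels / lowest level 2 for the first model,
   4 levels / lowest level 1 for the second model. */
create_tuple (&num_levels, 4);
set_i (num_levels, 5, 0);  set_i (num_levels, 2, 1);
set_i (num_levels, 4, 2);  set_i (num_levels, 1, 3);
create_tuple (&greediness, 1);    set_d (greediness, 0.9, 0);

T_find_shape_models (image, model_ids, angle_start, angle_extent, min_score,
                     num_matches, max_overlap, sub_pixel, num_levels,
                     greediness, &row, &column, &angle, &score, &model);
/* Row, Column, Angle, Score, and Model are returned as tuples; see the
   HALCON/C User's Manual for how to access and release them. */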
Result
If the parameter values are correct, the operator find_shape_models returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_models is reentrant and processed without parallelization.
Possible Predecessors
add_channels, create_shape_model, read_shape_model, set_shape_model_origin


Possible Successors
clear_shape_model
Alternatives
find_scaled_shape_models, find_aniso_shape_models, find_shape_model,
find_scaled_shape_model, find_aniso_shape_model, best_match_rot_mg
See also
set_system, get_system
Module
Matching

get_shape_model_contours ( Hobject *ModelContours, Hlong ModelID,
                           Hlong Level )

T_get_shape_model_contours ( Hobject *ModelContours,
                             const Htuple ModelID, const Htuple Level )

Return the contour representation of a shape model.


The operator get_shape_model_contours returns a representation of the shape model ModelID as XLD
contours in ModelContours. The parameter Level determines for which pyramid level of the model the
contour representation should be returned. The contours can be used, for example, to visualize the found instances
of the model in an image. It should be noted that the position of ModelContours is normalized such that the
reference point of the model (see set_shape_model_origin) lies at the pixel position (0, 0). Hence, the
contours simply need to be translated to the found position in the image (and possibly rotated and scaled around
this point).
Parameter

. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *


Contour representation of the shape model.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong
Handle of the model.
. Level (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Pyramid level for which the contour representation should be returned.
Default Value : 1
Suggested values : Level ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : Level ≥ 1
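Example (Syntax: C)
A minimal sketch; the model file name is a placeholder.

Hobject  model_contours;
Hlong    model_id;

read_shape_model ("model.shm", &model_id);          /* placeholder file name */
/* Contour representation of the lowest pyramid level; the contours are
   normalized such that the reference point lies at (0, 0). */
get_shape_model_contours (&model_contours, model_id, 1);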
Result
If the handle of the model is valid, the operator get_shape_model_contours returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
get_shape_model_contours is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model
See also
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
Module
Matching


get_shape_model_origin ( Hlong ModelID, double *Row, double *Column )


T_get_shape_model_origin ( const Htuple ModelID, Htuple *Row,
Htuple *Column )

Return the origin (reference point) of a shape model.


The operator get_shape_model_origin returns the origin (reference point) of the shape model ModelID.
The origin is specified relative to the center of gravity of the domain (region) of the image that was
used to create the shape model with create_shape_model, create_scaled_shape_model, or
create_aniso_shape_model. Hence, an origin of (0,0) means that the center of gravity of the domain
of the model image is used as the origin. An origin of (-20,-40) means that the origin lies to the upper left of the
center of gravity.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong
Handle of the model.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row coordinate of the origin of the shape model.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column coordinate of the origin of the shape model.
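Example (Syntax: C)
A minimal sketch; the model file name is a placeholder.

Hlong   model_id;
double  row, column;

read_shape_model ("model.shm", &model_id);          /* placeholder file name */
get_shape_model_origin (model_id, &row, &column);
/* row and column are (0, 0) unless the origin was changed with
   set_shape_model_origin. */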
Result
If the handle of the model is valid, the operator get_shape_model_origin returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_shape_model_origin is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model, set_shape_model_origin
Possible Successors
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
See also
area_center
Module
Matching

get_shape_model_params ( Hlong ModelID, Hlong *NumLevels,
                         double *AngleStart, double *AngleExtent, double *AngleStep,
                         double *ScaleMin, double *ScaleMax, double *ScaleStep, char *Metric,
                         Hlong *MinContrast )

T_get_shape_model_params ( const Htuple ModelID, Htuple *NumLevels,
                           Htuple *AngleStart, Htuple *AngleExtent, Htuple *AngleStep,
                           Htuple *ScaleMin, Htuple *ScaleMax, Htuple *ScaleStep, Htuple *Metric,
                           Htuple *MinContrast )

Return the parameters of a shape model.


The operator get_shape_model_params returns the parameters of the shape model ModelID
that were used to create it using create_shape_model, create_scaled_shape_model, or
create_aniso_shape_model. In particular, this output can be used to check the parameters NumLevels,
AngleStep, ScaleStep, and MinContrast if they were determined automatically during the model creation
with create_shape_model, create_scaled_shape_model, or create_aniso_shape_model.
If the shape model was created using create_shape_model or create_scaled_shape_model, a single
value is returned in ScaleMin, ScaleMax, and ScaleStep. These parameters correspond to the isotropic
scaling parameters of the shape model. If the shape model was created using create_aniso_shape_model,
two values each are returned in the above three parameters. Here, the first value of the respective parameter refers
to the scaling in the row direction, while the second value refers to the scaling in the column direction.
Note that the parameters Optimization and Contrast, which can also be determined automatically during
the model creation, cannot be queried using get_shape_model_params. If their values are of interest,
determine_shape_model_params should be used instead.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; (Htuple .) Hlong
Handle of the model.
. NumLevels (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong *
Number of pyramid levels.
. AngleStart (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double *
Smallest rotation of the pattern.
. AngleExtent (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double *
Extent of the rotation angles.
Assertion : AngleExtent ≥ 0
. AngleStep (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double *
Step length of the angles (resolution).
Assertion : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. ScaleMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum scale of the pattern.
Assertion : ScaleMin > 0
. ScaleMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum scale of the pattern.
Assertion : ScaleMax ≥ ScaleMin
. ScaleStep (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Scale step length (resolution).
Assertion : ScaleStep ≥ 0
. Metric (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) char *
Match metric.
. MinContrast (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) Hlong *
Minimum contrast of the objects in the search images.
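Example (Syntax: C)
A sketch that reads back the parameters of a stored model; the file name is a placeholder, and MAX_STRING
is assumed to be the string buffer size constant provided by the HALCON/C interface.

Hlong   model_id, num_levels, min_contrast;
double  angle_start, angle_extent, angle_step;
double  scale_min, scale_max, scale_step;
char    metric[MAX_STRING];

read_shape_model ("model.shm", &model_id);          /* placeholder file name */
/* Useful to inspect NumLevels, AngleStep, ScaleStep, and MinContrast if
   they were determined automatically during model creation. */
get_shape_model_params (model_id, &num_levels, &angle_start, &angle_extent,
                        &angle_step, &scale_min, &scale_max, &scale_step,
                        metric, &min_contrast);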
Result
If the handle of the model is valid, the operator get_shape_model_params returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_shape_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model
See also
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
Module
Matching

inspect_shape_model ( const Hobject Image, Hobject *ModelImages,
                      Hobject *ModelRegions, Hlong NumLevels, Hlong Contrast )

T_inspect_shape_model ( const Hobject Image, Hobject *ModelImages,
                        Hobject *ModelRegions, const Htuple NumLevels, const Htuple Contrast )

Create the representation of a shape model.


inspect_shape_model creates a representation of a shape model. The operator is particularly useful in or-
der to determine the parameters NumLevels and Contrast, which are used in create_shape_model,
create_scaled_shape_model, or create_aniso_shape_model, quickly and conveniently. The
representation of the model is created on multiple image pyramid levels, where the number of levels is de-
termined by NumLevels. In contrast to create_shape_model, create_scaled_shape_model,
and create_aniso_shape_model, the model is only created for the rotation and scale of the ob-
ject in the input image, i.e., 0◦ and 1. As output, inspect_shape_model creates an image ob-
ject ModelImages containing the images of the individual levels of the image pyramid as well as a re-
gion in ModelRegions for each pyramid level that represents the model at the respective pyramid level.
The individual objects can be accessed with select_obj. If the input image Image has one chan-
nel the representation of the model is created with the method that is used in create_shape_model,
create_scaled_shape_model or create_aniso_shape_model for the metrics ’use_polarity’, ’ig-
nore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than one channel the rep-
resentation is created with the method that is used for the metric ’ignore_color_polarity’. As described for
create_shape_model, create_scaled_shape_model, and create_aniso_shape_model, the
number of pyramid levels should be chosen as large as possible, while taking into account that the model must
be recognizable on the highest pyramid level and must have enough model points. The parameter Contrast
should be chosen such that only the significant features of the template object are used for the model. Contrast
can also contain a tuple with two values. In this case, the model is segmented using a method similar to the
hysteresis threshold method used in edges_image. Here, the first element of the tuple determines the lower
threshold, while the second element determines the upper threshold. For more information about the hysteresis
threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value as the last
element of the tuple. This value determines a threshold for the selection of significant model components based
on the size of the components, i.e., components that have fewer points than the minimum size thus specified are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level. If small
model components should be suppressed, but hysteresis thresholding should not be performed, nevertheless three
values must be specified in Contrast. In this case, the first two values can simply be set to identical values.
In its typical use, inspect_shape_model is called interactively multiple times with different parameters
for NumLevels and Contrast, until a satisfactory model is obtained. After this, create_shape_model,
create_scaled_shape_model, or create_aniso_shape_model are called with the parameters thus
obtained.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2


Input image.
. ModelImages (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject * : byte / uint2
Image pyramid of the input image
. ModelRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Model region pyramid
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of pyramid levels.
Default Value : 4
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) Hlong
Threshold or hysteresis thresholds for the contrast of the object in the image and optionally minimum size of
the object parts.
Default Value : 30
Suggested values : Contrast ∈ {10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
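Example (Syntax: C)
A sketch of the typical interactive use; the image file name is a placeholder, and select_obj is used as
suggested under Possible Successors to access a single pyramid level.

Hobject  image, model_images, model_regions, region_level2;

read_image (&image, "model_image");                 /* placeholder file name */
/* Try 4 pyramid levels and a contrast threshold of 30; repeat with other
   values until the model regions look satisfactory. */
inspect_shape_model (image, &model_images, &model_regions, 4, 30);
/* Access the model region of the second pyramid level. */
select_obj (model_regions, &region_level2, 2);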
Result
If the parameters are valid, the operator inspect_shape_model returns the value H_MSG_TRUE. If neces-
sary an exception is raised.
Parallelization Information
inspect_shape_model is reentrant and processed without parallelization.
Possible Predecessors
reduce_domain
Possible Successors
select_obj
See also
create_shape_model, create_scaled_shape_model, create_aniso_shape_model


Module
Foundation

read_shape_model ( const char *FileName, Hlong *ModelID )


T_read_shape_model ( const Htuple FileName, Htuple *ModelID )

Read a shape model from a file.


The operator read_shape_model reads a shape model, which has been written with write_shape_model,
from the file FileName.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong *
Handle of the model.
Result
If the file name is valid, the operator read_shape_model returns the value H_MSG_TRUE. If necessary an
exception is raised.
Parallelization Information
read_shape_model is processed completely exclusively without parallelization.
Possible Successors
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
See also
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
clear_shape_model
Module
Matching

set_shape_model_origin ( Hlong ModelID, double Row, double Column )


T_set_shape_model_origin ( const Htuple ModelID, const Htuple Row,
const Htuple Column )

Set the origin (reference point) of a shape model.


The operator set_shape_model_origin sets the origin (reference point) of the shape model ModelID to
a new value. The origin is specified relative to the center of gravity of the domain (region) of the image that
was used to create the shape model with create_shape_model, create_scaled_shape_model, or
create_aniso_shape_model. Hence, an origin of (0,0) means that the center of gravity of the domain of
the model image is used as the origin. An origin of (-20,-40) means that the origin lies to the upper left of the
center of gravity.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong
Handle of the model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double
Row coordinate of the origin of the shape model.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double
Column coordinate of the origin of the shape model.
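Example (Syntax: C)
A minimal sketch that moves the origin above and to the left of the center of gravity, as in the example
given in the description; the model file name is a placeholder.

Hlong   model_id;
double  row, column;

read_shape_model ("model.shm", &model_id);          /* placeholder file name */
/* Origin 20 rows above and 40 columns to the left of the center of gravity. */
set_shape_model_origin (model_id, -20.0, -40.0);
get_shape_model_origin (model_id, &row, &column);   /* now returns (-20, -40) */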
Result
If the handle of the model is valid, the operator set_shape_model_origin returns the value H_MSG_TRUE.
If necessary an exception is raised.


Parallelization Information
set_shape_model_origin is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model
Possible Successors
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models,
get_shape_model_origin
See also
area_center
Module
Matching

write_shape_model ( Hlong ModelID, const char *FileName )


T_write_shape_model ( const Htuple ModelID, const Htuple FileName )

Write a shape model to a file.


The operator write_shape_model writes a shape model to the file FileName. The model can be read again
with read_shape_model.
Parameter

. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Hlong


Handle of the model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
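Example (Syntax: C)
A minimal sketch; the file name is a placeholder and the model is assumed to have been created earlier.

Hlong   model_id, model_id_read;

/* ... create the model with create_shape_model ... */
write_shape_model (model_id, "model.shm");          /* placeholder file name */
/* The model can later be restored with read_shape_model. */
read_shape_model ("model.shm", &model_id_read);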
Result
If the file name is valid (write permission), the operator write_shape_model returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
write_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model
Module
Matching




Chapter 8

Matching-3D

T_affine_trans_object_model_3d ( const Htuple ObjectModel3DID,
                                 const Htuple HomMat3D, Htuple *ObjectModel3DIDAffineTrans )

Apply an arbitrary affine 3D transformation to a 3D object model.


affine_trans_object_model_3d applies an arbitrary affine 3D transformation, i.e., scaling, rotation, and
translation, to a 3D object model and returns the handle of the transformed 3D object model. The affine transfor-
mation is described by the homogeneous transformation matrix given in HomMat3D.
The transformation matrix can be created using the operators hom_mat3d_identity, hom_mat3d_scale,
hom_mat3d_rotate, hom_mat3d_translate, etc., or be the result of pose_to_hom_mat3d (see
affine_trans_point_3d).
In general, the operator affine_trans_object_model_3d is not necessary in the context of 3D matching.
If a rotation of the 3D object model into a reference orientation should be performed, instead appropriate values
for the parameters RefRotX, RefRotY, RefRotZ, and OrderOfRotation should be passed to the operator
create_shape_model_3d.
Parameter

. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; Htuple . Hlong


Handle of the 3D object model.
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Transformation matrix.
. ObjectModel3DIDAffineTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong *
Handle of the transformed 3D object model.
Result
affine_trans_object_model_3d returns H_MSG_TRUE if all parameters are correct. If necessary, an
exception is raised.
Parallelization Information
affine_trans_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
Possible Successors
project_object_model_3d
See also
affine_trans_point_3d
Module
3D Metrology


clear_all_object_model_3d ( )
T_clear_all_object_model_3d ( )

Free the memory of all 3D object models.


The operator clear_all_object_model_3d frees the memory of all 3D object models that were created by
read_object_model_3d_dxf. After calling clear_all_object_model_3d, no model can be used
any longer.
Attention
clear_all_object_model_3d exists solely for the purpose of implementing the “reset program” function-
ality in HDevelop. clear_all_object_model_3d must not be used in any application.
Result
clear_all_object_model_3d always returns H_MSG_TRUE.
Parallelization Information
clear_all_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
Alternatives
clear_object_model_3d
Module
3D Metrology

clear_all_shape_model_3d ( )
T_clear_all_shape_model_3d ( )

Free the memory of all 3D shape models.


The operator clear_all_shape_model_3d frees the memory of all 3D shape models that were created
by create_shape_model_3d. After calling clear_all_shape_model_3d, no model can be used any
longer.
Attention
clear_all_shape_model_3d exists solely for the purpose of implementing the “reset program” functional-
ity in HDevelop. clear_all_shape_model_3d must not be used in any application.
Result
clear_all_shape_model_3d always returns H_MSG_TRUE.
Parallelization Information
clear_all_shape_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, write_shape_model_3d
Alternatives
clear_shape_model_3d
Module
3D Metrology

clear_object_model_3d ( Hlong ObjectModel3DID )


T_clear_object_model_3d ( const Htuple ObjectModel3DID )

Free the memory of a 3D object model.


The operator clear_object_model_3d frees the memory of a 3D object model that was created by
read_object_model_3d_dxf. After calling clear_object_model_3d, the model can no longer be
used. The handle ObjectModel3DID becomes invalid.
Parameter
. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; Hlong
Handle of the 3D object model.
Result
If the handle of the model is valid, the operator clear_object_model_3d returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
clear_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
See also
clear_all_object_model_3d
Module
3D Metrology

clear_shape_model_3d ( Hlong ShapeModel3DID )


T_clear_shape_model_3d ( const Htuple ShapeModel3DID )

Free the memory of a 3D shape model.


The operator clear_shape_model_3d frees the memory of a 3D shape model that was created by
create_shape_model_3d. After calling clear_shape_model_3d, the model can no longer be used.
The handle ShapeModel3DID becomes invalid.
Parameter
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Hlong
Handle of the 3D shape model.
Result
If the handle of the model is valid, the operator clear_shape_model_3d returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
clear_shape_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, write_shape_model_3d
See also
clear_all_shape_model_3d
Module
3D Metrology

convert_point_3d_cart_to_spher ( double X, double Y, double Z,
                                 const char *EquatPlaneNormal, const char *ZeroMeridian,
                                 double *Longitude, double *Latitude, double *Radius )

T_convert_point_3d_cart_to_spher ( const Htuple X, const Htuple Y,
                                   const Htuple Z, const Htuple EquatPlaneNormal,
                                   const Htuple ZeroMeridian, Htuple *Longitude, Htuple *Latitude,
                                   Htuple *Radius )

Convert Cartesian coordinates of a 3D point to spherical coordinates.


The operator convert_point_3d_cart_to_spher converts Cartesian coordinates of a 3D point, which
are given in X, Y, and Z, into spherical coordinates. The spherical coordinates are returned in Longitude,
Latitude, and Radius. The Longitude is returned in the range [−π, +π] while the Latitude is returned
in the range [−π/2, +π/2]. Furthermore, the latitude of the north pole is π/2, and hence, the latitude of the south
pole is −π/2.
The orientation of the spherical coordinate system with respect to the Cartesian coordinate system can be specified
with the parameters EquatPlaneNormal and ZeroMeridian.
EquatPlaneNormal determines the normal of the equatorial plane (longitude == 0) pointing to the north pole
(positive latitude) and may take the following values:

’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.

The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:

’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.

Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from spherical to Cartesian coordinates by using
convert_point_3d_spher_to_cart, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_cart_to_spher.
The operator convert_point_3d_cart_to_spher can be used, for example, to convert a given camera
position into spherical coordinates. If multiple camera positions are converted in this way, one obtains a pose range
(in spherical coordinates), which can be passed to create_shape_model_3d in order to create a 3D shape
model.
Parameter

. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double


X coordinate of the 3D point.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Y coordinate of the 3D point.
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Z coordinate of the 3D point.
. EquatPlaneNormal (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Normal vector of the equatorial plane (points to the north pole).
Default Value : "-y"
List of values : EquatPlaneNormal ∈ {"x", "y", "z", "-x", "-y", "-z"}
. ZeroMeridian (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Coordinate axis in the equatorial plane that points to the zero meridian.
Default Value : "-z"
List of values : ZeroMeridian ∈ {"x", "y", "z", "-x", "-y", "-z"}


. Longitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *


Longitude of the 3D point.
. Latitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Latitude of the 3D point.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Radius of the 3D point.
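Example (Syntax: C)
A minimal sketch that converts a Cartesian camera position into spherical coordinates, e.g., to derive a
pose range for create_shape_model_3d; the coordinate values are placeholders.

double  longitude, latitude, radius;

/* Camera position (0.1, -0.2, 0.3) in Cartesian coordinates. */
convert_point_3d_cart_to_spher (0.1, -0.2, 0.3, "-y", "-z",
                                &longitude, &latitude, &radius);
/* Use the same EquatPlaneNormal and ZeroMeridian values when converting
   back with convert_point_3d_spher_to_cart. */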
Result
If the parameters are valid, the operator convert_point_3d_cart_to_spher returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
convert_point_3d_cart_to_spher is reentrant and processed without parallelization.
Possible Successors
create_shape_model_3d, find_shape_model_3d
See also
convert_point_3d_spher_to_cart
Module
3D Metrology

convert_point_3d_spher_to_cart ( double Longitude, double Latitude,
                                 double Radius, const char *EquatPlaneNormal, const char *ZeroMeridian,
                                 double *X, double *Y, double *Z )

T_convert_point_3d_spher_to_cart ( const Htuple Longitude,
                                   const Htuple Latitude, const Htuple Radius,
                                   const Htuple EquatPlaneNormal, const Htuple ZeroMeridian, Htuple *X,
                                   Htuple *Y, Htuple *Z )

Convert spherical coordinates of a 3D point to Cartesian coordinates.


The operator convert_point_3d_spher_to_cart converts the spherical coordinates of a 3D point, which
are given in Longitude, Latitude, and Radius, into the Cartesian coordinates X, Y, and Z. The spherical
coordinates Longitude and Latitude must be specified in radians. Furthermore, the Latitude must be
within the range [−π/2, +π/2], where the latitude of the north pole is π/2, and hence, the latitude of the south
pole is −π/2.
The orientation of the spherical coordinate system with respect to the Cartesian coordinate system can be specified
with the parameters EquatPlaneNormal and ZeroMeridian.
EquatPlaneNormal determines the normal of the equatorial plane (longitude == 0) pointing to the north pole
(positive latitude) and may take the following values:

’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.

The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:

’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.


’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.

Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from Cartesian to spherical coordinates by using
convert_point_3d_cart_to_spher, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_spher_to_cart.
The operator convert_point_3d_spher_to_cart can be used, for example, to convert a camera position
that is given in spherical coordinates into Cartesian coordinates. The result can then be utilized to create a complete
camera pose by passing the Cartesian coordinates to create_cam_pose_look_at_point.
Parameter

. Longitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double


Longitude of the 3D point.
. Latitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Latitude of the 3D point.
Restriction : ((−pi/2) ≤ Latitude) ∧ (Latitude ≤ (pi/2))
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Radius of the 3D point.
. EquatPlaneNormal (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Normal vector of the equatorial plane (points to the north pole).
Default Value : "-y"
List of values : EquatPlaneNormal ∈ {"x", "y", "z", "-x", "-y", "-z"}
. ZeroMeridian (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Coordinate axis in the equatorial plane that points to the zero meridian.
Default Value : "-z"
List of values : ZeroMeridian ∈ {"x", "y", "z", "-x", "-y", "-z"}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Z coordinate of the 3D point.
Result
If the parameters are valid, the operator convert_point_3d_spher_to_cart returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
convert_point_3d_spher_to_cart is reentrant and processed without parallelization.
Possible Predecessors
get_shape_model_3d_params
See also
convert_point_3d_cart_to_spher
Module
3D Metrology

T_create_cam_pose_look_at_point ( const Htuple CamPosX,
                                  const Htuple CamPosY, const Htuple CamPosZ, const Htuple LookAtX,
                                  const Htuple LookAtY, const Htuple LookAtZ,
                                  const Htuple RefPlaneNormal, const Htuple CamRoll, Htuple *CamPose )

Create a 3D camera pose from camera center and viewing direction.


The operator create_cam_pose_look_at_point creates a 3D camera pose with respect to a world coor-
dinate system based on two points and the camera roll angle.
The first of the two points defines the position of the optical center of the camera in the world coordinate system,
i.e., the origin of the camera coordinate system. It is given by its three coordinates CamPosX, CamPosY, and
CamPosZ. The second of the two points defines the viewing direction of the camera. It represents the point in the
world coordinate system at which the camera is to look. It is also specified by its three coordinates LookAtX,
LookAtY, and LookAtZ. Consequently, the second point lies on the z axis of the camera coordinate system.
Finally, the remaining degree of freedom to be specified is a rotation of the camera around its z axis, i.e.,
the roll angle of the camera. To determine this rotation, the normal of a reference plane can be specified in
RefPlaneNormal, which defines the reference orientation of the camera. Finally, the camera roll angle can
be specified in CamRoll, which describes a rotation of the camera around its z axis with respect to its reference
orientation.
The reference plane can be seen as a plane in the world coordinate system that is parallel to the x axis of the
camera (in its reference orientation, i.e., with a roll angle of 0). In an alternative interpretation, the normal vector
of the reference plane projected onto the image plane points upwards, i.e., it is mapped to the negative y axis of the
camera coordinate system. The parameter RefPlaneNormal may take one of the following values:

’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.

Alternatively to the above values, an arbitrary normal vector can be specified in RefPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
create_cam_pose_look_at_point is particularly useful if a 3D object model or a 3D shape
model should be visualized from a certain camera position. In this case, the pose that is cre-
ated by create_cam_pose_look_at_point can be passed to project_object_model_3d or
project_shape_model_3d, respectively.
It is also possible to pass tuples of different length for different input parameters. In this case, internally the
maximum number of parameter values over all input control parameters is computed. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
Parameter
. CamPosX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
X coordinate of the optical center of the camera.
. CamPosY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Y coordinate of the optical center of the camera.
. CamPosZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Z coordinate of the optical center of the camera.
. LookAtX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
X coordinate of the 3D point to which the camera is directed.


. LookAtY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double


Y coordinate of the 3D point to which the camera is directed.
. LookAtZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Z coordinate of the 3D point to which the camera is directed.
. RefPlaneNormal (input_control) . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / double
Normal vector of the reference plane (points up).
Default Value : "-y"
List of values : RefPlaneNormal ∈ {"x", "y", "z", "-x", "-y", "-z"}
. CamRoll (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; Htuple . double
Camera roll angle.
Default Value : 0
. CamPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D camera pose.
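Example (Syntax: HDevelop)

The following lines are a minimal sketch of a typical use of create_cam_pose_look_at_point: a virtual
camera is placed at an arbitrary position, directed at the origin of the world coordinate system, and the resulting
pose is used to visualize a 3D object model. The DXF file name and the camera parameters are placeholders that
must be replaced by application-specific values.

read_object_model_3d_dxf (’my_part.dxf’, ’m’, [], [], ObjectModel3DID, DxfStatus)
CamParam := [0.01221,2791,7.3958e-6,7.4e-6,308.21,245.92,640,480]
* Place the virtual camera at (0.2,-0.4,0.5) and let it look at the origin.
create_cam_pose_look_at_point (0.2, -0.4, 0.5, 0, 0, 0, ’-y’, 0, CamPose)
project_object_model_3d (ModelContours, ObjectModel3DID, CamParam, CamPose,
                         ’true’, rad(30))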
Result
If the parameters are valid, the operator create_cam_pose_look_at_point returns the value
H_MSG_TRUE. If necessary an exception is raised. If the parameters are chosen such that the pose is not well
defined, the error 8940 is raised.
Parallelization Information
create_cam_pose_look_at_point is reentrant and processed without parallelization.
Possible Predecessors
convert_point_3d_spher_to_cart
Alternatives
create_pose
Module
3D Metrology

T_create_shape_model_3d ( const Htuple ObjectModel3DID,
const Htuple CamParam, const Htuple RefRotX, const Htuple RefRotY,
const Htuple RefRotZ, const Htuple OrderOfRotation,
const Htuple LongitudeMin, const Htuple LongitudeMax,
const Htuple LatitudeMin, const Htuple LatitudeMax,
const Htuple CamRollMin, const Htuple CamRollMax,
const Htuple DistMin, const Htuple DistMax, const Htuple MinContrast,
const Htuple GenParamNames, const Htuple GenParamValues,
Htuple *ShapeModel3DID )

Prepare a 3D object model for matching.


The operator create_shape_model_3d prepares a 3D object model, which is passed in
ObjectModel3DID, as a 3D shape model used for matching. The 3D object model must previously have
been read from a file by using read_object_model_3d_dxf.
The 3D shape model is generated by computing different views of the 3D object model within a user-specified
pose range. The views are automatically generated by placing virtual cameras around the 3D object model and
projecting the 3D object model into the image plane of each virtual camera position. For each such obtained view
a 2D shape representation is computed. Thus, for the generation of the 3D shape model, no images of the object
are used but only the 3D object model, which is passed in ObjectModel3DID. The shape representations of all
views are stored in the 3D shape model, which is returned in ShapeModel3DID. During the matching process
with find_shape_model_3d, the shape representations are used to find out the best-matching view, from
which the pose is subsequently refined and returned.
In order to create the model views correctly, the camera parameters of the camera that will be used for the
matching must be passed in CamParam. The camera parameters are necessary, for example, to determine
the scale of the projections by using the actual focal length of the camera. Furthermore, they are used to
treat radial distortions of the lens correctly. Consequently, it is essential to calibrate the camera by using
camera_calibration before creating the 3D shape model. On the one hand, this is necessary to obtain
accurate poses from find_shape_model_3d. On the other hand, this makes the 3D matching applicable even
when using lenses with significant radial distortions.
The pose range within which the model views are generated can be specified by the parameters RefRotX,
RefRotY, RefRotZ, OrderOfRotation, LongitudeMin, LongitudeMax, LatitudeMin,
LatitudeMax, CamRollMin, CamRollMax, DistMin, and DistMax. Note that the model will
only be recognized during the matching if it appears within the specified pose range. The parameters are described
in the following:
Before computing the views, the origin of the coordinate system of the 3D object model is moved to the refer-
ence point of the 3D object model, which is the center of the smallest enclosing axis-parallel cuboid and can be
queried by using get_object_model_3d_params. The virtual cameras, which are used to create the views,
are arranged around the 3D object model in such a way that they all look at the origin of the coordinate system,
i.e., the z axes of the cameras pass through the origin. The pose range can then be specified by restricting the
views to a certain quadrilateral on the sphere around the origin. This naturally leads to the use of the spheri-
cal coordinates longitude, latitude, and radius. The definition of the spherical coordinate system is chosen such
that the equatorial plane corresponds to the xz plane of the Cartesian coordinate system with the y axis pointing
to the south pole (negative latitude) and the negative z axis pointing in the direction of the zero meridian (see
convert_point_3d_spher_to_cart or convert_point_3d_cart_to_spher for further details
about the conversion between Cartesian and spherical coordinates). The advantage of this definition is that a cam-
era with the pose [0,0,z,0,0,0,0] has its optical center at longitude=0, latitude=0, and radius=z. In this case, the
radius represents the distance of the optical center of the camera to the reference point of the 3D object model.
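As a minimal sketch (HDevelop syntax) of this convention, the following lines compute the Cartesian position of
a virtual camera center from spherical coordinates and create a camera pose that looks at the coordinate system
origin. The parameter order of convert_point_3d_spher_to_cart and the values ’-y’ and ’-z’ (normal of
the equatorial plane and direction of the zero meridian for the convention described above) are assumptions that
should be checked against the documentation of that operator.

* Camera center at longitude 0.2 rad, latitude 0.3 rad, 0.4 m away from the origin.
convert_point_3d_spher_to_cart (0.2, 0.3, 0.4, ’-y’, ’-z’, CamPosX, CamPosY, CamPosZ)
create_cam_pose_look_at_point (CamPosX, CamPosY, CamPosZ, 0, 0, 0, ’-y’, 0, CamPose)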
The longitude range, for which views are to be generated, can be specified by LongitudeMin and
LongitudeMax, both given in radians. Accordingly, the latitude range can be specified by LatitudeMin
and LatitudeMax, also given in radians. The minimum and maximum distance between the camera cen-
ter and the model reference point is specified by DistMin and DistMax. Note that the unit of the distance
must be meters (assuming that the parameter Scale has been correctly set when reading the DXF file with
read_object_model_3d_dxf). Finally, the minimum and the maximum camera roll angle can be speci-
fied in CamRollMin and CamRollMax. This interval specifies the allowable camera rotation around its z axis
with respect to the 3D object model. If the image plane is parallel to the plane on which the objects reside and if it
is known that the object may rotate in this plane only in a restricted range, then it is reasonable to specify this range
in CamRollMin and CamRollMax. In all other cases the interpretation of the camera roll angle is difficult, and
hence, it is recommended to set this interval to [−π, +π]. Note that the larger the specified pose range is chosen,
the more memory the model will consume (except for the range of the camera roll angle) and the slower the
matching will be.
The orientation of the coordinate system of the 3D object model is defined by the coordinates within the DXF file
that was read by using read_object_model_3d_dxf. Therefore, it is reasonable to previously rotate the
3D object model into a reference orientation such that the view that corresponds to longitude=0 and latitude=0 is
approximately at the center of the pose range. This can be achieved by passing appropriate values for the reference
orientation in RefRotX, RefRotY, RefRotZ, and OrderOfRotation. The rotation is performed around the
axes of the 3D object model, whose origin was set to the reference point. The longitude and latitude range can then
be interpreted as a variation of the 3D object model pose around the reference orientation. There are two possible
ways to specify the reference orientation. The first possibility is to specify three rotation angles in RefRotX,
RefRotY, and RefRotZ and the order in which the three rotations are to be applied in OrderOfRotation,
which can either be ’gba’ or ’abg’. The second possibility is to specify the three components of the Rodriguez
rotation vector in RefRotX, RefRotY, and RefRotZ. In this case, OrderOfRotation must be set to
’rodriguez’ (see create_pose for detailed information about the order of the rotations and the definition of the
Rodriguez vector).
Thus, two transformations are applied to the 3D object model before computing the model views within the pose
range. The first transformation is the translation of the origin of the coordinate system to the reference point. The
second transformation is the rotation of the 3D object model to the desired reference orientation around the axes
of the reference coordinate system. By combining both transformations one obtains the reference pose of the 3D
shape model. The reference pose of the 3D shape model thus describes the pose of the reference coordinate system
with respect to the coordinate system of the 3D object model defined by the DXF file. Let t = (x, y, z)^T be the
coordinates of the reference point of the 3D object model and R be the rotation matrix containing the reference
orientation. Then, a point p_m given in the 3D object model coordinate system can be transformed to a point p_r in
the reference coordinate system of the 3D shape model by applying the following formula:
p_r = R · (p_m − t)
This transformation can be expressed by a homogeneous 3D transformation matrix or alternatively in terms of a 3D
pose. The latter can be queried by passing ’reference_pose’ for the parameter GenParamNames of the operator
get_shape_model_3d_params. The above formula can best be imagined as a pose of pose type 8, 10, or 12,
depending on the value that was chosen for OrderOfRotation (see create_pose for detailed information
about the different pose types). Note, however, that get_shape_model_3d_params always returns the pose
using the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the
other coordinate system by using trans_pose_shape_model_3d.
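The following lines are a minimal sketch (HDevelop syntax) of how the above formula can be reproduced with
homogeneous transformation matrices. To sidestep the question of the rotation order, only a rotation around the z
axis is assumed as reference orientation (RefRotX = RefRotY = 0, RefRotZ = 0.3); the model point coordinates
are placeholders.

* Query the reference point t of the 3D object model.
get_object_model_3d_params (ObjectModel3DID, ’reference_point’, ReferencePoint)
* Build R * (p - t): first translate by -t, then rotate around the z axis.
hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_translate (HomMat3DIdentity, -ReferencePoint[0], -ReferencePoint[1],
                     -ReferencePoint[2], HomMat3DT)
hom_mat3d_rotate (HomMat3DT, 0.3, ’z’, 0, 0, 0, HomMat3DRef)
* Transform the model point (0.05, 0.02, 0.0) into the reference coordinate system.
affine_trans_point_3d (HomMat3DRef, 0.05, 0.02, 0.0, Xr, Yr, Zr)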
With MinContrast, it can be determined which edge contrast the model must at least have in the recognition
performed by find_shape_model_3d. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine MinContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, MinContrast should be set to 17.
If the model should be recognized in very low contrast images, MinContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, MinContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by find_shape_model_3d.
The parameters described above are application-dependent and must always be specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not be specified but can be left at their default values.
If desired, these parameters and their corresponding values can be specified by using GenParamNames and
GenParamValues, respectively. The following values for GenParamNames are possible:
’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
this significantly reduces the time necessary to find the model. On the other hand, the number of
levels must be chosen such that the shape representations of the views on the highest pyramid level are
still recognizable and contain a sufficient number of points (at least four). If not enough model points are
generated for a certain view, the view is deleted from the model and replaced by a view on a lower pyramid
level. If too few model points are generated for all views on a pyramid level, the number of levels is
reduced internally until enough model points are found for at least one view on the highest pyramid level.
If this procedure leads to a model with no pyramid levels, i.e., if the number of model points is too
small for all views even on the lowest pyramid level, create_shape_model_3d returns an error
message. If ’num_levels’ is set to ’auto’ (default value), create_shape_model_3d determines the
number of pyramid levels automatically. In this case all model views on all pyramid levels are automatically
checked whether their shape representations are still recognizable. If the shape representation of a certain
view is found to be not recognizable, the view is deleted from the model and replaced by a view on a lower
pyramid level. Note that if ’num_levels’ is set to ’auto’, the number of pyramid levels can be different for
different views. In rare cases, it might happen that create_shape_model_3d determines a value for
the number of pyramid levels that is too large or too small. If the number of pyramid levels is chosen too
large, the model may not be recognized in the image or it may be necessary to select very low parameters
for MinScore or Greediness in find_shape_model_3d in order to find the model. If the number
of pyramid levels is chosen too small, the time required to find the model in find_shape_model_3d
may increase. In these cases, the views on the pyramid levels should be checked by using the output of
get_shape_model_3d_contours.
Suggested values: ’auto’, 3, 4, 5, 6
Default value: ’auto’
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’. If
the number of points is reduced, it may be necessary in find_shape_model_3d to set the parame-
ter Greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
create_shape_model_3d automatically determines the reduction of the number of model points for
each model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default value: ’auto’
’metric’: This parameter determines the conditions under which the model is recognized in the image. Currently,
only the metric ’ignore_segment_polarity’ is supported, which recognizes an object even if the contrast
changes locally.
List of values: ’ignore_segment_polarity’
’min_face_angle’: 3D edges are only included in the shape representations of the views if the angle between
the two 3D faces that are incident with the 3D object model edge is at least ’min_face_angle’. If
’min_face_angle’ is set to 0.0, all edges are included. If ’min_face_angle’ is set to π (equivalent to 180
degrees), only the silhouette of the 3D object model is included. This parameter can be used to suppress
edges within curved surfaces, e.g., the surface of a cylinder or cone. Curved surfaces are approximated by
multiple planar faces. The edges between such neighboring planar faces should not be included in the shape
representation because they also do not appear in real images of the model. Thus, ’min_face_angle’ should
be set sufficiently high to suppress these edges. The effect of different values for ’min_face_angle’ can be
inspected by using project_object_model_3d before calling create_shape_model_3d. Note
that if edges that are not visible in the search image are included in the shape representation, the performance
(robustness and speed) of the matching may decrease considerably.
Suggested values: rad(10), rad(20), rad(30), rad(45)
Default value: rad(15)
’min_size’: This value determines a threshold for the selection of significant model components based on the size
of the components, i.e., connected components that have fewer points than the specified minimum size are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level.
Suggested values: ’auto’, 0, 3, 5, 10, 20
Default value: ’auto’
’model_tolerance’: The parameter specifies the tolerance of the projected 3D object model edges in the image,
given in pixels. The higher the value is chosen, the fewer views need to be generated. Consequently, a higher
value results in models that are less memory consuming and faster to find with find_shape_model_3d.
On the other hand, if the value is chosen too high, the robustness of the matching will decrease. Therefore,
this parameter should only be modified with care. For most applications, a good compromise between speed
and robustness is obtained when setting ’model_tolerance’ to 1.
Suggested values: 0, 1, 2
Default value: 1
Parameter
. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; Htuple . Hlong
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : CamParam = 8
. RefRotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without
unit).
Default Value : 0
Suggested values : RefRotX ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. RefRotY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without
unit).
Default Value : 0
Suggested values : RefRotY ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. RefRotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without
unit).
Default Value : 0
Suggested values : RefRotZ ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. OrderOfRotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Meaning of the rotation values of the reference orientation.
Default Value : "gba"
List of values : OrderOfRotation ∈ {"gba", "abg", "rodriguez"}
. LongitudeMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Minimum longitude of the model views.
Default Value : -0.35
Suggested values : LongitudeMin ∈ {-0.78, -0.35, -0.17}
. LongitudeMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Maximum longitude of the model views.
Default Value : 0.35
Suggested values : LongitudeMax ∈ {0.17, 0.35, 0.78}
Restriction : LongitudeMax ≥ LongitudeMin
. LatitudeMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Minimum latitude of the model views.
Default Value : -0.35
Suggested values : LatitudeMin ∈ {-0.78, -0.35, -0.17}
Restriction : (−pi ≤ LatitudeMin) ∧ (LatitudeMin ≤ pi)
. LatitudeMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Maximum latitude of the model views.
Default Value : 0.35
Suggested values : LatitudeMax ∈ {0.17, 0.35, 0.78}
Restriction : ((−pi ≤ LatitudeMax) ∧ (LatitudeMax ≤ pi)) ∧ (LatitudeMax ≥ LatitudeMin)
. CamRollMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Minimum camera roll angle of the model views.
Default Value : -3.1416
Suggested values : CamRollMin ∈ {-3.14, -1.57, -0.39, 0.0, 0.39, 1.57, 3.14}
. CamRollMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Maximum camera roll angle of the model views.
Default Value : 3.1416
Suggested values : CamRollMax ∈ {-3.14, -1.57, -0.39, 0.0, 0.39, 1.57, 3.14}
Restriction : CamRollMax ≥ CamRollMin
. DistMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum camera-object-distance of the model views.
Default Value : 0.3
Suggested values : DistMin ∈ {0.05, 0.1, 0.2, 0.5}
Restriction : DistMin > 0
. DistMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum camera-object-distance of the model views.
Default Value : 0.4
Suggested values : DistMax ∈ {0.1, 0.2, 0.5, 1.0}
Restriction : DistMax ≥ DistMin
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Minimum contrast of the objects in the search images.
Default Value : 10
Suggested values : MinContrast ∈ {1, 2, 3, 5, 7, 10, 20, 30, 1000, 2000, 5000}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; Htuple . const char *
Names of (optional) parameters for controlling the behavior of the operator.
Default Value : []
List of values : GenParamNames ∈ {"num_levels", "optimization", "metric", "min_face_angle",
"min_size", "model_tolerance"}
. GenParamValues (input_control) . . . . . . . attribute.name(-array) ; Htuple . Hlong / double / const char *
Values of the optional generic parameters.
Default Value : []
Suggested values : GenParamValues ∈ {0, 1, 2, 3, "auto", "none", "point_reduction_low",
"point_reduction_medium", "point_reduction_high", 0.1, 0.2, 0.3, "ignore_segment_polarity"}
. ShapeModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong *
Handle of the 3D shape model.
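Example (Syntax: HDevelop)

The following lines are a minimal sketch that also shows how the generic parameters described above can be
passed. The DXF file name, the camera parameters, and the pose range are placeholders.

read_object_model_3d_dxf (’my_part.dxf’, ’m’, [], [], ObjectModel3DID, DxfStatus)
CamParam := [0.01221,2791,7.3958e-6,7.4e-6,308.21,245.92,640,480]
* Suppress edges between the planar faces that approximate curved surfaces and
* allow a slightly coarser sampling of the views.
create_shape_model_3d (ObjectModel3DID, CamParam, 0, 0, 0, ’gba’,
                       -rad(30), rad(30), -rad(30), rad(30),
                       -rad(180), rad(180), 0.25, 0.35, 10,
                       [’min_face_angle’,’model_tolerance’], [rad(30),2],
                       ShapeModel3DID)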
Result
If the parameters are valid, the operator create_shape_model_3d returns the value H_MSG_TRUE. If
necessary an exception is raised. If the parameters are chosen such that all model views contain too few points, the
error 8510 is raised. In the case that the projected model is bigger than twice the image size in at least one model
view, the error 8910 is raised.
Parallelization Information
create_shape_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf, project_object_model_3d, get_object_model_3d_params
Possible Successors
find_shape_model_3d, write_shape_model_3d, project_shape_model_3d,
get_shape_model_3d_params, get_shape_model_3d_contours
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

T_find_shape_model_3d ( const Hobject Image,
const Htuple ShapeModel3DID, const Htuple MinScore,
const Htuple Greediness, const Htuple NumLevels,
const Htuple GenParamNames, const Htuple GenParamValues, Htuple *Pose,
Htuple *CovPose, Htuple *Score )

Find the best matches of a 3D shape model in an image.


The operator find_shape_model_3d finds the best matches of the 3D shape model ShapeModel3DID
in the input Image. The 3D shape model must have been created previously by calling
create_shape_model_3d or read_shape_model_3d.
The 3D pose of the found instances of the model is returned in Pose. It describes the pose of the 3D object model
in the camera coordinate system. If a pose refinement was applied (see below), the accuracies of the six pose
parameters are additionally returned in CovPose. By default, CovPose contains the 6 standard deviations of the
pose parameters for each match. In contrast, if the generic parameter ’cov_pose_mode’ (see below) was set to
’covariances’, CovPose contains the 36 values of the complete 6 × 6 covariance matrix of the 6 pose parameters.
Note that this reflects only an inner accuracy from which the real accuracy of the pose may differ. Finally, the score
of each found instance is returned in Score. The score is a number between 0 and 1, which is an approximate
measure of how much of the model is visible in the image. If, for example, half of the model is occluded, the score
cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the 3D object model.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. Note that in images with a
high degree of clutter or strong background texture, MinScore should be set to a value not much lower than 0.7
since otherwise false matches could be found.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search
will be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which
may cause the model not to be found in rare cases, even though it is visible in the image. For Greediness =
1, the maximum search speed is achieved. In almost all cases, the 3D shape model will always be found for
Greediness = 0.9.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the 3D shape model was created with create_shape_model_3d.
If NumLevels is set to 0, the number of pyramid levels specified in create_shape_model_3d is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. If the lowest pyramid level to use is
chosen too large, it may happen that the desired accuracy cannot be achieved, or that wrong instances of the model
are found because the model is not specific enough on the higher pyramid levels to facilitate a reliable selection of
the correct instance of the model. In this case, the lowest pyramid level to use must be set to a smaller value.
In addition to the parameters described above, there are some generic parameters that can optionally be used to in-
fluence the matching. For most applications these parameters need not be specified but can be left at their default
values. If desired, these parameters and their corresponding values can be specified by using GenParamNames
and GenParamValues, respectively. The following values for GenParamNames are possible:

• If the pose range in which the model is to be searched is smaller than the pose range that was specified during
the model creation with create_shape_model_3d, the pose range can be restricted appropriately with
the following parameters. If the values lie outside the pose range of the model, the values are automatically
clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-90)
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(90)
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: (∞)
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than MinScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter MinScore takes precedence over ’num_matches’. If
’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the more
matches are to be found, the slower the matching will be.
Suggested values: 0, 1, 2, 3
Default value: 1
’max_overlap’: It may happen that multiple instances with similar positions but with different orientations
are found in the image. The parameter ’max_overlap’ determines by what fraction (i.e., a number be-
tween 0 and 1) two instances may at most overlap in order to consider them as different instances, and
hence to be returned separately. If two instances overlap each other by more than the specified value only
the best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle
of arbitrary orientation (see smallest_rectangle2) of the found instances. If ’max_overlap’ = 0,
the found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default value: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined after
the matching. If ’pose_refinement’ is set to ’none’, the model’s pose is only determined with a limited
accuracy. In this case, the accuracy depends on several sampling steps that are used inside the match-
ing process and, therefore, cannot be predicted very well. Hence, ’pose_refinement’ should only be
set to ’none’ when the computation time is of primary concern and an approximate pose is sufficient.
In all other cases the pose should be determined through a least-squares adjustment, i.e., by minimiz-
ing the distances of the model points to their corresponding image points. In order to achieve a high
accuracy, this refinement is directly performed in 3D. Therefore, the refinement requires additional com-
putation time. The different modes for least-squares adjustment (’least_squares’, ’least_squares_high’,
and ’least_squares_very_high’) can be used to determine the accuracy with which the minimum distance
is searched for. The higher the accuracy is chosen, the longer the pose refinement will take, however.
For most applications ’least_squares_high’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default value: ’least_squares_high’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary, for
example, if a high degree of clutter is present in the image, which prevents the least-squares adjustment
from finding the optimum pose. In this case, ’outlier_suppression’ should be set to either ’medium’
(eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion of outliers). How-
ever, in most applications, no robust outlier suppression is necessary, and hence, ’outlier_suppression’ can
be set to ’none’. It should be noted that activating the outlier suppression significantly increases the
computation time.
List of values: ’none’, ’medium’, ’high’
Default value: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than ’none’,
and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode in which
the accuracies that are computed during the least-squares adjustment are returned in CovPose. If
’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the 6 pose parameters
are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covariances’, CovPose contains
the 36 values of the complete 6 × 6 covariance matrix of the 6 pose parameters.
List of values: ’standard_deviations’, ’covariances’
Default value: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the model
lies completely within the image. This means that the model will not be found if it extends beyond
the borders of the image, even if it would achieve a score greater than MinScore. This behavior can
be changed by setting ’border_model’ to ’true’, which will cause models that extend beyond the image
border to be found if they achieve a score greater than MinScore. Here, points lying outside the image
are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the
search will increase in this mode.
List of values: ’false’, ’true’
Default value: ’false’
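The following call is a minimal sketch (HDevelop syntax) of how the generic parameters listed above can be
combined; Image and ShapeModel3DID are assumed to be available, and the pose range values are placeholders.
The longitude range is restricted at search time, up to two instances are accepted, and the full covariance matrices
are requested.

find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0,
                     [’longitude_min’,’longitude_max’,’num_matches’,’cov_pose_mode’],
                     [-rad(10),rad(10),2,’covariances’], Pose, CovPose, Score)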

Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the model should be found.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong
Handle of the 3D shape model.
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.7
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default Value : 0.9
Suggested values : Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ Greediness ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default Value : 0
List of values : NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; Htuple . const char *
Names of (optional) parameters for controlling the behavior of the operator.
Default Value : []
List of values : GenParamNames ∈ {"longitude_min", "longitude_max", "latitude_min", "latitude_max",
"cam_roll_min", "cam_roll_max", "dist_min", "dist_max", "num_matches", "max_overlap",
"pose_refinement", "cov_pose_mode", "outlier_suppression", "border_model"}
. GenParamValues (input_control) . . . . . . . attribute.name(-array) ; Htuple . Hlong / double / const char *
Values of the optional generic parameters.
Default Value : []
Suggested values : GenParamValues ∈ {-0.78, -0.35, -0.17, 0.0, 0.17, 0.35, 0.78, 0.1, 0.2, 0.5, "none",
"false", "true", "least_squares", "least_squares_high", "least_squares_very_high", "standard_deviations",
"covariances", "medium", "high"}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose of the 3D shape model.
. CovPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
6 standard deviations or 36 covariances of the pose parameters.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Score of the found instances of the 3D shape model.
Example (Syntax: HDevelop)

read_object_model_3d_dxf (DXFModelFileName, ’m’, [], [], ObjectModel3DID,
DxfStatus)
CamParam := [0.01221,2791,7.3958e-6,7.4e-6,308.21,245.92,640,480]
create_shape_model_3d (ObjectModel3DID, CamParam, 0, 0, 0, ’gba’,
-rad(20), rad(20), -rad(20), rad(20), 0,
rad(360), 0.15, 0.2, 10, [], [], ShapeModel3DID)
grab_image_async (Image, FGHandle, -1)
find_shape_model_3d (Image, ShapeModel3DID, 0.6, 0.9, 0, [], [],
Pose, CovPose, Score)
project_shape_model_3d (ModelContours, ShapeModel3DID, CamParam,
Pose, ’true’, rad(15))

Result
If the parameter values are correct, the operator find_shape_model_3d returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
project_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

get_object_model_3d_params ( Hlong ObjectModel3DID,
const char *GenParamNames, char *GenParamValues )

T_get_object_model_3d_params ( const Htuple ObjectModel3DID,
const Htuple GenParamNames, Htuple *GenParamValues )

Return the parameters of a 3D object model.


The operator get_object_model_3d_params allows parameters of the 3D object model to be queried. The
names of the desired parameters are passed in the generic parameter GenParamNames, the corresponding values
are returned in GenParamValues.
The following parameters can be queried:

’reference_point’: 3D coordinates of the reference point of the model. The reference point is the center of the
smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).

Parameter

. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; (Htuple .) Hlong
Handle of the 3D object model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that are to be queried for the 3D object model.
Default Value : "reference_point"
List of values : GenParamNames ∈ {"reference_point", "bounding_box1"}
. GenParamValues (output_control) . . . . . . . attribute.name(-array) ; (Htuple .) char * / Hlong * / double *
Values of the generic parameters.
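Example (Syntax: HDevelop)

A minimal sketch in which the two parameters are queried in separate calls; the extent of the model along the x
axis is derived from the documented order of the bounding box values.

get_object_model_3d_params (ObjectModel3DID, ’reference_point’, ReferencePoint)
get_object_model_3d_params (ObjectModel3DID, ’bounding_box1’, BoundingBox1)
* Extent of the model along the x axis (max_x - min_x).
ExtentX := BoundingBox1[3] - BoundingBox1[0]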
Result
The operator get_object_model_3d_params returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
get_object_model_3d_params is reentrant and processed without parallelization.
Possible Predecessors
read_object_model_3d_dxf
Possible Successors
affine_trans_object_model_3d
Module
3D Metrology

T_get_shape_model_3d_contours ( Hobject *ModelContours,
const Htuple ShapeModel3DID, const Htuple Level, const Htuple View,
Htuple *ViewPose )

Return the contour representation of a 3D shape model view.


The operator get_shape_model_3d_contours returns a representation of a single model view of the 3D
shape model ShapeModel3DID as XLD contours in ModelContours. The parameters Level and View
determine for which model view the contour representation should be returned, where Level denotes the pyramid
level and View denotes the model view on this pyramid level.
The permitted range of values for Level and View can be determined beforehand by using the operator
get_shape_model_3d_params and passing ’num_views_per_level’ for GenParamNames.
The contours can be used to visualize and rate the 3D shape model that was created with
create_shape_model_3d. With this it is possible, for example, to decide whether the number of pyra-
mid levels in the model is appropriate or not. If the contours on the highest pyramid do not show enough de-
tails to be representative for the model view, the number of pyramid levels that are used during the search with
find_shape_model_3d should be adjusted downwards. In contrast, if the contours show too many details
even on the highest pyramid level, a higher number of pyramid levels should be chosen already during the creation
of the 3D shape model by using create_shape_model_3d.
Additionally, the pose of the selected view is returned in ViewPose. It can be used, for example, to project the
3D shape model according to the view pose by using project_shape_model_3d. The rating of the model
contours that was described above can then be performed by comparing the ModelContours to the projected
model. Note that the position of the contours of the projection and the position of the model contours may slightly
differ because of radial distortions.
Parameter

. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Contour representation of the model view.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong
Handle of the 3D shape model.
. Level (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Pyramid level for which the contour representation should be returned.
Default Value : 1
Suggested values : Level ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : Level ≥ 1
. View (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
View for which the contour representation should be returned.
Default Value : 1
Suggested values : View ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : View ≥ 1
. ViewPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose of the 3D shape model at the current view.
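Example (Syntax: HDevelop)

A minimal sketch that displays all views on the highest pyramid level; it assumes that the first element of
’num_views_per_level’ refers to pyramid level 1.

get_shape_model_3d_params (ShapeModel3DID, ’num_levels_max’, NumLevelsMax)
get_shape_model_3d_params (ShapeModel3DID, ’num_views_per_level’, NumViewsPerLevel)
NumViewsTop := NumViewsPerLevel[NumLevelsMax-1]
for View := 1 to NumViewsTop by 1
  get_shape_model_3d_contours (ModelContours, ShapeModel3DID, NumLevelsMax, View,
                               ViewPose)
  dev_display (ModelContours)
endfor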
Result
If the parameters are valid, the operator get_shape_model_3d_contours returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
get_shape_model_3d_contours is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params
Possible Successors
create_shape_model_3d
Module
3D Metrology

get_shape_model_3d_params ( Hlong ShapeModel3DID,
const char *GenParamNames, char *GenParamValues )

T_get_shape_model_3d_params ( const Htuple ShapeModel3DID,
const Htuple GenParamNames, Htuple *GenParamValues )

Return the parameters of a 3D shape model.


The operator get_shape_model_3d_params allows parameters of the 3D shape model to be queried. The names
of the desired parameters are passed in the generic parameter GenParamNames, the corresponding values are
returned in GenParamValues.
The following parameters can be queried:

’cam_param’: Interior parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level the number of views
that are stored in the 3D shape model is returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’.
’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose describes the pose
of the internally used reference coordinate system of the 3D shape model with respect to the coordinate
system that is used in the underlying 3D object model.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].

A detailed description of the parameters can be looked up with the operator create_shape_model_3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to GenParamNames. As a result a tuple of the same length with the correspond-
ing values is returned in GenParamValues. Note that this is solely possible for parameters that return only a
single value.
Parameter

. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; (Htuple .) Hlong
Handle of the 3D shape model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that are to be queried for the 3D shape model.
Default Value : "num_levels_max"
List of values : GenParamNames ∈ {"cam_param", "ref_rot_x", "ref_rot_y", "ref_rot_z",
"order_of_rotation", "longitude_min", "longitude_max", "latitude_min", "latitude_max", "cam_roll_min",
"cam_roll_max", "dist_min", "dist_max", "min_contrast", "num_levels", "num_levels_max", "optimization",
"metric", "min_face_angle", "min_size", "model_tolerance", "num_views_per_level", "reference_pose",
"reference_point", "bounding_box1"}
. GenParamValues (output_control) . . . . . . . attribute.name(-array) ; (Htuple .) char * / Hlong * / double *
Values of the generic parameters.
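Example (Syntax: HDevelop)

A minimal sketch that queries several single-valued parameters in one call and a multi-valued parameter in a
separate call.

get_shape_model_3d_params (ShapeModel3DID, [’dist_min’,’dist_max’,’min_contrast’],
                           Values)
get_shape_model_3d_params (ShapeModel3DID, ’reference_pose’, ReferencePose)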
Result
If the parameters are valid, the operator get_shape_model_3d_params returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_shape_model_3d_params is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
find_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

T_project_object_model_3d ( Hobject *ModelContours,
const Htuple ObjectModel3DID, const Htuple CamParam,
const Htuple Pose, const Htuple HiddenSurfaceRemoval,
const Htuple MinFaceAngle )

Project the edges of a 3D object model into image coordinates.


The operator project_object_model_3d projects the edges of a 3D object model into the image coordi-
nate system and returns the projected edges in ModelContours. The coordinates of the 3D object model are
given in the 3D world coordinate system. First, they are transformed into the camera coordinate system using the
given Pose. Then, these coordinates are projected into the image coordinate system based on the interior camera
parameters CamParam.
The interior camera parameters CamParam describe the projection characteristics of the camera (see
write_cam_par). The Pose describes the position and orientation of the world coordinate system with re-
spect to the camera coordinate system.
The parameter HiddenSurfaceRemoval can be used to switch on or to switch off the removal of hidden
surfaces. If HiddenSurfaceRemoval is set to ’true’, only those projected edges are returned that are not
hidden by faces of the 3D object model. If HiddenSurfaceRemoval is set to ’false’, all projected edges are
returned. This is faster than a projection with HiddenSurfaceRemoval set to ’true’.
3D edges are only projected if the angle between the two 3D faces that are incident with the 3D edge is at least
MinFaceAngle. If MinFaceAngle is set to 0.0, all edges are projected. If MinFaceAngle is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object model is returned. This parameter can be used to
suppress edges within curved surfaces, e.g., the surface of a cylinder or cone.
Parameter
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Projected model contours.
. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; Htuple . Hlong
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
. HiddenSurfaceRemoval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Remove hidden surfaces?
Default Value : "true"
List of values : HiddenSurfaceRemoval ∈ {"true", "false"}
. MinFaceAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Smallest face angle for which the edge is projected.
Default Value : 0.261799
Suggested values : MinFaceAngle ∈ {0.17, 0.26, 0.35}
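Example (Syntax: HDevelop)

A minimal sketch that compares the effect of different values for MinFaceAngle; the DXF file name, the
camera parameters, and the pose are placeholders.

read_object_model_3d_dxf (’my_part.dxf’, ’m’, [], [], ObjectModel3DID, DxfStatus)
CamParam := [0.01221,2791,7.3958e-6,7.4e-6,308.21,245.92,640,480]
create_pose (0.05, -0.02, 0.35, 20, 10, 350, ’Rp+T’, ’gba’, ’point’, Pose)
* All edges, including those between faces that approximate curved surfaces.
project_object_model_3d (ContoursAll, ObjectModel3DID, CamParam, Pose, ’true’, 0.0)
* Edges within curved surfaces suppressed.
project_object_model_3d (ContoursReduced, ObjectModel3DID, CamParam, Pose, ’true’,
                         rad(30))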
Result
project_object_model_3d returns H_MSG_TRUE if all parameters are correct. If necessary, an exception
is raised.
Parallelization Information
project_object_model_3d is reentrant and processed without parallelization.
Possible Predecessors
read_object_model_3d_dxf, affine_trans_object_model_3d
Possible Successors
clear_object_model_3d
See also
project_shape_model_3d
Module
3D Metrology

T_project_shape_model_3d ( Hobject *ModelContours,
const Htuple ShapeModel3DID, const Htuple CamParam, const Htuple Pose,
const Htuple HiddenSurfaceRemoval, const Htuple MinFaceAngle )

Project the edges of a 3D shape model into image coordinates.


The operator project_shape_model_3d projects the edges of the 3D object model that was used to cre-
ate the 3D shape model ShapeModel3DID into the image coordinate system and returns the projected edges
in ModelContours. The coordinates of the 3D object model are given in the 3D world coordinate system.
First, they are transformed into the camera coordinate system using the given Pose. Then, these coordinates are
projected into the image coordinate system based on the interior camera parameters CamParam.
The interior camera parameters CamParam describe the projection characteristics of the camera (see
write_cam_par). The Pose describes the position and orientation of the world coordinate system with re-
spect to the camera coordinate system.
The parameter HiddenSurfaceRemoval can be used to switch on or to switch off the removal of hidden
surfaces. If HiddenSurfaceRemoval is set to ’true’, only those projected edges are returned that are not
hidden by faces of the 3D object model. If HiddenSurfaceRemoval is set to ’false’, all projected edges are
returned. This is faster than a projection with HiddenSurfaceRemoval set to ’true’.
3D edges are only projected if the angle between the two 3D faces that are incident with the 3D edge is at least
MinFaceAngle. If MinFaceAngle is set to 0.0, all edges are projected. If MinFaceAngle is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object model is returned. This parameter can be used to
suppress edges within curved surfaces, e.g., the surface of a cylinder.
project_shape_model_3d and project_object_model_3d return the same result if the 3D object
model that was used to create the 3D shape model is passed to project_object_model_3d.
project_shape_model_3d is especially useful in order to visualize the matches that are returned by
find_shape_model_3d in the case that the underlying 3D object model is no longer available.
Parameter
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Contour representation of the model view.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong
Handle of the 3D shape model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : CamParam = 8
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the 3D shape model in the world coordinate system.
. HiddenSurfaceRemoval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Remove hidden surfaces?
Default Value : "true"
List of values : HiddenSurfaceRemoval ∈ {"true", "false"}
. MinFaceAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Smallest face angle for which the edge is displayed.
Default Value : 0.261799
Suggested values : MinFaceAngle ∈ {0.17, 0.26, 0.35}
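Example (Syntax: HDevelop)

A minimal sketch that overlays the best match returned by find_shape_model_3d; Image, ShapeModel3DID,
and CamParam are assumed to be available, and CamParam must be the camera parameters that were used when
the 3D shape model was created.

find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0, [], [], Pose, CovPose, Score)
if (|Score| > 0)
  project_shape_model_3d (MatchContours, ShapeModel3DID, CamParam, Pose, ’true’,
                          rad(30))
  dev_display (Image)
  dev_display (MatchContours)
endif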
Result
If the parameters are valid, the operator project_shape_model_3d returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
project_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params,
find_shape_model_3d
Alternatives
project_object_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

read_object_model_3d_dxf ( const char *FileName, const char *Scale,
const char *GenParamNames, double GenParamValues,
Hlong *ObjectModel3DID, char *DxfStatus )

T_read_object_model_3d_dxf ( const Htuple FileName,
const Htuple Scale, const Htuple GenParamNames,
const Htuple GenParamValues, Htuple *ObjectModel3DID,
Htuple *DxfStatus )

Read a 3D object model from a DXF file.


The operator read_object_model_3d_dxf reads the contents of the DXF file FileName (DXF version
AC1009, AutoCAD Release 12) and converts them to a 3D object model. The handle of the 3D object model
is returned in ObjectModel3DID. If no absolute path is given in FileName, the DXF file is searched in the
current directory of the HALCON process.
The output parameter DxfStatus contains information about the number of 3D faces that were read and, if
necessary, warnings that parts of the DXF file could not be interpreted.
The operator read_object_model_3d_dxf supports the following DXF entities:

• POLYLINE
– Polyface meshes
• 3DFACE
• LINE
• CIRCLE
• ARC
• ELLIPSE
• SOLID
• BLOCK
• INSERT

Two-dimensional linear elements like the DXF elements CIRCLE or ELLIPSE are interpreted as faces even if they
are not extruded. If necessary, they are closed. Two-dimensional linear elements that consist of just two points are
not used because they do not define a face. Thus, elements of the type LINE are only used if they are extruded.


The curved surface of extruded DXF entities of the type CIRCLE, ARC, and ELLIPSE is approximated by planar
faces. The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’
and ’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points
that are used for the approximation of the DXF element CIRCLE, ARC, or ELLIPSE. Note that the parameter
’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if
’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-
circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum
deviation of the XLD contour from the ideal circle or ellipse, respectively. The determination of this deviation
is carried out in the units used in the DXF file. For the determination of the accuracy of the approximation both
criteria are evaluated. Then, the criterion that leads to the more accurate approximation is used.
Internally, the following default values are used for the generic parameters:

’min_num_points’ = 20
’max_approx_error’ = 0.25

To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
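For example, a tighter approximation of curved surfaces can be requested by decreasing ’max_approx_error’. The following minimal sketch uses the simple-mode interface, in which a single generic parameter can be passed directly; the file name "part.dxf", the unit "mm", and the buffer size are only illustrative:

Hlong ObjectModel3DID;
char  DxfStatus[1024];          /* buffer for the status information */

/* read the triangulated CAD model with a maximum approximation error
   of 0.1 (in the units of the DXF file) for curved surfaces */
read_object_model_3d_dxf("part.dxf","mm","max_approx_error",0.1,
                         &ObjectModel3DID,DxfStatus);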
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modelled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modelling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:

• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.

Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *


Name of the DXF file
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) const char * / Hlong / double
Scale or unit.
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"min_num_points", "max_approx_error"}
. GenParamValues (input_control) . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong / const char *
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
. ObjectModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; (Htuple .) Hlong *
Handle of the read 3D object model.
. DxfStatus (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Status information.


Result
read_object_model_3d_dxf returns H_MSG_TRUE if all parameters are correct. If necessary, an excep-
tion is raised.
Parallelization Information
read_object_model_3d_dxf is processed completely exclusively without parallelization.
Possible Successors
affine_trans_object_model_3d, project_object_model_3d
Module
3D Metrology

read_shape_model_3d ( const char *FileName, Hlong *ShapeModel3DID )


T_read_shape_model_3d ( const Htuple FileName,
Htuple *ShapeModel3DID )

Read a 3D shape model from a file.


The operator read_shape_model_3d reads a 3D shape model, which has been written with
write_shape_model_3d, from the file FileName.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ShapeModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Hlong *
Handle of the 3D shape model.
Result
If the file name is valid, the operator read_shape_model_3d returns the value H_MSG_TRUE. If necessary
an exception is raised.
Parallelization Information
read_shape_model_3d is processed completely exclusively without parallelization.
Possible Successors
find_shape_model_3d, get_shape_model_3d_params
See also
create_shape_model_3d, clear_shape_model_3d
Module
3D Metrology

T_trans_pose_shape_model_3d ( const Htuple ShapeModel3DID,
const Htuple PoseIn, const Htuple Transformation, Htuple *PoseOut )

Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator trans_pose_shape_model_3d transforms the pose PoseIn into the pose PoseOut by using
the transformation direction specified in Transformation. In the majority of cases, the operator will be used
to transform a camera pose that is given with respect to the source coordinate system to a camera pose that refers
to the target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference coordi-
nate system of the 3D shape model that is passed in ShapeModel3DID. The origin of the reference coordinate
system lies at the reference point of the underlying 3D object model. The orientation of the reference coordi-
nate system is determined by the reference orientation that was specified when creating the 3D shape model with
create_shape_model_3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that are
stored in the DXF file that was read by using read_object_model_3d_dxf.


If Transformation is set to ’ref_to_model’, it is assumed that PoseIn refers to the reference coordinate
system of the 3D shape model. The resulting output pose PoseOut in this case refers to the coordinate system of
the 3D object model.
If Transformation is set to ’model_to_ref’, it is assumed that PoseIn refers to the coordinate system of the
3D object model. The resulting output pose PoseOut in this case refers to the reference coordinate system of the
3D shape model.
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for GenParamNames
in the operator get_shape_model_3d_params.
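A minimal sketch of the typical use, assuming that ShapeModel3DID and the camera pose PoseRef were obtained from create_shape_model_3d and find_shape_model_3d, and assuming the usual HALCON/C tuple macros (create_tuple, set_s, destroy_tuple):

Htuple ShapeModel3DID, PoseRef, Transformation, PoseModel;

/* ... ShapeModel3DID and PoseRef are assumed to be filled by
   create_shape_model_3d and find_shape_model_3d ... */

create_tuple(&Transformation,1);
set_s(Transformation,"ref_to_model",0);
/* convert the camera pose from the reference coordinate system of the
   3D shape model to the coordinate system of the 3D object model */
T_trans_pose_shape_model_3d(ShapeModel3DID,PoseRef,Transformation,
                            &PoseModel);
destroy_tuple(Transformation);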
Parameter
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong
Handle of the 3D shape model.
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Pose to be transformed in the source system.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Direction of the transformation.
Default Value : "ref_to_model"
List of values : Transformation ∈ {"ref_to_model", "model_to_ref"}
. PoseOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Transformed 3D pose in the target system.
Result
If the parameters are valid, the operator trans_pose_shape_model_3d returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
trans_pose_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
find_shape_model_3d
Alternatives
hom_mat3d_translate, hom_mat3d_rotate
Module
3D Metrology

write_shape_model_3d ( Hlong ShapeModel3DID, const char *FileName )


T_write_shape_model_3d ( const Htuple ShapeModel3DID,
const Htuple FileName )

Write a 3D shape model to a file.


The operator write_shape_model_3d writes a 3D shape model to the file FileName. The model can be
read again with read_shape_model_3d.
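For example, a model can be saved after creation and reloaded in a later session (the file name "cylinder.sm3" is only illustrative):

Hlong ShapeModel3DID, ReusedModelID;

/* ... ShapeModel3DID is assumed to come from create_shape_model_3d ... */
write_shape_model_3d(ShapeModel3DID,"cylinder.sm3");

/* in a later session the model can be reloaded and used for matching */
read_shape_model_3d("cylinder.sm3",&ReusedModelID);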
Parameter
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Hlong
Handle of the 3D shape model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the file name is valid (write permission), the operator write_shape_model_3d returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
write_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d
Module
3D Metrology



Chapter 9

Morphology

9.1 Gray-Values

dual_rank ( const Hobject Image, Hobject *ImageRank,
const char *MaskType, Hlong Radius, Hlong ModePercent,
const char *Margin )

T_dual_rank ( const Hobject Image, Hobject *ImageRank,
const Htuple MaskType, const Htuple Radius, const Htuple ModePercent,
const Htuple Margin )

Opening, Median and Closing with circle or rectangle mask.


The operator dual_rank carries out a non-linear transformation of the gray values of all input images (Image).
Circles or squares can be used as structuring elements. The operator dual_rank corresponds to two consecutive
calls of rank_image. In the first call, the rank value indicated by ModePercent is used. The result of this call
is the input of a second call of rank_image, this time using the rank value 100 − ModePercent.
When filtering, different parameters for the border treatment (Margin) can be chosen:

gray value: Pixels outside of the image edges are assumed to be constant (with the indicated gray value).
’continued’: Continuation of the edge pixels.
’cyclic’: Cyclic continuation of the image edges.
’mirrored’: Reflection of the pixels at the image edges.

A rank filtering is calculated according to the following scheme: The indicated mask is put over the image to be
filtered in such a way that the center of the mask touches each pixel once. For each of these pixels, all neighboring
pixels covered by the mask are sorted in ascending order of their gray values. Each sorted sequence contains as
many gray values as the mask has points. The n-th highest element (with n determined by ModePercent, a rank
value between 0 and 100 percent) is selected and set as the result gray value in the corresponding result image.
If ModePercent is 0, the operator is equivalent to a gray value opening ( gray_opening). If ModePercent
is 50, the operator corresponds to a median filter applied twice ( median_image). If ModePercent is 100,
dual_rank calculates a gray value closing ( gray_closing). Parameter values in between result in a smooth
transition between these operators.
Parameter

. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real


Image to be filtered.
. ImageRank (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Filtered Image.


. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Shape of the mask.
Default Value : "circle"
List of values : MaskType ∈ {"circle", "rectangle"}
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Radius of the filter mask.
Default Value : 1
Suggested values : Radius ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 15, 19, 25, 31, 39, 47, 59}
Typical range of values : 1 ≤ Radius ≤ 101
Minimum Increment : 1
Recommended Increment : 2
. ModePercent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter mode: 0 corresponds to a gray value opening, 50 to a median, and 100 to a gray value closing.
Default Value : 10
Suggested values : ModePercent ∈ {0, 2, 5, 10, 15, 20, 40, 50, 60, 80, 85, 90, 95, 98, 100}
Typical range of values : 0 ≤ ModePercent ≤ 100
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example

read_image(&Image,"fabrik");
dual_rank(Image,&ImageOpening,"circle",10,10,"mirrored");
disp_image(ImageOpening,WindowHandle);

Complexity
For each pixel: O(√F ∗ 10) with F = area of the structuring element.
Result
If the parameter values are correct the operator dual_rank returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
dual_rank is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, sub_image, regiongrowing
Alternatives
rank_image, gray_closing, gray_opening, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect, sigma_image
References
W. Eckstein, O. Munkelt: “Extracting Objects from Digital Terrain Model”; Remote Sensing and Reconstruction for
Three-Dimensional Objects and Scenes, SPIE Symposium on Optical Science, Engineering, and Instrumentation,
July 1995, San Diego.
Module
Foundation


gen_disc_se ( Hobject *SE, Hlong Width, Hlong Height, Hlong Smax )


T_gen_disc_se ( Hobject *SE, const Htuple Width, const Htuple Height,
const Htuple Smax )

Generate ellipsoidal structuring elements for gray morphology.


gen_disc_se generates an ellipsoidal structuring element (SE) for gray morphology of images. The parameters
Width and Height determine the length of the two major axes of the ellipse. The value of Smax determines
the maximum gray value of the structuring element. For the generation of arbitrary structuring elements, see
read_gray_se.
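For example, a flat (Smax = 0) structuring element can be generated and used for a gray value opening; the image name "fabrik" and the mask size are only illustrative:

Hobject Image, SE, ImageOpening;

read_image(&Image,"fabrik");
gen_disc_se(&SE,5,5,0);              /* flat 5x5 ellipsoidal SE */
gray_opening(Image,SE,&ImageOpening);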
Parameter
. SE (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Generated structuring element.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the structuring element.
Default Value : 5
Suggested values : Width ∈ {0, 1, 2, 3, 4, 5, 10, 15, 20}
Typical range of values : 0 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the structuring element.
Default Value : 5
Suggested values : Height ∈ {0, 1, 2, 3, 4, 5, 10, 15, 20}
Typical range of values : 0 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Smax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum gray value of the structuring element.
Default Value : 0
Suggested values : Smax ∈ {0, 1, 2, 5, 10, 20, 30, 40}
Typical range of values : 0 ≤ Smax ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 1
Result
gen_disc_se returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
gen_disc_se is reentrant and processed without parallelization.
Possible Successors
gray_erosion, gray_dilation, gray_opening, gray_closing, gray_tophat,
gray_bothat
Alternatives
read_gray_se
See also
read_image, paint_region, paint_gray, crop_part
Module
Foundation

gray_bothat ( const Hobject Image, const Hobject SE,
Hobject *ImageBotHat )

T_gray_bothat ( const Hobject Image, const Hobject SE,
Hobject *ImageBotHat )

Perform a gray value bottom hat transformation on an image.


gray_bothat applies a gray value bottom hat transformation to the input image Image with the structuring
element SE. The gray value bottom hat transformation of an image i with a structuring element s is defined as

bothat(i, s) = (i • s) − i,

i.e., the difference of the closing of the image with s and the image (see gray_closing). For the generation of
structuring elements, see read_gray_se.
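A minimal sketch that extracts small dark structures which are filled by the closing; the structuring element size and the threshold of 20 are only illustrative:

Hobject Image, SE, ImageBotHat, DarkStructures;

read_image(&Image,"fabrik");
gen_disc_se(&SE,7,7,0);              /* flat disc-shaped SE */
gray_bothat(Image,SE,&ImageBotHat);
threshold(ImageBotHat,&DarkStructures,20.0,255.0);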
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real


Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageBotHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Bottom hat image.
Result
gray_bothat returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an excep-
tion is raised.
Parallelization Information
gray_bothat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se, gen_disc_se
Possible Successors
threshold
Alternatives
gray_closing
See also
gray_tophat, top_hat, gray_erosion_rect, sub_image
Module
Foundation

gray_closing ( const Hobject Image, const Hobject SE,
Hobject *ImageClosing )

T_gray_closing ( const Hobject Image, const Hobject SE,
Hobject *ImageClosing )

Perform a gray value closing on an image.


gray_closing applies a gray value closing to the input image Image with the structuring element SE. The
gray value closing of an image i with a structuring element s is defined as

i • s = (i ⊕ s) ⊖ s ,

i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation and gray_erosion).
For the generation of structuring elements, see read_gray_se.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real


Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-closed image.


Result
gray_closing returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an excep-
tion is raised.
Parallelization Information
gray_closing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
closing, gray_dilation, gray_erosion
Module
Foundation

gray_closing_rect ( const Hobject Image, Hobject *ImageClosing,
Hlong MaskHeight, Hlong MaskWidth )

T_gray_closing_rect ( const Hobject Image, Hobject *ImageClosing,
const Htuple MaskHeight, const Htuple MaskWidth )

Perform a gray value closing with a rectangular mask.


gray_closing_rect applies a gray value closing to the input image Image with a rectangular mask of
size (MaskHeight, MaskWidth). The resulting image is returned in ImageClosing. If the parameters
MaskHeight or MaskWidth are even, they are changed to the next larger odd value. At the border of the image
the gray values are mirrored.
The gray value closing of an image i with a rectangular structuring element s is defined as

i • s = (i ⊕ s) ⊖ s ,

i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_rect and
gray_erosion_rect).
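A minimal sketch using the default mask size of 11×11 to close thin dark gaps; the image name "fabrik" is only illustrative:

Hobject Image, ImageClosing;

read_image(&Image,"fabrik");
gray_closing_rect(Image,&ImageClosing,11,11);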
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageClosing (output_object) . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Gray-closed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)


Result
gray_closing_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_closing_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_closing, gray_closing_shape
See also
closing_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation

gray_closing_shape ( const Hobject Image, Hobject *ImageClosing,
double MaskHeight, double MaskWidth, const char *MaskShape )

T_gray_closing_shape ( const Hobject Image, Hobject *ImageClosing,
const Htuple MaskHeight, const Htuple MaskWidth,
const Htuple MaskShape )

Perform a gray value closing with a selected mask.


gray_closing_shape applies a gray value closing to the input image Image with the structuring element of
shape MaskShape. The mask’s offset values are 0 and its vertical and horizontal size is defined by MaskHeight
and MaskWidth, respectively. The resulting image is returned in ImageClosing.
If the parameters MaskHeight or MaskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image Image is
transformed with both the next larger and the next smaller odd mask size, and the output image ImageClosing
is interpolated from the two intermediate images. Therefore, note that gray_closing_shape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the MaskShape control parameter, MaskHeight and
MaskWidth must be equal. The parameter value ’octagon’ for MaskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
The gray value closing of an image i with a structuring element s is defined as

i • s = (i ⊕ s) ⊖ s ,

i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_shape and
gray_erosion_shape).
Attention
Note that gray_closing_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2


Input image.
. ImageClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Gray-closed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double / Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight ≤ 511.0


. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double / Hlong


Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth ≤ 511.0
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape of the mask.
Default Value : "octagon"
List of values : MaskShape ∈ {"rectangle", "rhombus", "octagon"}
Result
gray_closing_shape returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gray_closing_shape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_closing
See also
gray_dilation_shape, gray_erosion_shape, closing
Module
Foundation

gray_dilation ( const Hobject Image, const Hobject SE,
Hobject *ImageDilation )

T_gray_dilation ( const Hobject Image, const Hobject SE,
Hobject *ImageDilation )

Perform a gray value dilation on an image.


gray_dilation applies a gray value dilation to the input image Image with the structuring element SE. The
gray value dilation of an image i with a structuring element s at the pixel position x is defined as:

(i ⊕ s)(x) = max{i(x − z) + s(z) | z ∈ S}

Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
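As a sketch, a simple morphological gradient can be computed as the difference between the dilated image and the original image using sub_image (listed among the possible successors); the structuring element size and the Mult/Add values of 1.0 and 0.0 are only illustrative:

Hobject Image, SE, ImageDilation, Gradient;

read_image(&Image,"fabrik");
gen_disc_se(&SE,3,3,0);              /* small flat SE */
gray_dilation(Image,SE,&ImageDilation);
/* gradient = (dilated image - original image) * 1.0 + 0.0 */
sub_image(ImageDilation,Image,&Gradient,1.0,0.0);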
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-dilated image.
Result
gray_dilation returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_dilation is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
sub_image, gray_erosion
Alternatives
gray_dilation_rect
See also
gray_opening, gray_closing, dilation1, gray_skeleton


Module
Foundation

gray_dilation_rect ( const Hobject Image, Hobject *ImageMax,
Hlong MaskHeight, Hlong MaskWidth )

T_gray_dilation_rect ( const Hobject Image, Hobject *ImageMax,
const Htuple MaskHeight, const Htuple MaskWidth )

Determine the maximum gray value within a rectangle.


gray_dilation_rect calculates the maximum gray value of the input image Image within a rectangular
mask of size (MaskHeight, MaskWidth) for each image point. The resulting image is returned in ImageMax.
If the parameters MaskHeight or MaskWidth are even, they are changed to the next larger odd value. At the
border of the image the gray values are mirrored.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the maximum gray values are to be calculated.
. ImageMax (output_object) . . . . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Image containing the maximum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_dilation_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_dilation_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
gray_skeleton
Module
Foundation

gray_dilation_shape ( const Hobject Image, Hobject *ImageMax,
double MaskHeight, double MaskWidth, const char *MaskShape )

T_gray_dilation_shape ( const Hobject Image, Hobject *ImageMax,
const Htuple MaskHeight, const Htuple MaskWidth,
const Htuple MaskShape )

Determine the maximum gray value within a selected mask.


gray_dilation_shape calculates the maximum gray value of the input image Image within a mask of shape
MaskShape, vertical size MaskHeight and horizontal size MaskWidth for each image point. The resulting
image is returned in ImageMax.
If the parameters MaskHeight or MaskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image Image is
transformed with both the next larger and the next smaller odd mask size, and the output image ImageMax is
interpolated from the two intermediate images. Therefore, note that gray_dilation_shape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the MaskShape control parameter, MaskHeight and
MaskWidth must be equal. The parameter value ’octagon’ for MaskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
Attention
Note that gray_dilation_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
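The distinction between integer and floating point mask sizes can only be made explicit in the tuple-mode interface; the following sketch assumes the usual HALCON/C tuple macros (create_tuple, set_i, set_d, set_s, destroy_tuple) and illustrative values:

Hobject Image, MaxInt, MaxFloat;
Htuple  Height, Width, Shape;

read_image(&Image,"fabrik");
create_tuple(&Height,1);
create_tuple(&Width,1);
create_tuple(&Shape,1);
set_s(Shape,"octagon",0);

set_i(Height,4,0);                   /* integer 4: rounded up to 5 */
set_i(Width,4,0);
T_gray_dilation_shape(Image,&MaxInt,Height,Width,Shape);

set_d(Height,4.0,0);                 /* float 4.0: interpolated result */
set_d(Width,4.0,0);
T_gray_dilation_shape(Image,&MaxFloat,Height,Width,Shape);

destroy_tuple(Height);
destroy_tuple(Width);
destroy_tuple(Shape);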
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2


Image for which the maximum gray values are to be calculated.
. ImageMax (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Image containing the maximum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double / Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double / Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape of the mask.
Default Value : "octagon"
List of values : MaskShape ∈ {"rectangle", "rhombus", "octagon"}
Result
gray_dilation_shape returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gray_dilation_shape is reentrant and automatically parallelized (on tuple level, channel level, domain
level).
Alternatives
gray_dilation, gray_dilation_rect
See also
gray_opening_shape, gray_closing_shape, gray_skeleton
Module
Foundation

gray_erosion ( const Hobject Image, const Hobject SE,
Hobject *ImageErosion )

T_gray_erosion ( const Hobject Image, const Hobject SE,
Hobject *ImageErosion )

Perform a gray value erosion on an image.


gray_erosion applies a gray value erosion to the input image Image with the structuring element SE. The
gray value erosion of an image i with a structuring element s at the pixel position x is defined as:

(i ⊖ s)(x) = min{i(x + z) − s(z) | z ∈ S}

Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-eroded image.
Result
gray_erosion returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an excep-
tion is raised.
Parallelization Information
gray_erosion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
gray_dilation, sub_image
Alternatives
gray_erosion_rect
See also
gray_opening, gray_closing, erosion1, gray_skeleton
Module
Foundation

gray_erosion_rect ( const Hobject Image, Hobject *ImageMin,
Hlong MaskHeight, Hlong MaskWidth )

T_gray_erosion_rect ( const Hobject Image, Hobject *ImageMin,
const Htuple MaskHeight, const Htuple MaskWidth )

Determine the minimum gray value within a rectangle.


gray_erosion_rect calculates the minimum gray value of the input image Image within a rectangular mask
of size (MaskHeight, MaskWidth) for each image point. The resulting image is returned in ImageMin. If the
parameters MaskHeight or MaskWidth are even, they are changed to the next larger odd value. At the border
of the image the gray values are mirrored.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the minimum gray values are to be calculated.
. ImageMin (output_object) . . . . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Image containing the minimum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)


. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong


Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_erosion_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_erosion_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
gray_dilation_rect
Module
Foundation

gray_erosion_shape ( const Hobject Image, Hobject *ImageMin,
double MaskHeight, double MaskWidth, const char *MaskShape )

T_gray_erosion_shape ( const Hobject Image, Hobject *ImageMin,
const Htuple MaskHeight, const Htuple MaskWidth,
const Htuple MaskShape )

Determine the minimum gray value within a selected mask.


gray_erosion_shape calculates the minimum gray value of the input image Image within a mask of shape
MaskShape, vertical size MaskHeight and horizontal size MaskWidth for each image point. The resulting
image is returned in ImageMin.
If the parameters MaskHeight or MaskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image Image is
transformed with both the next larger and the next smaller odd mask size, and the output image ImageMin is
interpolated from the two intermediate images. Therefore, note that gray_erosion_shape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the MaskShape control parameter, MaskHeight and
MaskWidth must be equal. The parameter value ’octagon’ for MaskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
Attention
Note that gray_erosion_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2


Image for which the minimum gray values are to be calculated.
. ImageMin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Image containing the minimum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double / Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight


. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double / Hlong


Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape of the mask.
Default Value : "octagon"
List of values : MaskShape ∈ {"rectangle", "rhombus", "octagon"}
Result
gray_erosion_shape returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gray_erosion_shape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_erosion, gray_erosion_rect
See also
gray_opening_shape, gray_closing_shape, gray_skeleton
Module
Foundation

gray_opening ( const Hobject Image, const Hobject SE,
Hobject *ImageOpening )

T_gray_opening ( const Hobject Image, const Hobject SE,
Hobject *ImageOpening )

Perform a gray value opening on an image.


gray_opening applies a gray value opening to the input image Image with the structuring element SE. The
gray value opening of an image i with a structuring element s is defined as

i ◦ s = (i ⊖ s) ⊕ s ,

i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion and gray_dilation).
For the generation of structuring elements, see read_gray_se.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-opened image.
Result
gray_opening returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an excep-
tion is raised.
Parallelization Information
gray_opening is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
opening, gray_dilation, gray_erosion
Module
Foundation


gray_opening_rect ( const Hobject Image, Hobject *ImageOpening,
Hlong MaskHeight, Hlong MaskWidth )

T_gray_opening_rect ( const Hobject Image, Hobject *ImageOpening,
const Htuple MaskHeight, const Htuple MaskWidth )

Perform a gray value opening with a rectangular mask.


gray_opening_rect applies a gray value opening to the input image Image with a rectangular mask of
size (MaskHeight, MaskWidth). The resulting image is returned in ImageOpening. If the parameters
MaskHeight or MaskWidth are even, they are changed to the next larger odd value. At the border of the image
the gray values are mirrored.
The gray value opening of an image i with a rectangular structuring element s is defined as

i ◦ s = (i ⊖ s) ⊕ s ,

i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_rect and
gray_dilation_rect).
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageOpening (output_object) . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Gray-opened image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_opening_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_opening_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_opening, gray_opening_shape
See also
opening_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation


gray_opening_shape ( const Hobject Image, Hobject *ImageOpening,
double MaskHeight, double MaskWidth, const char *MaskShape )

T_gray_opening_shape ( const Hobject Image, Hobject *ImageOpening,
const Htuple MaskHeight, const Htuple MaskWidth,
const Htuple MaskShape )

Perform a gray value opening with a selected mask.


gray_opening_shape applies a gray value opening to the input image Image with the structuring element of
shape MaskShape. The mask’s offset values are 0 and its vertical and horizontal size is defined by MaskHeight
and MaskWidth, respectively. The resulting image is returned in ImageOpening.
If the parameters MaskHeight or MaskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image Image is
transformed with both the next larger and the next smaller odd mask size, and the output image ImageOpening
is interpolated from the two intermediate images. Therefore, note that gray_opening_shape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the MaskShape control parameter, MaskHeight and
MaskWidth must be equal. The parameter value ’octagon’ for MaskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
The gray value opening of an image i with a structuring element s is defined as

i ◦ s = (i ⊖ s) ⊕ s ,

i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_shape and
gray_dilation_shape).
Attention
Note that gray_opening_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2


Input image.
. ImageOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Gray-opened image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double / Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskHeight ≤ 511.0
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double / Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 1.0 ≤ MaskWidth ≤ 511.0
. MaskShape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape of the mask.
Default Value : "octagon"
List of values : MaskShape ∈ {"rectangle", "rhombus", "octagon"}
Result
gray_opening_shape returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gray_opening_shape is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_opening


See also
gray_dilation_shape, gray_erosion_shape, opening
Module
Foundation

gray_range_rect ( const Hobject Image, Hobject *ImageResult,
Hlong MaskHeight, Hlong MaskWidth )

T_gray_range_rect ( const Hobject Image, Hobject *ImageResult,
const Htuple MaskHeight, const Htuple MaskWidth )

Determine the gray value range within a rectangle.


gray_range_rect calculates the gray value range, i.e., the difference (max − min) of the maximum and
minimum gray values, of the input image Image within a rectangular mask of size (MaskHeight, MaskWidth)
for each image point. The resulting image is returned in ImageResult. If the parameters MaskHeight or
MaskWidth are even, they are changed to the next larger odd value. At the border of the image the gray values
are mirrored.
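A minimal sketch that uses the local gray value range as a simple contrast measure; the image name "fabrik" and the threshold of 30 are only illustrative:

Hobject Image, ImageRange, HighContrast;

read_image(&Image,"fabrik");
gray_range_rect(Image,&ImageRange,11,11);
threshold(ImageRange,&HighContrast,30.0,255.0);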
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the gray value range is to be calculated.
. ImageResult (output_object) . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Image containing the gray value range.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_range_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_range_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_dilation_rect, gray_erosion_rect, sub_image
Module
Foundation


gray_tophat ( const Hobject Image, const Hobject SE,
Hobject *ImageTopHat )

T_gray_tophat ( const Hobject Image, const Hobject SE,
Hobject *ImageTopHat )

Perform a gray value top hat transformation on an image.


gray_tophat applies a gray value top hat transformation to the input image Image with the structuring element
SE. The gray value top hat transformation of an image i with a structuring element s is defined as

tophat(i, s) = i − (i ◦ s),

i.e., the difference of the image and its opening with s (see gray_opening). For the generation of structuring
elements, see read_gray_se.
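A minimal sketch that extracts small bright structures which are removed by the opening; the structuring element size and the threshold of 20 are only illustrative:

Hobject Image, SE, ImageTopHat, BrightStructures;

read_image(&Image,"fabrik");
gen_disc_se(&SE,7,7,0);              /* flat disc-shaped SE */
gray_tophat(Image,SE,&ImageTopHat);
threshold(ImageTopHat,&BrightStructures,20.0,255.0);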
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real


Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageTopHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Top hat image.
Result
gray_tophat returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an excep-
tion is raised.
Parallelization Information
gray_tophat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se, gen_disc_se
Possible Successors
threshold
Alternatives
gray_opening
See also
gray_bothat, top_hat, gray_erosion_rect, sub_image
Module
Foundation

read_gray_se ( Hobject *SE, const char *FileName )


T_read_gray_se ( Hobject *SE, const Htuple FileName )

Load a structuring element for gray morphology.


read_gray_se loads a structuring element for gray morphology from a file. The file names of these struc-
turing elements must end in ’.gse’ (for gray-scale structuring element). This suffix is automatically appended by
read_gray_se to the passed file name, and thus must not be passed. The structuring element’s data must be
contained in the file in the following format: The first two numbers in the file determine the width and height of
the structuring element, and determine a rectangle enclosing the structuring element. Both values must be greater
than 0. Then, Width*Height integer numbers follow, with the following interpretation: Values smaller than 0 are
regarded as not belonging to the region of the structuring element, i.e., they are not considered in morphological
operations. This allows the creation of irregularly shaped, not connected structuring elements. All other values
are regarded as the corresponding values for gray morphology. Structuring elements are stored internally as byte-
images, with negative values being mapped to 0, and all other values increased by 1. Thus, normal byte-images
can also be used as structuring elements. However, care should be taken not to use too large images, since the
runtime is proportional to the area of the image times the area of the structuring element.
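For example, a flat 3×3 structuring element could be stored in a file named "flat3x3.gse" with the contents shown in the comment below (the file and its name are only illustrative); since the ’.gse’ suffix is appended automatically, only the base name is passed:

/* contents of the file flat3x3.gse:
 *   3 3
 *   0 0 0
 *   0 0 0
 *   0 0 0
 */
Hobject SE, Image, ImageClosing;

read_gray_se(&SE,"flat3x3");
read_image(&Image,"fabrik");
gray_closing(Image,SE,&ImageClosing);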
Parameter
. SE (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Generated structuring element.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the file containing the structuring element.
Result
read_gray_se returns H_MSG_TRUE if all parameters are correct. Otherwise, an exception is raised.
Parallelization Information
read_gray_se is reentrant and processed without parallelization.
Possible Successors
gray_erosion, gray_dilation, gray_opening, gray_closing, gray_tophat,
gray_bothat
Alternatives
gen_disc_se
See also
read_image, paint_region, paint_gray, crop_part
Module
Foundation

9.2 Region

bottom_hat ( const Hobject Region, const Hobject StructElement,
Hobject *RegionBottomHat )

T_bottom_hat ( const Hobject Region, const Hobject StructElement,
Hobject *RegionBottomHat )

Compute the bottom hat of regions.


bottom_hat computes the closing of Region with StructElement. The difference between the result
of the closing and the original region is called the bottom hat. In contrast to closing, which merges regions
under certain circumstances, bottom_hat computes the regions generated by such a merge.
The position of StructElement is meaningless, since a closing operation is invariant with respect to the choice
of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position independent).
. RegionBottomHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the bottom hat operator.
Example

threshold(Image,&Regions,128.0,255.0);
gen_circle(&Circle,128.0,128.0,16.0);
bottom_hat(Regions,Circle,&RegionBottomHat);
set_color(WindowHandle,"red");
disp_region(Regions,WindowHandle);
set_color(WindowHandle,"green");
disp_region(RegionBottomHat,WindowHandle);


Result
bottom_hat returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
bottom_hat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing, difference
See also
top_hat, morph_hat, gray_bothat, opening
Module
Foundation

boundary ( const Hobject Region, Hobject *RegionBorder,


const char *BoundaryType )

T_boundary ( const Hobject Region, Hobject *RegionBorder,


const Htuple BoundaryType )

Reduce a region to its boundary.


boundary computes the boundary of a region by using morphological operations. The parameter
BoundaryType determines the type of boundary to compute:
’inner’, ’inner_filled’ and ’outer’.
boundary computes the contour of each input region. The resulting regions consist only of the minimal border
of the input regions. If BoundaryType is set to ’inner’, the contour lies within the original region; if it is set
to ’outer’, it lies one pixel outside of the original region. If BoundaryType is set to ’inner_filled’, holes in the
interior of the input region are suppressed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the boundary is to be computed.
. RegionBorder (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting boundaries.
. BoundaryType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Boundary type.
Default Value : "inner"
List of values : BoundaryType ∈ {"inner", "outer", "inner_filled"}
Example

/* Intersections of two circles: */


gen_circle(&Circle1,200.0,100.0,100.5);
gen_circle(&Circle2,200.0,150.0,100.5);
boundary(Circle1,&Margin1,"inner");
boundary(Circle2,&Margin2,"inner");
intersection(Margin1,Margin2,&Intersections);
connection(Intersections,&Single);
T_area_center(Single,_,&Rows,&Columns);

/* simulation of Mode ’inner’ */


void inner(Hobject Region, Hobject *Border)
{
Hobject Smaller;
erosion_circle(Region,&Smaller,1.5);
difference(Region,Smaller,Border);
clear_obj(Smaller);
}
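
A corresponding sketch for the mode 'outer', using dilation_circle instead of erosion_circle (an illustration only, not the internal implementation):

/* simulation of Mode 'outer' (sketch) */

void outer(Hobject Region, Hobject *Border)
{
  Hobject Larger;
  dilation_circle(Region,&Larger,1.5);
  difference(Larger,Region,Border);
  clear_obj(Larger);
}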

Complexity
Let F be the area of the input region. Then the runtime complexity for one region is

O(3 · √F) .

Result
boundary returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
boundary is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
dilation_circle, erosion_circle, difference
See also
fill_up
Module
Foundation

closing ( const Hobject Region, const Hobject StructElement,


Hobject *RegionClosing )

T_closing ( const Hobject Region, const Hobject StructElement,


Hobject *RegionClosing )

Close a region.
A closing operation is defined as a dilation followed by a Minkowski subtraction. By applying closing
to a region, larger structures remain mostly intact, while small gaps between adjacent regions and holes smaller
than StructElement are closed, and the regions’ boundaries are smoothed. All closing variants share the
property that separate regions are not merged, but remain separate objects. The position of StructElement is
meaningless, since a closing operation is invariant with respect to the choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.


Attention
closing is applied to each input region separately. If gaps between different regions are to be closed, union1
or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
Example

my_closing(Hobject In, Hobject StructElement, Hobject *Out)


{
Hobject tmp;
dilation1(In,StructElement,&tmp,1);
minkowski_sub1(tmp,StructElement,Out,1);
clear_obj(tmp);
}

Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(2 · √F1 · √F2) .

Result
closing returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
closing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing_circle, closing_golay
See also
dilation1, erosion1, opening, minkowski_sub1
Module
Foundation

closing_circle ( const Hobject Region, Hobject *RegionClosing,


double Radius )

T_closing_circle ( const Hobject Region, Hobject *RegionClosing,


const Htuple Radius )

Close a region with a circular structuring element.


closing_circle behaves analogously to closing, i.e., the regions’ boundaries are smoothed and holes
within a region which are smaller than the circular structuring element of radius Radius are closed. The
closing_circle operation is defined as a dilation followed by a Minkowski subtraction, both with the same
circular structuring element.
Attention
closing_circle is applied to each input region separately. If gaps between different regions are to be closed,
union1 or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 3.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Typical range of values : 0.5 ≤ Radius ≤ 511.5 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Example

my_closing_circle(Hobject In, double Radius, Hobject *Out)


{
Hobject tmp, StructElement;
gen_circle(StructElement,100.0,100.0,Radius);
dilation1(In,StructElement,&tmp,1);
minkowski_sub1(tmp,StructElement,Out,1);
clear_obj(tmp); clear_obj(StructElement);
}

Complexity
Let F 1 be the area of the input region. Then the runtime complexity for one region is:

O(4 · √F1 · Radius) .

Result
closing_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
closing_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
rank_region, fill_up, closing, closing_circle, closing_golay
See also
dilation1, minkowski_sub1, erosion1, opening
Module
Foundation


closing_golay ( const Hobject Region, Hobject *RegionClosing,


const char *GolayElement, Hlong Rotation )

T_closing_golay ( const Hobject Region, Hobject *RegionClosing,


const Htuple GolayElement, const Htuple Rotation )

Close a region with an element from the Golay alphabet.


closing_golay is defined as a Minkowski addition followed by a Minkowski subtraction. First the Minkowski
addition of the input region (Region) with the structuring element from the Golay alphabet defined by
GolayElement and Rotation is computed. Then the Minkowski subtraction of the result and the structuring
element rotated by 180° is performed.
The following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used, and whether the fore-
ground (even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator golay_elements.
closing_golay serves to close holes smaller than the structuring element, and to smooth regions’ boundaries.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
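
A minimal usage sketch (the image name, the threshold values, and the choice of element 'h' with rotation 0 are arbitrary):

Hobject Image, Regions, RegionClosing;
read_image(&Image,"scene");                    /* hypothetical image name */
threshold(Image,&Regions,128.0,255.0);
closing_golay(Regions,&RegionClosing,"h",0);   /* close small holes, smooth boundaries */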
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · F) .

Result
closing_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
closing_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing


See also
erosion_golay, dilation_golay, opening_golay, hit_or_miss_golay,
thinning_golay, thickening_golay, golay_elements
Module
Foundation

closing_rectangle1 ( const Hobject Region, Hobject *RegionClosing,


Hlong Width, Hlong Height )

T_closing_rectangle1 ( const Hobject Region, Hobject *RegionClosing,


const Htuple Width, const Htuple Height )

Close a region with a rectangular structuring element.


closing_rectangle1 performs a dilation_rectangle1 followed by an erosion_rectangle1
on the input region Region. The size of the rectangular structuring element is determined by the parameters
Width and Height. As is the case for all closing variants, regions’ boundaries are smoothed and holes
within a region which are smaller than the rectangular structuring element are closed.
Attention
closing_rectangle1 is applied to each input region separately. If gaps between different regions are to be
closed, union1 or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong / double
Width of the structuring rectangle.
Default Value : 10
Suggested values : Width ∈ {1, 2, 3, 4, 5, 7, 9, 12, 15, 19, 25, 33, 45, 60, 110, 150, 200}
Typical range of values : 1 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong / double
Height of the structuring rectangle.
Default Value : 10
Suggested values : Height ∈ {1, 2, 3, 4, 5, 7, 9, 12, 15, 19, 25, 33, 45, 60, 110, 150, 200}
Typical range of values : 1 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
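
Following the definition above, the operator can be sketched as a dilation_rectangle1 followed by an erosion_rectangle1 (an illustration only, not the internal implementation):

void my_closing_rectangle1(Hobject In, Hlong Width, Hlong Height, Hobject *Out)
{
  Hobject tmp;
  dilation_rectangle1(In,&tmp,Width,Height);
  erosion_rectangle1(tmp,Out,Width,Height);
  clear_obj(tmp);
}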
Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:

O(2 · √F1 · log_2(H)) .

Result
closing_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
closing_rectangle1 is reentrant and automatically parallelized (on tuple level).


Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing
See also
dilation_rectangle1, erosion_rectangle1, opening_rectangle1, gen_rectangle1
Module
Foundation

dilation1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionDilation, Hlong Iterations )

T_dilation1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionDilation, const Htuple Iterations )

Dilate a region.
dilation1 dilates the input regions with a structuring element. By applying dilation1 to a region, its
boundary gets smoothed. In the process, the area of the region is enlarged. Furthermore, disconnected regions
may be merged. Such regions, however, remain logically distinct regions. The dilation is a set-theoretic region
operation. It uses the union operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_v(R) denote the translation of a
region R by a vector v. Then

    dilation1(R, M) := ⋃_{m ∈ M} t_{-v_m}(R)

For each point m in M a translation of the region R is performed. The union of all these translations is the dilation
of R with M. dilation1 is similar to the operator minkowski_add1; the difference is that in dilation1
the structuring element is mirrored at the origin. The position of StructElement is meaningless, since the
displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator union1 has to be called first.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.


. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
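
A minimal usage sketch (the image name, the threshold values, the circular structuring element, and the number of iterations are arbitrary; as noted above, the position of the structuring element is irrelevant):

Hobject Image, Regions, StructElement, RegionDilation;
read_image(&Image,"scene");                     /* hypothetical image name */
threshold(Image,&Regions,128.0,255.0);
gen_circle(&StructElement,100.0,100.0,3.5);     /* position is irrelevant   */
dilation1(Regions,StructElement,&RegionDilation,2);
clear_obj(StructElement);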
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
dilation1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
dilation1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, add_channels, select_shape, area_center, connection
Alternatives
minkowski_add1, minkowski_add2, dilation2, dilation_golay, dilation_seq
See also
erosion1, erosion2, opening, closing
Module
Foundation

dilation2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionDilation, Hlong Row, Hlong Column, Hlong Iterations )

T_dilation2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionDilation, const Htuple Row, const Htuple Column,
const Htuple Iterations )

Dilate a region (using a reference point).


dilation2 dilates the input regions with a structuring element (StructElement) having the reference point
(Row,Column). dilation2 has a similar effect as dilation1, the difference is that the reference point of the
structuring element can be chosen arbitrarily. The parameter Iterations determines the number of iterations
which are to be performed with the structuring element. The result of iteration n − 1 is used as input for iteration
n.
An empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator union1 has to be called first.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 7, 11, 17, 25, 32, 64, 128}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
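
A minimal usage sketch illustrating the freely selectable reference point (the image name, the threshold values, the rectangle, and the reference point at its upper left corner are arbitrary choices):

Hobject Image, Regions, StructElement, RegionDilation;
read_image(&Image,"scene");                     /* hypothetical image name */
threshold(Image,&Regions,128.0,255.0);
gen_rectangle1(&StructElement,0.0,0.0,10.0,10.0);
dilation2(Regions,StructElement,&RegionDilation,0,0,1);  /* reference point (0,0) */
clear_obj(StructElement);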
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
dilation2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
dilation2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, add_channels, select_shape, area_center, connection
Alternatives
minkowski_add1, minkowski_add2, dilation1, dilation_golay, dilation_seq
See also
erosion1, erosion2, opening, closing
Module
Foundation


dilation_circle ( const Hobject Region, Hobject *RegionDilation,


double Radius )

T_dilation_circle ( const Hobject Region, Hobject *RegionDilation,


const Htuple Radius )

Dilate a region with a circular structuring element.


dilation_circle applies a Minkowski addition with a circular structuring element to the input regions
Region. Because the circular mask is symmetrical, this is identical to a dilation. The size of the circle used
as structuring element is determined by Radius.
The operator results in enlarged regions and smoothed region boundaries; holes in the interior of the region that
are smaller than the circular mask are closed. It is useful to select only values like 3.5, 5.5, etc. for Radius in order
to avoid a translation of a region, because integer radii result in the circle having a non-integer center of gravity
which is rounded to the next integer.
Attention
dilation_circle is applied to each input region separately. If gaps between different regions are to be closed,
union1 or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 3.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Typical range of values : 0.5 ≤ Radius ≤ 511.5 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Example

my_dilation_circle(Hobject In, double Radius, Hobject *Out)


{
Hobject Circle;
gen_circle(&Circle,100.0,100.0,Radius);
minkowski_add1(In,Circle,Out,1);
clear_obj(Circle);
}

Complexity
Let F 1 be the area of an input region. Then the runtime complexity for one region is:

O(2 · Radius · √F1) .

Result
dilation_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
dilation_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm


Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add1, minkowski_add2, expand_region, dilation1, dilation2,
dilation_rectangle1
See also
gen_circle, erosion_circle, closing_circle, opening_circle
Module
Foundation

dilation_golay ( const Hobject Region, Hobject *RegionDilation,


const char *GolayElement, Hlong Iterations, Hlong Rotation )

T_dilation_golay ( const Hobject Region, Hobject *RegionDilation,


const Htuple GolayElement, const Htuple Iterations,
const Htuple Rotation )

Dilate a region with an element from the Golay alphabet.


dilation_golay dilates a region with the selected element GolayElement from the Golay alphabet. The
following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used, and whether the fore-
ground (even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator golay_elements. The operator works by shifting
the structuring element over the region to be processed (Region). For all positions of the structuring element that
intersect the region, the corresponding reference point (relative to the structuring element) is added to the output
region. This means that the union of all translations of the structuring element within the region is computed.
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:



O(3 · F) .

Result
dilation_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
dilation_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
dilation1, dilation2, dilation_seq
See also
erosion_golay, opening_golay, closing_golay, hit_or_miss_golay, thinning_golay,
thickening_golay, golay_elements
Module
Foundation

dilation_rectangle1 ( const Hobject Region, Hobject *RegionDilation,


Hlong Width, Hlong Height )

T_dilation_rectangle1 ( const Hobject Region, Hobject *RegionDilation,


const Htuple Width, const Htuple Height )

Dilate a region with a rectangular structuring element.


dilation_rectangle1 applies a dilation with a rectangular structuring element to the input regions Region.
The size of the structuring rectangle is Width × Height. The operator results in enlarged regions, and the holes
smaller than the rectangular mask in the interior of the regions are closed.
dilation_rectangle1 is a very fast operation because the height of the rectangle enters only logarithmically
into the runtime complexity, while the width does not enter at all. This leads to excellent runtime efficiency, even
in the case of very large rectangles (edge length > 100).
Attention
dilation_rectangle1 is applied to each input region separately. If gaps between different regions are to be
closed, union1 or union2 has to be called first.
To enlarge a region by the same amount in all directions, Width and Height must be odd. If this is not the case,
the region is dilated by a larger amount at the right or at the bottom, respectively, than at the left or at the top.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the structuring rectangle.
Default Value : 11
Suggested values : Width ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong


Height of the structuring rectangle.
Default Value : 11
Suggested values : Height ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Example

threshold(Image,&Light,220.0,255.0);
dilation_rectangle1(Light,&Wide,50,50);
set_color(WindowHandle,"red");
disp_region(Wide,WindowHandle);
set_color(WindowHandle,"white");
disp_region(Light,WindowHandle);

Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:

O(√F1 · ld(H)) .

Result
dilation_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
or no input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
dilation_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add1, minkowski_add2, expand_region, dilation1, dilation2,
dilation_circle
See also
gen_rectangle1, gen_region_polygon_filled
Module
Foundation

dilation_seq ( const Hobject Region, Hobject *RegionDilation,


const char *GolayElement, Hlong Iterations )

T_dilation_seq ( const Hobject Region, Hobject *RegionDilation,


const Htuple GolayElement, const Htuple Iterations )

Dilate a region sequentially.


dilation_seq computes the sequential dilation of the input region Region with the selected structuring ele-
ment GolayElement from the Golay alphabet. This is done by executing the operator dilation_golay with
all rotations of the structuring element Iterations times. The following structuring elements can be selected:


’l’, ’d’, ’c’, ’f’, ’h’, ’k’.


In order to compute the skeleton of a region, usually the elements ’l’ and ’m’ are used. Only the “foreground
elements” (even rotation numbers) are used. The elements ’i’ and ’e’ result in unchanged output regions. The
elements ’l’, ’m’ and ’f2’ are identical for the foreground. The Golay elements, together with all possible rotations,
are described with the operator golay_elements.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. RegionDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "d", "c", "f", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
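
The description above can be illustrated by explicit calls to dilation_golay. This sketch assumes the even ('foreground') rotations 0, 2, ..., 14; the set of valid rotations actually depends on the chosen element (see golay_elements), and the internal implementation may differ:

void my_dilation_seq(Hobject Region, Hobject *Out,
                     const char *GolayElement, Hlong Iterations)
{
  Hobject tmp;
  Hlong   i, rot;
  copy_obj(Region,Out,1,1);                   /* start with the input region */
  for (i = 0; i < Iterations; i++)
    for (rot = 0; rot <= 14; rot += 2)        /* foreground rotations only   */
    {
      dilation_golay(*Out,&tmp,GolayElement,1,rot);
      clear_obj(*Out);
      *Out = tmp;
    }
}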
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(Iterations · 20 · F) .

Result
dilation_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
dilation_seq is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
dilation1, dilation2, dilation_golay
See also
erosion_seq, hit_or_miss_seq, thinning_seq
Module
Foundation

erosion1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionErosion, Hlong Iterations )

T_erosion1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionErosion, const Htuple Iterations )

Erode a region.


erosion1 erodes the input regions with a structuring element. By applying erosion1 to a region, its boundary
gets smoothed. In the process, the area of the region is reduced. Furthermore, connected regions may be split.
Such regions, however, remain logically one region. The erosion is a set-theoretic region operation. It uses the
intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_v(R) denote the translation of a
region R by a vector v. Then

    erosion1(R, M) := ⋂_{m ∈ M} t_{-v_m}(R) .

For each point m in M a translation of the region R is performed. The intersection of all these translations is
the erosion of R with M. erosion1 is similar to the operator minkowski_sub1; the difference is that in
erosion1 the structuring element is mirrored at the origin. The position of StructElement is meaningless,
since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
erosion1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm, gen_circle, gen_ellipse,
gen_rectangle1, gen_rectangle2, draw_region, gen_region_points,
gen_struct_elements, gen_region_polygon_filled


Possible Successors
connection, reduce_domain, select_shape, area_center
Alternatives
minkowski_sub1, minkowski_sub2, erosion2, erosion_golay, erosion_seq
See also
transpose_region
Module
Foundation

erosion2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionErosion, Hlong Row, Hlong Column, Hlong Iterations )

T_erosion2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionErosion, const Htuple Row, const Htuple Column,
const Htuple Iterations )

Erode a region (using a reference point).


erosion2 erodes the input regions with a structuring element (StructElement) having the reference point
(Row,Column). erosion2 has a similar effect as erosion1, the difference is that the reference point of the
structuring element can be chosen arbitrarily. The parameter Iterations determines the number of iterations
which are to be performed with the structuring element. The result of iteration n − 1 is used as input for iteration
n.
A maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:


O(√F1 · √F2 · Iterations) .

Result
erosion2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm, gen_circle, gen_ellipse,
gen_rectangle1, gen_rectangle2, draw_region, gen_region_points,
gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_sub2, minkowski_sub1, erosion1, erosion_golay, erosion_seq
See also
transpose_region, gen_circle, gen_rectangle2, gen_region_polygon
Module
Foundation

erosion_circle ( const Hobject Region, Hobject *RegionErosion,


double Radius )

T_erosion_circle ( const Hobject Region, Hobject *RegionErosion,


const Htuple Radius )

Erode a region with a circular structuring element.


erosion_circle applies a Minkowski subtraction with a circular structuring element to the input regions
Region. Because the circular mask is symmetrical, this is identical to an erosion. The size of the circle used as
structuring element is determined by Radius.
The operator results in reduced regions and smoothed region boundaries; regions smaller than the circular
mask are eliminated. It is useful to select only values like 3.5, 5.5, etc. for Radius in order to avoid a translation
of a region, because integer radii result in a circle having a non-integer center of gravity which is rounded to the
next integer.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be eroded.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 3.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Typical range of values : 0.5 ≤ Radius ≤ 511.5 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0


Example

my_erosion_circle(Hobject In, double Radius, Hobject *Out)


{
Hobject Circle;
gen_circle(&Circle,100.0,100.0,Radius);
minkowski_sub1(In,Circle,Out,1);
clear_obj(Circle);
}

Complexity
Let F 1 be the area of an input region. Then the runtime complexity for one region is:

O(2 · Radius · √F1) .

Result
erosion_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm
Possible Successors
connection, reduce_domain, select_shape, area_center
Alternatives
minkowski_sub1
See also
gen_circle, dilation_circle, closing_circle, opening_circle
Module
Foundation

erosion_golay ( const Hobject Region, Hobject *RegionErosion,


const char *GolayElement, Hlong Iterations, Hlong Rotation )

T_erosion_golay ( const Hobject Region, Hobject *RegionErosion,


const Htuple GolayElement, const Htuple Iterations,
const Htuple Rotation )

Erode a region with an element from the Golay alphabet.


erosion_golay erodes a region with the selected element GolayElement from the Golay alphabet. The
following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used, and whether the fore-
ground (even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator golay_elements. The operator works by shifting
the structuring element over the region to be processed (Region). For all positions of the structuring element
fully contained in the region, the corresponding reference point (relative to the structuring element) is added to the
output region. This means that the intersection of all translations of the structuring element within the region is
computed.


The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be eroded.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(3 · F) .

Result
erosion_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
erosion_seq, erosion1, erosion2
See also
dilation_golay, opening_golay, closing_golay, hit_or_miss_golay,
thinning_golay, thickening_golay, golay_elements
Module
Foundation


erosion_rectangle1 ( const Hobject Region, Hobject *RegionErosion,


Hlong Width, Hlong Height )

T_erosion_rectangle1 ( const Hobject Region, Hobject *RegionErosion,


const Htuple Width, const Htuple Height )

Erode a region with a rectangular structuring element.


erosion_rectangle1 applies an erosion with a rectangular structuring element to the input regions Region.
The size of the structuring rectangle is Width × Height. The operator results in reduced regions, and the areas
smaller than the rectangular mask are eliminated.
erosion_rectangle1 is a very fast operation because the height of the rectangle enters only logarithmically
into the runtime complexity, while the width does not enter at all. This leads to excellent runtime efficiency, even
in the case of very large rectangles (edge length > 100).
Regions containing small connecting strips between large areas are only seemingly separated; they remain
logically one region.
Attention
To reduce a region by the same amount in all directions, Width and Height must be odd. If this is not the case,
the region is eroded by a larger amount at the right or at the bottom, respectively, than at the left or at the top.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be eroded.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the structuring rectangle.
Default Value : 11
Suggested values : Width ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the structuring rectangle.
Default Value : 11
Suggested values : Height ∈ {1, 2, 3, 4, 5, 11, 15, 21, 31, 51, 71, 101, 151, 201}
Typical range of values : 1 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
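
A minimal usage sketch (the image name, the threshold values, and the rectangle size are arbitrary):

Hobject Image, Regions, RegionErosion;
read_image(&Image,"scene");                        /* hypothetical image name */
threshold(Image,&Regions,128.0,255.0);
erosion_rectangle1(Regions,&RegionErosion,11,11);  /* eliminates areas smaller than 11x11 */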
Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:

O(√F1 · ld(H)) .

Result
erosion_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm


Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
erosion1, minkowski_sub1
See also
gen_rectangle1
Module
Foundation

erosion_seq ( const Hobject Region, Hobject *RegionErosion,


const char *GolayElement, Hlong Iterations )

T_erosion_seq ( const Hobject Region, Hobject *RegionErosion,


const Htuple GolayElement, const Htuple Iterations )

Erode a region sequentially.


erosion_seq computes the sequential erosion of the input region Region with the selected structuring element
GolayElement from the Golay alphabet. This is done by executing the operator erosion_golay with all
rotations of the structuring element Iterations times. The following structuring elements can be selected:
’l’, ’d’, ’c’, ’f’, ’h’, ’k’.
Only the “foreground elements” (even rotation numbers) are used. The elements ’i’ and ’e’ result in unchanged
output regions. The elements ’l’, ’m’ and ’f2’ are identical for the foreground. The Golay elements, together with
all possible rotations, are described with the operator golay_elements.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "d", "c", "f", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(Iterations · 20 · F) .

Result
erosion_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
erosion_seq is reentrant and automatically parallelized (on tuple level).


Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm
Possible Successors
connection, reduce_domain, select_shape, area_center
Alternatives
erosion_golay, erosion1, erosion2
See also
dilation_seq, hit_or_miss_seq, thinning_seq
Module
Foundation

fitting ( const Hobject Region, const Hobject StructElements,


Hobject *RegionFitted )

T_fitting ( const Hobject Region, const Hobject StructElements,


Hobject *RegionFitted )

Perform a closing after an opening with multiple structuring elements.


fitting performs an opening and a closing successively on the input regions. The eight structuring ele-
ments normally used for this operation can be generated with the operator gen_struct_elements. However,
other user-defined structuring elements can also be used. Let R be the input region(s) and let Mi denote the struc-
turing elements. Furthermore, let P be the result of the opening and Q be the final result. Then the operator can
be formalized as follows:

    P = ⋃_{i=1}^{n} (R ∘ M_i)

    Q = ⋂_{i=1}^{n} (P • M_i)

Regions larger than the structuring elements are preserved, while small gaps are closed.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be processed.
. StructElements (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Structuring elements.
. RegionFitted (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Fitted regions.
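Example
A minimal sketch; the image file and the threshold limits are arbitrary choices for illustration:

Hobject Image,Region,StructElements,RegionFitted;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
/* the eight standard structuring elements for noise elimination */
gen_struct_elements(&StructElements,"noise",1,1);
fitting(Region,StructElements,&RegionFitted);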
Result
fitting returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
fitting is reentrant and processed without parallelization.
Possible Predecessors
gen_struct_elements, gen_region_points
Possible Successors
reduce_domain, select_shape, area_center, connection


Alternatives
opening, closing, connection, select_shape
Module
Foundation

gen_struct_elements ( Hobject *StructElements, const char *Type,


Hlong Row, Hlong Column )

T_gen_struct_elements ( Hobject *StructElements, const Htuple Type,


const Htuple Row, const Htuple Column )

Generate standard structuring elements.


gen_struct_elements serves to generate eight structuring elements normally used in the operator
fitting. The default value ’noise’ of the parameter Type generates elements especially suited for the elim-
ination of noise.
h h h    h x h    h h x    x h h
x x x    h x h    h x h    h x h
h h h    h x h    x h h    h h x
 M1       M2       M3       M4

h h h    h h h    h x h    h x h
x x h    h x x    x x h    h x x
h x h    h x h    h h h    h h h
 M5       M6       M7       M8
Parameter
. StructElements (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Generated structuring elements.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of structuring element to generate.
Default Value : "noise"
List of values : Type ∈ {"noise"}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 1
Suggested values : Row ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 1
Suggested values : Column ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
Result
gen_struct_elements returns H_MSG_TRUE if all parameters are correct. Otherwise, an exception is
raised.
Parallelization Information
gen_struct_elements is reentrant and processed without parallelization.
Possible Successors
fitting, hit_or_miss, opening, closing, erosion2, dilation2
See also
golay_elements


Module
Foundation

golay_elements ( Hobject *StructElement1, Hobject *StructElement2,


const char *GolayElement, Hlong Rotation, Hlong Row, Hlong Column )

T_golay_elements ( Hobject *StructElement1, Hobject *StructElement2,


const Htuple GolayElement, const Htuple Rotation, const Htuple Row,
const Htuple Column )

Generate the structuring elements of the Golay alphabet.


golay_elements generates the structuring elements from the Golay alphabet. The parameter GolayElement
determines the name of the structuring element, while Rotation determines its rotation. The structuring elements
are intended for use in hit_or_miss: In StructElement1 the structuring element for the foreground is
returned, while in StructElement2 the structuring element for the background is returned. Row and Column
determine the reference point of the structuring element.
The rotations are numbered from 0 to 15. This does not mean, however, that there are 16 different rotations: Even
values denote rotations of the foreground elements, while odd values denote rotations of the background elements.
For golay_elements only even values are accepted, and determine the Golay element for
StructElement1. The next larger odd value is used for StructElement2. There are no rotations for
the Golay elements ’h’ and ’i’. Therefore, only the values 0 and 1 are possible as “rotations” (and hence only 0 for
golay_elements). The element ’e’ has only four possible rotations, and hence the rotation must be between 0
and 7 (for golay_elements the values 0, 2, 4, or 6 must be used).
The tables below show the elements of the Golay alphabet with all possible rotations. The characters used have
the following meaning:
• Foreground pixel
◦ Background pixel
· Don’t care pixel
The names of the elements and their rotation numbers are displayed below the respective element. The elements
with even numbers contain the foreground pixels, while the elements with odd numbers contain the background
pixels.
• • •            ◦ ◦ ◦
• • •            ◦ ◦ ◦
• • •            ◦ ◦ ◦
h(0,1)           i(0,1)

· · ·      ◦ ◦ ·      ◦ ◦ ◦      · ◦ ◦
◦ • ◦      ◦ • ·      ◦ • ◦      · • ◦
◦ ◦ ◦      ◦ ◦ ·      · · ·      · ◦ ◦
e(0,1)     e(2,3)     e(4,5)     e(6,7)

[The masks of the elements 'l', 'm', 'd', 'f', 'f2', 'k', and 'c', each with the rotations (0,1) through (14,15), are not reproduced here.]
Parameter

. StructElement1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *


Structuring element for the foreground.
. StructElement2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Structuring element for the background.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the structuring element.
Default Value : "l"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
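Example
A minimal sketch of the intended use together with hit_or_miss; the input region Region is assumed to come from a preceding segmentation:

Hobject Region,StructElement1,StructElement2,RegionHitMiss;

/* foreground/background pair of the element 'l', rotation 0,
   reference point (16,16) */
golay_elements(&StructElement1,&StructElement2,"l",0,16,16);
hit_or_miss(Region,StructElement1,StructElement2,&RegionHitMiss,16,16);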
Result
golay_elements returns H_MSG_TRUE if all parameters are correct. Otherwise, an exception is raised.
Parallelization Information
golay_elements is reentrant and processed without parallelization.


Possible Successors
hit_or_miss
Alternatives
gen_region_points, gen_struct_elements, gen_region_polygon_filled
See also
dilation_golay, erosion_golay, opening_golay, closing_golay, hit_or_miss_golay,
thickening_golay
References
J. Serra: "‘Image Analysis and Mathematical Morphology"’. Volume I. Academic Press, 1982
Module
Foundation

hit_or_miss ( const Hobject Region, const Hobject StructElement1,


const Hobject StructElement2, Hobject *RegionHitMiss, Hlong Row,
Hlong Column )

T_hit_or_miss ( const Hobject Region, const Hobject StructElement1,


const Hobject StructElement2, Hobject *RegionHitMiss,
const Htuple Row, const Htuple Column )

Hit-or-miss operation for regions.


hit_or_miss performs the hit-or-miss-transformation. First, an erosion with the structuring element
StructElement1 is done on the input region Region. Then an erosion with the structuring element
StructElement2 is performed on the complement of the input region. The intersection of the two resulting
regions is the result RegionHitMiss of hit_or_miss.
The hit-or-miss-transformation selects precisely the points for which the conditions given by the structuring ele-
ments StructElement1 and StructElement2 are fulfilled. StructElement1 determines the condition
for the foreground pixels, while StructElement2 determines the condition for the background pixels. In order
to obtain sensible results, StructElement1 and StructElement2 must fit like key and lock. In any case,
StructElement1 and StructElement2 must be disjoint. Row and Column determine the reference point
of the structuring elements.
Structuring elements (StructElement1, StructElement2) can be generated by calling operators like
gen_struct_elements, gen_region_points, etc.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be processed.
. StructElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Erosion mask for the input regions.
. StructElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Erosion mask for the complements of the input regions.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the hit-or-miss operation.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1


. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong


Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 16, 32, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
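Example
A rough simulation of the transformation described above; note that erosion1 uses the center of gravity of the structuring elements as reference point, so this sketch corresponds to hit_or_miss only if Row and Column are chosen accordingly:

my_hit_or_miss(Hobject Region, Hobject StructElement1,
               Hobject StructElement2, Hobject *RegionHitMiss)
{
  Hobject Foreground, Background, BackgroundEroded;
  /* erosion of the input region with the foreground mask */
  erosion1(Region,StructElement1,&Foreground,1);
  /* erosion of the complement with the background mask */
  complement(Region,&Background);
  erosion1(Background,StructElement2,&BackgroundEroded,1);
  /* the intersection of both results is the hit-or-miss result */
  intersection(Foreground,BackgroundEroded,RegionHitMiss);
  clear_obj(Foreground); clear_obj(Background); clear_obj(BackgroundEroded);
}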
Complexity
Let F be the area of an input region, F 1 the area of the structuring element 1, and F 2 the area of the structuring
element 2. Then the runtime complexity for one object is:
O(√F · (√F1 + √F2)) .

Result
hit_or_miss returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
hit_or_miss is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
golay_elements, gen_struct_elements, threshold, regiongrowing, connection,
union1, watersheds, class_ndim_norm
Possible Successors
difference, reduce_domain, select_shape, area_center, connection
Alternatives
hit_or_miss_golay, hit_or_miss_seq, erosion2, dilation2
See also
thinning, thickening, gen_region_points, gen_region_polygon_filled
Module
Foundation

hit_or_miss_golay ( const Hobject Region, Hobject *RegionHitMiss,


const char *GolayElement, Hlong Rotation )

T_hit_or_miss_golay ( const Hobject Region, Hobject *RegionHitMiss,


const Htuple GolayElement, const Htuple Rotation )

Hit-or-miss operation for regions using the Golay alphabet.


hit_or_miss_golay performs the hit-or-miss-transformation for the input regions Region (using struc-
turing elements from the Golay alphabet). First, an erosion with the foreground of the structuring element
GolayElement is done on the input region Region. Then an erosion with the background of the structur-
ing element GolayElement is performed on the complement of the input region. The intersection of the two
resulting regions is the result RegionHitMiss of hit_or_miss_golay. The following structuring elements
are available:
’l’, ’m’, ’d’, ’c’, ’e’,’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The hit-or-miss-
transformation selects precisely the points for which the conditions given by the selected Golay element are ful-
filled.
Attention
Not all values of Rotation are valid for any Golay element.


Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the hit-or-miss operation.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
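Example
A minimal call; the image file and the threshold limits are arbitrary choices for illustration:

Hobject Image,Region,RegionHitMiss;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
/* hit-or-miss with the element 'l' in its basic orientation */
hit_or_miss_golay(Region,&RegionHitMiss,"l",0);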
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · F) .

Result
hit_or_miss_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
hit_or_miss_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
hit_or_miss_seq, hit_or_miss
See also
erosion_golay, dilation_golay, opening_golay, closing_golay, thinning_golay,
thickening_golay, golay_elements
Module
Foundation

hit_or_miss_seq ( const Hobject Region, Hobject *RegionHitMiss,


const char *GolayElement )

T_hit_or_miss_seq ( const Hobject Region, Hobject *RegionHitMiss,


const Htuple GolayElement )

Hit-or-miss operation for regions using the Golay alphabet (sequential).


hit_or_miss_seq performs the hit-or-miss-transformation for the input regions Region using all rotations
of a structuring element from the Golay alphabet. The result of the operator is the union of all intermediate results
of the respective rotations. The following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The Golay elements, together with all possible rotations, are described with the operator golay_elements.


Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the hit-or-miss operation.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
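Example
A possible simulation for the element 'l' (all eight foreground rotations exist for 'l'), built from hit_or_miss_golay and union2:

my_hit_or_miss_seq_l(Hobject Region, Hobject *RegionHitMiss)
{
  Hobject Partial, Tmp, Result;
  Hlong   Rotation;
  /* start with the first rotation ... */
  hit_or_miss_golay(Region,&Result,"l",0);
  /* ... and accumulate the union over the remaining rotations */
  for (Rotation=2; Rotation<=14; Rotation+=2)
  {
    hit_or_miss_golay(Region,&Partial,"l",Rotation);
    union2(Result,Partial,&Tmp);
    clear_obj(Result); clear_obj(Partial);
    Result = Tmp;
  }
  *RegionHitMiss = Result;
}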
Complexity
Let F be the area of an input region, and R be the number of rotations. Then the runtime complexity for one region
is:

O(R · 6 · F) .

Result
hit_or_miss_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
hit_or_miss_seq is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
hit_or_miss_golay, hit_or_miss
See also
thinning_seq, thickening_seq
Module
Foundation

minkowski_add1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkAdd, Hlong Iterations )

T_minkowski_add1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkAdd, const Htuple Iterations )

Perform a Minkowski addition on a region.


minkowski_add1 dilates the input regions with a structuring element. By applying minkowski_add1 to a
region, its boundary gets smoothed. In the process, the area of the region is enlarged. Furthermore, disconnected
regions may be merged. Such regions, however, remain logically distinct regions. The Minkowski addition is a
set-theoretic region operation. It is based on translations and union operations.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_v(R) denote the translation of a
region R by a vector v. Then

    minkowski_add1(R, M) := ⋃_{m ∈ M} t_{v_m}(R)


For each point m in M a translation of the region R is performed. The union of all these translations is the
Minkowski addition of R with M . minkowski_add1 is similar to the operator dilation1, the difference
is that in dilation1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator union1 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkAdd (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
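Example
A minimal sketch illustrating the note above: regions that touch after the dilation stay separate objects until they are merged explicitly (circle radius and threshold limits are arbitrary choices):

Hobject Image,Region,Regions,Circle,RegionMinkAdd,UnionReg,Connected;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
connection(Region,&Regions);
gen_circle(&Circle,100.0,100.0,3.5);
/* dilate each region; touching results remain separate objects */
minkowski_add1(Regions,Circle,&RegionMinkAdd,1);
/* merge touching regions and re-split into connected components */
union1(RegionMinkAdd,&UnionReg);
connection(UnionReg,&Connected);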
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
minkowski_add1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
minkowski_add1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add2, dilation1
See also
transpose_region, minkowski_sub1
Module
Foundation


minkowski_add2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkAdd, Hlong Row, Hlong Column, Hlong Iterations )

T_minkowski_add2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkAdd, const Htuple Row, const Htuple Column,
const Htuple Iterations )

Dilate a region (using a reference point).


minkowski_add2 computes the Minkowski addition of the input regions with a structuring element
(StructElement) having the reference point (Row,Column). minkowski_add2 has a similar effect as
minkowski_add1, the difference is that the reference point of the structuring element can be chosen arbitrarily.
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
An empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator union1 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkAdd (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Typical range of values : 1 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Typical range of values : 1 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
minkowski_add2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)


• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
minkowski_add2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add1, dilation1
See also
transpose_region
Module
Foundation

minkowski_sub1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkSub, Hlong Iterations )

T_minkowski_sub1 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkSub, const Htuple Iterations )

Erode a region.
minkowski_sub1 computes the Minkowski subtraction of the input regions with a structuring element. By
applying minkowski_sub1 to a region, its boundary gets smoothed. In the process, the area of the region is
reduced. Furthermore, connected regions may be split. Such regions, however, remain logically one region. The
Minkowski subtraction is a set-theoretic region operation. It uses the intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_v(R) denote the translation of a
region R by a vector v. Then

    minkowski_sub1(R, M) := ⋂_{m ∈ M} t_{v_m}(R)

For each point m in M a translation of the region R is performed. The intersection of all these translations is the
Minkowski subtraction of R with M . minkowski_sub1 is similar to the operator erosion1, the difference
is that in erosion1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.


. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
minkowski_sub1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
minkowski_sub1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_sub2, erosion1
See also
transpose_region
Module
Foundation

minkowski_sub2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkSub, Hlong Row, Hlong Column, Hlong Iterations )

T_minkowski_sub2 ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMinkSub, const Htuple Row, const Htuple Column,
const Htuple Iterations )

Erode a region (using a reference point).


minkowski_sub2 computes the Minkowski subtraction of the input regions with a structuring element
(StructElement) having the reference point (Row,Column). minkowski_sub2 has a similar effect as
minkowski_sub1, the difference is that the reference point of the structuring element can be chosen arbitrarily.
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
A maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.


Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Suggested values : Row ∈ {0, 10, 16, 32, 64, 100, 128}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Suggested values : Column ∈ {0, 10, 16, 32, 64, 100, 128}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(√F1 · √F2 · Iterations) .

Result
minkowski_sub2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
minkowski_sub2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm, gen_circle, gen_ellipse,
gen_rectangle1, gen_rectangle2, draw_region, gen_region_points,
gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_sub1, erosion1, erosion2, erosion_golay, erosion_seq
See also
gen_circle, gen_rectangle2, gen_region_polygon
Module
Foundation


morph_hat ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMorphHat )

T_morph_hat ( const Hobject Region, const Hobject StructElement,


Hobject *RegionMorphHat )

Compute the union of bottom_hat and top_hat.


morph_hat computes the union of the regions that are removed by an opening operation with the regions that
are added by a closing operation. Hence this is the union of the results of top_hat and bottom_hat. The
position of StructElement does not influence the result.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
The individual regions are processed separately.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionMorphHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Union of top hat and bottom hat.
Example

my_morph_hat(Hobject In, Hobject StructElement, Hobject *Out)
{
  Hobject top, bottom;
  top_hat(In,StructElement,&top);
  bottom_hat(In,StructElement,&bottom);
  union2(top,bottom,Out);
  clear_obj(top); clear_obj(bottom);
}

Result
morph_hat returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
morph_hat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
top_hat, bottom_hat, union2
See also
opening, closing
Module
Foundation


morph_skeleton ( const Hobject Region, Hobject *RegionSkeleton )


T_morph_skeleton ( const Hobject Region, Hobject *RegionSkeleton )

Compute the morphological skeleton of a region.


morph_skeleton computes the skeleton of the input regions (Region) using morphological transformations.
The computation yields a disconnected skeleton (gaps in the diagonals) having a width of one or two pixels. The
calculation uses the Golay element ’h’, i.e., an 8-neighborhood. This is equivalent to the maximum-norm.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be processed.
. RegionSkeleton (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting morphological skeleton.
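Example
A minimal call sequence; the image file and the threshold limits are arbitrary choices for illustration:

Hobject Image,Region,RegionSkeleton;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
/* morphological skeleton (8-neighborhood; may contain diagonal gaps) */
morph_skeleton(Region,&RegionSkeleton);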
Result
morph_skeleton returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
morph_skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
skeleton, reduce_domain, select_shape, area_center, connection
Alternatives
skeleton, thinning
See also
thinning_seq, morph_skiz
Module
Foundation

morph_skiz ( const Hobject Region, Hobject *RegionSkiz,


Hlong Iterations1, Hlong Iterations2 )

T_morph_skiz ( const Hobject Region, Hobject *RegionSkiz,


const Htuple Iterations1, const Htuple Iterations2 )

Thinning of a region.
morph_skiz first performs a sequential thinning ( thinning_seq) of the input region with the element ’l’ of
the Golay alphabet. The number of iterations is determined by the parameter Iterations1. Then a sequential
thinning of the resulting region with the element ’e’ of the Golay alphabet is carried out. The number of iterations
for this step is determined by the parameter Iterations2. The skiz operation serves to compute a kind of
skeleton of the input regions, and to prune the branches of the resulting skeleton. If the skiz operation is applied to
the complement of the region, the region and the resulting skeleton are separated.
If very large values or ’maximal’ are passed for Iterations1 or Iterations2, the processing stops if no
more changes occur.


Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be thinned.
. RegionSkiz (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the skiz operator.
. Iterations1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations for the sequential thinning with the element ’l’ of the Golay alphabet.
Default Value : 100
Suggested values : Iterations1 ∈ {"maximal", 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 40, 50, 70, 100, 150, 200,
300, 400}
Typical range of values : 0 ≤ Iterations1 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations for the sequential thinning with the element ’e’ of the Golay alphabet.
Default Value : 1
Suggested values : Iterations2 ∈ {"maximal", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 0 ≤ Iterations2 (lin)
Minimum Increment : 1
Recommended Increment : 1
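Example
A minimal sketch of the use mentioned above, applying the skiz operation to the complement in order to separate the regions (threshold limits and iteration counts are arbitrary choices):

Hobject Image,Region,Background,RegionSkiz;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
complement(Region,&Background);
/* skiz of the background: separating lines between the regions */
morph_skiz(Background,&RegionSkiz,100,1);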
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is

O((Iterations1 + Iterations2) · 3 · F) .

Result
morph_skiz returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
morph_skiz is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
pruning, reduce_domain, select_shape, area_center, connection, background_seg,
complement
Alternatives
skeleton, thinning_seq, morph_skeleton, interjacent
See also
thinning, hit_or_miss_seq, difference
Module
Foundation

opening ( const Hobject Region, const Hobject StructElement,


Hobject *RegionOpening )

T_opening ( const Hobject Region, const Hobject StructElement,


Hobject *RegionOpening )

Open a region.


An opening operation is defined as an erosion followed by a Minkowski addition. By applying opening to a


region, larger structures remain mostly intact, while small structures like lines or points are eliminated. In contrast,
a closing operation results in small gaps being retained or filled up (see closing).
opening serves to eliminate small regions (smaller than StructElement) and to smooth the boundaries of a
region. The position of StructElement is meaningless, since an opening operation is invariant with respect to
the choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be opened.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Opened regions.
Example

/* simulation of opening */
my_opening(Hobject In, Hobject StructElement, Hobject *Out)
{
  Hobject H;
  erosion1(In,StructElement,&H,1);
  minkowski_add1(H,StructElement,Out,1);
  clear_obj(H);
}

/* Large regions in an aerial picture (beech trees or meadows): */


read_image(&Image,"wald1");
threshold(Image,&Light,80.0,255.0);
gen_circle(&StructElement1,100.0,100.0,2.0);
gen_circle(&StructElement2,100.0,100.0,20.0);
/* close the small gap */
closing(Light,StructElement1,&H);
/* selecting the large regions */
opening(H,StructElement2,&Large);

/* Selection of edges with a certain orientation: */

read_image(&Image,"fabrik");
sobel_amp(Image,&Sobel,"sum_abs",3);
threshold(Sobel,&Edges,30.0,255.0);
gen_rectangle2(&StructElement,100.0,100.0,3.07819,20.0,1.0);
opening(Edges,StructElement,&Direction);

Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:
O(2 · √F1 · √F2) .

Result
opening returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)


Otherwise, an exception is raised.


Parallelization Information
opening is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add1, erosion1, opening_circle
See also
gen_circle, gen_rectangle2, gen_region_polygon
Module
Foundation

opening_circle ( const Hobject Region, Hobject *RegionOpening,


double Radius )

T_opening_circle ( const Hobject Region, Hobject *RegionOpening,


const Htuple Radius )

Open a region with a circular structuring element.


opening_circle is defined as an erosion followed by a Minkowski addition with a circular structuring element
(see example). opening_circle serves to eliminate small regions (smaller than the circular structuring element) and to
smooth the boundaries of a region.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be opened.
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Opened regions.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 3.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Typical range of values : 0.5 ≤ Radius ≤ 511.5 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Example

/* simulation of opening_circle */
my_opening_circle(Hobject In, double Radius, Hobject *Out)
{
  Hobject Circle, tmp;
  gen_circle(&Circle,100.0,100.0,Radius);
  erosion1(In,Circle,&tmp,1);
  minkowski_add1(tmp,Circle,Out,1);
  clear_obj(Circle); clear_obj(tmp);
}

/* Large regions in an aerial picture (beech trees or meadows): */


read_image(&Image,"wald1");
threshold(Image,&Light,80.0,255.0);
/* close the small gap */


closing_circle(Light,&H,2.5);
/* selecting the large regions */
opening_circle(H,&Large,20.5);

Complexity
Let F 1 be the area of the input region. Then the runtime complexity for one region is:

O(4 · F 1 · Radius) .

Result
opening_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
opening_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
opening, dilation1, minkowski_add1, gen_circle
See also
transpose_region
Module
Foundation

opening_golay ( const Hobject Region, Hobject *RegionOpening,


const char *GolayElement, Hlong Rotation )

T_opening_golay ( const Hobject Region, Hobject *RegionOpening,


const Htuple GolayElement, const Htuple Rotation )

Open a region with an element from the Golay alphabet.


opening_golay is defined as a Minkowski subtraction followed by a Minkowski addition. First the Minkowski
subtraction of the input region (Region) with the structuring element from the Golay alphabet defined by
GolayElement and Rotation is computed. Then the Minkowski addition of the result and the structuring
element rotated by 180◦ is performed.
The following structuring elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used, and whether the fore-
ground (even) or background version (odd) of the selected element should be used. The Golay elements, together
with all possible rotations, are described with the operator golay_elements.
opening_golay serves to eliminate regions smaller than the structuring element, and to smooth regions’ bound-
aries.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.


Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be opened.
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Opened regions.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
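Example
A minimal call; the element 'h' with rotation 0 corresponds to an opening with a 3 x 3 square (image file and threshold limits are arbitrary choices):

Hobject Image,Region,RegionOpening;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
opening_golay(Region,&RegionOpening,"h",0);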
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · F) .

Result
opening_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
opening_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
opening_seg, opening
See also
erosion_golay, dilation_golay, closing_golay, hit_or_miss_golay,
thinning_golay, thickening_golay, golay_elements
Module
Foundation

opening_rectangle1 ( const Hobject Region, Hobject *RegionOpening,


Hlong Width, Hlong Height )

T_opening_rectangle1 ( const Hobject Region, Hobject *RegionOpening,


const Htuple Width, const Htuple Height )

Open a region with a rectangular structuring element.


opening_rectangle1 performs an erosion_rectangle1 followed by a dilation_rectangle1 on
the input region Region. The size of the rectangular structuring element is determined by the parameters Width
and Height. As is the case for all opening variants, larger structures are preserved, while small regions like
lines or points are eliminated.


Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be opened.
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Opened regions.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong / double
Width of the structuring rectangle.
Default Value : 10
Suggested values : Width ∈ {1, 2, 3, 4, 5, 7, 9, 12, 15, 19, 25, 33, 45, 60, 110, 150, 200}
Typical range of values : 1 ≤ Width ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong / double
Height of the structuring rectangle.
Default Value : 10
Suggested values : Height ∈ {1, 2, 3, 4, 5, 7, 9, 12, 15, 19, 25, 33, 45, 60, 110, 150, 200}
Typical range of values : 1 ≤ Height ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
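Example
A minimal sketch that keeps only structures which are at least 30 pixels wide and 3 pixels high (image file, threshold limits, and rectangle size are arbitrary choices):

Hobject Image,Region,RegionOpening;

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
opening_rectangle1(Region,&RegionOpening,30,3);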
Complexity
Let F 1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one
region is:

O(2 · F 1 · ld(H)) .

Result
opening_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
opening_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
opening, gen_rectangle1, dilation_rectangle1, erosion_rectangle1
See also
opening_seg, opening_golay
Module
Foundation

opening_seg ( const Hobject Region, const Hobject StructElement,


Hobject *RegionOpening )

T_opening_seg ( const Hobject Region, const Hobject StructElement,


Hobject *RegionOpening )

Separate overlapping regions.


The opening_seg operation is defined as a sequence of the following operators: erosion1, connection
and dilation1 (see example). Only one iteration is done in erosion1 and dilation1.
opening_seg serves to separate overlapping regions whose area of overlap is smaller than StructElement.
It should be noted that the resulting regions can overlap without actually merging (see expand_region).
opening_seg uses the center of gravity as the reference point of the structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be opened.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Opened regions.
Example

/* Simulation of opening_seg */
my_opening_seg(Hobject Region, Hobject StructElement, Hobject *Opening)
{
  Hobject H1,H2;
  erosion1(Region,StructElement,&H1,1);
  connection(H1,&H2);
  dilation1(H2,StructElement,Opening,1);
  clear_obj(H1); clear_obj(H2);
}

/* separation of circular objects */


gen_random_regions(&Regions,"circle",8.5,10.5,0.0,0.0,0.0,0.0,400,512,512);
union1(Regions,&UnionReg);
gen_circle(&Mask,100,100,8.5);
opening_seg(UnionReg,Mask,&RegionsNew);

Complexity
Let F 1 be the area of the input region, and F 2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · √(√F1)) .

Result
opening_seg returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
opening_seg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
expand_region, reduce_domain, select_shape, area_center, connection


Alternatives
erosion1, connection, dilation1
Module
Foundation

pruning ( const Hobject Region, Hobject *RegionPrune, Hlong Length )


T_pruning ( const Hobject Region, Hobject *RegionPrune,
const Htuple Length )

Prune the branches of a region.


pruning removes branches from a skeleton (Region) having a length less than Length. All other branches
are preserved.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be processed.
. RegionPrune (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the pruning operation.
. Length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Length of the branches to be removed.
Default Value : 2
Suggested values : Length ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Length ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
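The following sketch illustrates a typical call sequence; the image Image and the threshold values are merely
placeholders, and the skeleton is computed with skeleton beforehand:

/* skeletonize a segmented region and remove short branches */
Hobject Region,Skeleton,Pruned;

threshold(Image,&Region,128,255);
skeleton(Region,&Skeleton);
pruning(Skeleton,&Pruned,5);   /* branches with a length less than 5 are removed */
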
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is

O(Length · 3 · F) .

Result
pruning returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
pruning is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
morph_skiz, skeleton, thinning_seq
Possible Successors
reduce_domain, select_shape, area_center, connection
See also
morph_skeleton, junctions_skeleton
Module
Foundation


thickening ( const Hobject Region, const Hobject StructElement1,
             const Hobject StructElement2, Hobject *RegionThick, Hlong Row,
             Hlong Column, Hlong Iterations )

T_thickening ( const Hobject Region, const Hobject StructElement1,
               const Hobject StructElement2, Hobject *RegionThick, const Htuple Row,
               const Htuple Column, const Htuple Iterations )

Add the result of a hit-or-miss operation to a region.


thickening performs a thickening of the input regions using morphological operations. The operator first
applies a hit-or-miss-transformation to Region (cf. hit_or_miss), and then adds the detected points to the
input region. The parameter Iterations determines the number of iterations performed.
For the choice of the structuring elements StructElement1 and StructElement2, as well as for Row and
Column, the same restrictions described under hit_or_miss apply.
The structuring elements (StructElement1 and StructElement2) can be generated by calling
golay_elements, for example.
Attention
If the reference point is contained in StructElement1 the input region remains unchanged.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the foreground.
. StructElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the background.
. RegionThick (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the thickening operator.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 16
Suggested values : Row ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 16
Suggested values : Column ∈ {0, 2, 4, 8, 16, 32, 128}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50, 70, 100, 200, 400}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
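A sketch of one possible call; Region is assumed to stem from a previous segmentation, and the parameter order
of golay_elements (StructElement1, StructElement2, GolayElement, Rotation, Row, Column) is assumed here:

/* one thickening iteration with hit-or-miss masks from the Golay alphabet */
Hobject Region,Se1,Se2,RegionThick;

golay_elements(&Se1,&Se2,"h",0,16,16);
thickening(Region,Se1,Se2,&RegionThick,16,16,1);
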
Complexity
Let F be the area of an input region, F1 the area of the structuring element 1, and F2 the area of the structuring
element 2. Then the runtime complexity for one object is:

O(Iterations · √F · (√F1 + √F2)) .

Result
thickening returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:


• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thickening is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
golay_elements, threshold, regiongrowing, connection, union1, watersheds,
class_ndim_norm, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2,
draw_region, gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thickening_golay, thickening_seq
See also
hit_or_miss
Module
Foundation

thickening_golay ( const Hobject Region, Hobject *RegionThick,
                   const char *GolayElement, Hlong Rotation )

T_thickening_golay ( const Hobject Region, Hobject *RegionThick,
                     const Htuple GolayElement, const Htuple Rotation )

Add the result of a hit-or-miss operation to a region (using a Golay structuring element).
thickening_golay performs a thickening of the input regions using morphological operations and structur-
ing elements from the Golay alphabet. The operator first applies a hit-or-miss-transformation to Region (cf.
hit_or_miss_golay), and then adds the detected points to the input region. The following structuring ele-
ments are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThick (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the thickening operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
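A minimal sketch of a typical call, assuming Region stems from a previous segmentation:

/* one thickening step with the Golay element 'h', rotation 0 */
Hobject Region,RegionThick;

thickening_golay(Region,&RegionThick,"h",0);
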
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · F) .


Result
thickening_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thickening_golay is reentrant and automatically parallelized (on tuple level).
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thickening, thickening_seq
See also
erosion_golay, hit_or_miss_golay
Module
Foundation

thickening_seq ( const Hobject Region, Hobject *RegionThick,
                 const char *GolayElement, Hlong Iterations )

T_thickening_seq ( const Hobject Region, Hobject *RegionThick,
                   const Htuple GolayElement, const Htuple Iterations )

Add the result of a hit-or-miss operation to a region (sequential).


thickening_seq calculates the sequential thickening of the input regions with a structuring element from the
Golay alphabet (GolayElement). To do so, thickening_seq calls the operator thickening_golay
with all possible rotations of the structuring element Iterations times. The following structuring elements are
available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The Golay elements, together with all possible rotations, are described with the operator golay_elements.
For all elements of the Golay alphabet, except for ’c’, the foreground and background masks are exchanged so
that they have an effect on the outer boundary of the region. The element ’c’ can be used to generate the
convex hull of the input region if enough iterations are performed.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThick (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the thickening operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50, 70, 100, 200}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
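A minimal sketch, assuming Region stems from a previous segmentation; the iteration count is only a placeholder:

/* approximate the convex hull of a region with the element 'c' */
Hobject Region,RegionThick;

thickening_seq(Region,&RegionThick,"c",100);
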
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:



O(Iterations · 6 · F) .

Result
thickening_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thickening_seq is reentrant and automatically parallelized (on tuple level).
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thickening_golay, thickening
See also
erosion_golay, thinning_seq
Module
Foundation

thinning ( const Hobject Region, const Hobject StructElement1,
           const Hobject StructElement2, Hobject *RegionThin, Hlong Row,
           Hlong Column, Hlong Iterations )

T_thinning ( const Hobject Region, const Hobject StructElement1,
             const Hobject StructElement2, Hobject *RegionThin, const Htuple Row,
             const Htuple Column, const Htuple Iterations )

Remove the result of a hit-or-miss operation from a region.


thinning performs a thinning of the input regions using morphological operations. The operator first applies a
hit-or-miss-transformation to Region (cf. hit_or_miss), and then removes the detected points from the input
region. The parameter Iterations determines the number of iterations performed.
For the choice of the structuring elements StructElement1 and StructElement2, as well as for Row and
Column, the same restrictions described under hit_or_miss apply.
Structuring elements (StructElement1, StructElement2) can be generated with operators such
as gen_circle, gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region,
gen_region_polygon, gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the foreground.
. StructElement2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element for the background.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the thinning operator.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1


. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
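A sketch of one possible call; Region is assumed to stem from a previous segmentation, and the parameter order
of golay_elements (StructElement1, StructElement2, GolayElement, Rotation, Row, Column) is assumed here:

/* one thinning iteration with hit-or-miss masks from the Golay alphabet */
Hobject Region,Se1,Se2,RegionThin;

golay_elements(&Se1,&Se2,"l",0,16,16);
thinning(Region,Se1,Se2,&RegionThin,16,16,1);
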
Complexity
Let F be the area of an input region, F1 the area of the structuring element 1, and F2 the area of the structuring
element 2. Then the runtime complexity for one object is:

O(Iterations · √F · (√F1 + √F2)) .

Result
thinning returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thinning is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thinning_golay, thinning_seq
See also
hit_or_miss
Module
Foundation

thinning_golay ( const Hobject Region, Hobject *RegionThin,
                 const char *GolayElement, Hlong Rotation )

T_thinning_golay ( const Hobject Region, Hobject *RegionThin,
                   const Htuple GolayElement, const Htuple Rotation )

Remove the result of a hit-or-miss operation from a region (using a Golay structuring element).
thinning_golay performs a thinning of the input regions using morphological operations and structuring
elements from the Golay alphabet. The operator first applies a hit-or-miss-transformation to Region (cf.
hit_or_miss_golay), and then removes the detected points from the input region. The following structuring
elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.


The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the thinning operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
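A minimal sketch of a typical call, assuming Region stems from a previous segmentation:

/* one thinning step with the Golay element 'h', rotation 0 */
Hobject Region,RegionThin;

thinning_golay(Region,&RegionThin,"h",0);
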
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(6 · F) .

Result
thinning_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thinning_golay is reentrant and automatically parallelized (on tuple level).
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
thinning_seq, thinning
See also
erosion_golay, hit_or_miss_golay
Module
Foundation

thinning_seq ( const Hobject Region, Hobject *RegionThin,
               const char *GolayElement, Hlong Iterations )

T_thinning_seq ( const Hobject Region, Hobject *RegionThin,
                 const Htuple GolayElement, const Htuple Iterations )

Remove the result of a hit-or-miss operation from a region (sequential).


thinning_seq calculates the sequential thinning of the input regions with a structuring element from the Golay
alphabet (GolayElement). To do so, thinning_seq calls the operator thinning_golay with all possible
rotations of the structuring element Iterations times. If Iterations is chosen large enough, the operator
calculates the skeleton of a region if the structuring elements ’l’ or ’m’ are used. For the element ’c’ the background
and foreground are exchanged in order to have an effect on the interior boundary of a region. If a very large value
or ’maximal’ is passed for Iterations, the iteration stops as soon as no more changes occur. The following structuring
elements are available:


’l’ Skeleton, similar to skeleton. This structuring element is also used in morph_skiz.
’m’ A skeleton with many “hairs” and multiple (parallel) branches.
’d’ A skeleton without multiple branches, but with many gaps, similar to morph_skeleton.
’c’ Uniform erosion of the region.
’e’ One pixel wide lines are shortened. This structuring element is also used in morph_skiz.
’i’ Isolated points are removed. (Only Iterations = 1 is useful.)
’f’ Y-junctions are eliminated. (Only Iterations = 1 is useful.)
’f2’ One pixel long branches and corners are removed. (Only Iterations = 1 is useful.)
’h’ A kind of inner boundary is generated, which, however, is thicker than the result of boundary. (Only
Iterations = 1 is useful.)
’k’ Junction points are eliminated, but also new ones are generated.

The Golay elements, together with all possible rotations, are described with the operator golay_elements.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the thinning operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "l"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations. For ’f’, ’f2’, ’h’ and ’i’ the only useful value is 1.
Default Value : 20
Suggested values : Iterations ∈ {"maximal", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 70, 100, 150,
200}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
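A minimal sketch, assuming Region stems from a previous segmentation; the iteration count is only a placeholder:

/* compute a skeleton-like thinning with the element 'l' */
Hobject Region,RegionThin;

thinning_seq(Region,&RegionThin,"l",50);
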
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:

O(Iterations · 6 · F) .

Result
thinning_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
thinning_seq is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
pruning, reduce_domain, select_shape, area_center, connection, complement
Alternatives
skeleton, morph_skiz, expand_region


See also
hit_or_miss_seq, erosion_golay, difference, thinning_golay, thinning,
thickening_seq
Module
Foundation

top_hat ( const Hobject Region, const Hobject StructElement,
          Hobject *RegionTopHat )

T_top_hat ( const Hobject Region, const Hobject StructElement,
            Hobject *RegionTopHat )

Compute the top hat of regions.


top_hat computes the opening of Region with StructElement. The difference between the original
region and the result of the opening is called the top hat. In contrast to opening, which splits regions under
certain circumstances, top_hat computes the regions removed by such a splitting.
The position of StructElement is meaningless, since an opening operation is invariant with respect to the
choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position independent).
. RegionTopHat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the top hat operator.
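A minimal sketch of a typical call, assuming Region stems from a previous segmentation; the circle parameters
are only placeholders:

/* top hat with a circular structuring element */
Hobject Region,StructElement,RegionTopHat;

gen_circle(&StructElement,100,100,3.5);
top_hat(Region,StructElement,&RegionTopHat);
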
Result
top_hat returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:

• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
top_hat is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm,
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region,
gen_region_points, gen_struct_elements, gen_region_polygon_filled
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
opening, difference
See also
bottom_hat, morph_hat, gray_tophat, opening
Module
Foundation



Chapter 10

OCR

10.1 Hyperboxes

close_all_ocrs ( )
T_close_all_ocrs ( )

Destroy all OCR classifiers.


close_all_ocrs deletes all OCR classifiers and frees the used memory. All trained data will be lost.
Attention
close_all_ocrs exists solely for the purpose of implementing the “reset program” functionality in HDevelop.
close_all_ocrs must not be used in any application.
Result
If it is possible to close the OCR classifiers, the operator close_all_ocrs returns the value H_MSG_TRUE.
Otherwise an exception is raised.
Parallelization Information
close_all_ocrs is processed completely exclusively without parallelization.
Alternatives
close_ocr
Module
OCR/OCV

close_ocr ( Hlong OcrHandle )


T_close_ocr ( const Htuple OcrHandle )

Deallocation of the memory of an OCR classifier.


The operator close_ocr deallocates the memory of the classifier referred to by OcrHandle. Hereby all
corresponding data will be deleted. However, if necessary, they can be saved in advance using the operator
write_ocr. The handle OcrHandle becomes invalid after the call; however, the system may later reuse it
for new classifiers.
Attention
All data of the classifier will be deleted in main memory (not on the hard disk).
Parameter

. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Hlong
ID of the OCR classifier to be deleted.


Example

HTuple OcrHandle,Class,Confidence;
long   ocr_handle;

read_ocr("testnet",&ocr_handle);
/* image processing */
create_tuple(&OcrHandle,1);
set_i(OcrHandle,ocr_handle,0);
T_do_ocr_multi(Character,Image,OcrHandle,&Class,&Confidence);
close_ocr(ocr_handle);

Result
If the parameter OcrHandle is valid, the operator close_ocr returns the value H_MSG_TRUE. Otherwise an
exception will be raised.
Parallelization Information
close_ocr is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
Possible Successors
read_ocr
Module
OCR/OCV

T_create_ocr_class_box ( const Htuple WidthPattern,
                         const Htuple HeightPattern, const Htuple Interpolation,
                         const Htuple Features, const Htuple Character, Htuple *OcrHandle )

Create a new OCR-classifier.


The operator create_ocr_class_box creates a new OCR classifier. For a description of this classi-
fier see operator learn_class_box. This classifier must then be trained with the help of the operators
traind_ocr_class_box or trainf_ocr_class_box.
The parameters WidthPattern and HeightPattern indicate the size of the input-layer of the network. This
size is used for the features ’projection_horizontal’, ’projection_vertical’, ’pixel’, ’pixel_invar’, and ’pixel_binary’
to transform the character to a standard size. The bigger the standard size is, the more characters can be
distinguished. However, the amount of time necessary for training (as well as the number of required training
samples) and the time necessary for recognition will increase as well. The parameter Interpolation
indicates the interpolation mode concerning the adaptation of characters in the image to the network. For more
detailed information on this parameter see also affine_trans_image. The value 0 results in the same inter-
polation as ’none’ in affine_trans_image, i.e., no interpolation is performed. For 1, the same behavior as
’constant’ in affine_trans_image is obtained, i.e., equally weighted interpolation between adjacent pixels
is used. Finally, 2 results in the same interpolation as ’weighted’, i.e., Gaussian interpolation between adjacent
pixels is used. The parameter Interpolation must be chosen such that no aliasing occurs when the character
is scaled to the standard size. Typically, this means that Interpolation should be set to 1, except in cases
where the characters are scaled down by a large amount, in which case Interpolation = 2 should be chosen.
Interpolation = 0 should only be chosen if the characters will not be scaled.
The parameter Character determines all the characters which have to be recognized. Normally the passed
strings consist of one character each (e.g., the alphabet), but strings of any length can also be learned. The number of
distinguishable characters (number of strings in Character) is limited to 2048.
The parameter Features is used to choose additional features besides the gray values in order to recognize characters.
By using ’default’, the features ’ratio’ and ’pixel_invar’ will be set.
The following features are available:

’ratio’ Ratio of the character.


’width’ Width of the character (not invariant to scaling).


’height’ Height of the character (not invariant to scaling).
’zoom_factor’ Difference in size between the current character and the values of WidthPattern and
HeightPattern (not invariant to scaling).
’foreground’ Relative number of pixels in the foreground.
’foreground_grid_9’ Relative number of foreground pixels in a 3 × 3 grid within the surrounding rectangle of the
character.
’foreground_grid_16’ Relative number of foreground pixels in a 4 × 4 grid within the surrounding rectangle of
the character.
’anisometry’ Form feature anisometry.
’compactness’ Form feature compactness.
’convexity’ Form feature convexity.
’moments_region_2nd_invar’ Normed 2nd geometric moments of the region. See also
moments_region_2nd_invar.
’moments_region_2nd_rel_invar’ Normed 2nd relative geometric moments of the region. See also
moments_region_2nd_rel_invar.
’moments_region_3rd_invar’ Normed 3rd geometric moments of the region. See also
moments_region_3rd_invar.
’moments_central’ Normed central geometric moments of the region. See also moments_region_central.
’phi’ Sine and cosine of the orientation (angle) of the character.
’num_connect’ Number of connected components.
’num_holes’ Number of holes.
’projection_horizontal’ Horizontal projection of the gray values.
’projection_horizontal_invar’ Horizontal projection of the gray values which are automatically scaled to maximum
range.
’projection_vertical’ Vertical projection of the gray values.
’projection_vertical_invar’ Vertical projection of the gray values which are automatically scaled to maximum range.
’cooc’ Values of the binary cooccurrence matrix.
’moments_gray_plane’ Normed gray value moments and the angles of the gray value level.
’num_runs’ Number of chords in the region normed to the area.
’chord_histo’ Frequency of the chords per row.
’pixel’ Gray value of the character.
’pixel_invar’ Gray values of the character with automatic maximal scaling of the gray values.
’pixel_binary’ Region of the character as a binary image zoomed to a size of WidthPattern ×
HeightPattern.
’gradient_8dir’ Gradients are computed on the character image. The gradient directions are discretized into 8
directions. The amplitude image is decomposed into 8 channels according to these discretized directions. 25
samples on a 5 × 5 grid are extracted from each channel. These samples are used as features.

Parameter
. WidthPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the input layer of the network.
Default Value : 8
Suggested values : WidthPattern ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 1 ≤ WidthPattern ≤ 100
. HeightPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the input layer of the network.
Default Value : 10
Suggested values : HeightPattern ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 1 ≤ HeightPattern ≤ 100


. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Interpolation mode concerning scaling of characters.
Default Value : 1
List of values : Interpolation ∈ {0, 1, 2}
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Additional features.
Default Value : "default"
List of values : Features ∈ {"default", "zoom_factor", "ratio", "width", "height", "foreground",
"foreground_grid_9", "foreground_grid_16", "anisometry", "compactness", "convexity",
"moments_region_2nd_invar", "moments_region_2nd_rel_invar", "moments_region_3rd_invar",
"moments_central", "phi", "num_connect", "num_holes", "projection_horizontal", "projection_vertical",
"projection_horizontal_invar", "projection_vertical_invar", "chord_histo", "num_runs", "pixel", "pixel_invar",
"pixel_binary", "gradient_8dir", "cooc", "moments_gray_plane"}
. Character (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
All characters of a set.
Default Value : ["a","b","c"]
. OcrHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Htuple . Hlong *
ID of the created OCR classifier.
Example

HTuple WidthPattern,HeightPattern,Interpolation,
       Features,Character,OcrHandle;

create_tuple(&WidthPattern,1);
set_i(WidthPattern,8,0);
create_tuple(&HeightPattern,1);
set_i(HeightPattern,10,0);
create_tuple(&Interpolation,1);
set_i(Interpolation,1,0);
create_tuple(&Features,1);
set_s(Features,"default",0);
create_tuple(&Character,26+26+10);
set_s(Character,"a",0);
set_s(Character,"b",1);
/* ... */
set_s(Character,"A",27);
set_s(Character,"B",28);
/* ... */
set_s(Character,"1",53);
set_s(Character,"2",54);
/* ... */
T_create_ocr_class_box(WidthPattern,HeightPattern,Interpolation,
                       Features,Character,&OcrHandle);

Result
If the parameters are correct, the operator create_ocr_class_box returns the value H_MSG_TRUE. Oth-
erwise an exception will be raised.
Parallelization Information
create_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
traind_ocr_class_box, trainf_ocr_class_box, info_ocr_class_box, write_ocr,
ocr_change_char
Alternatives
create_ocr_class_mlp, create_ocr_class_svm
See also
affine_trans_image, ocr_change_char, moments_region_2nd_invar,
moments_region_2nd_rel_invar, moments_region_3rd_invar, moments_region_central
Module
OCR/OCV

do_ocr_multi ( const Hobject Character, const Hobject Image,
               Hlong OcrHandle, char *Class, double *Confidence )

T_do_ocr_multi ( const Hobject Character, const Hobject Image,
                 const Htuple OcrHandle, Htuple *Class, Htuple *Confidence )

Classify characters.
The operator do_ocr_multi assigns a class to every Character (character). For gray value features the
gray values from the surrounding rectangles of the regions are used. The gray values will be taken from the
parameter Image. For each character the corresponding class will be returned in Class and a confidence value
will be returned in Confidence. The confidence value indicates the similarity between the input pattern and the
assigned character.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values for the characters.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; (Htuple .) Hlong
ID of the OCR classifier.
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Class (name) of the characters.
Number of elements : Class = Character
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Confidence values of the characters.
Number of elements : Confidence = Character
Example

char    Class[128];
double  Confidence;
long    ocr_handle;
Hlong   row1,col1,row2,col2;

read_ocr("testnet",&ocr_handle);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
for (i=1; i<=num; i++) {
  select_obj(Character,&SingleCharacter,i);
  do_ocr_multi(SingleCharacter,Image,ocr_handle,Class,&Confidence);
  smallest_rectangle1(SingleCharacter,&row1,&col1,&row2,&col2);
  set_tposition(WindowHandle,row1,col1);
  write_string(WindowHandle,Class);
}

Result
If the input parameters are set correctly, the operator do_ocr_multi returns the value H_MSG_TRUE. Other-
wise an exception will be raised.
Parallelization Information
do_ocr_multi is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocr, connection, sort_region


Alternatives
do_ocr_single
See also
write_ocr
Module
OCR/OCV

T_do_ocr_single ( const Hobject Character, const Hobject Image,
                  const Htuple OcrHandle, Htuple *Classes, Htuple *Confidences )

Classify one character.


The operator do_ocr_single assigns classes to the Character (characters). For gray value features gray
values of the surrounding rectangles of the regions will be used. The gray values will be taken from the parameter
Image. For each character the two classes with the highest confidences will be returned in Classes. The
corresponding confidences will be returned in Confidences. The confidence value indicates the similarity
between the input pattern and the assigned character.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Character to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the characters.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Htuple . Hlong
ID of the OCR classifier.
. Classes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Classes (names) of the characters.
Number of elements : 2
. Confidences (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Confidence values of the characters.
Number of elements : 2
Example

HTuple Classes,Confidences,OcrHandle;
long   ocr_handle;

read_ocr("testnet",&ocr_handle);
create_tuple(&OcrHandle,1);
set_i(OcrHandle,ocr_handle,0);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
for (i=1; i<=num; i++) {
  select_obj(Character,&SingleCharacter,i);
  T_do_ocr_single(SingleCharacter,Image,
                  OcrHandle,&Classes,&Confidences);
  printf("best   = %s (%g)\n",
         get_s(Classes,0),get_d(Confidences,0));
  printf("second = %s (%g)\n\n",
         get_s(Classes,1),get_d(Confidences,1));
}

Result
If the input parameters are correct, the operator do_ocr_single returns the value H_MSG_TRUE. Otherwise
an exception will be raised.


Parallelization Information
do_ocr_single is reentrant and processed without parallelization.
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocr, connection, sort_region
Alternatives
do_ocr_multi
See also
write_ocr
Module
OCR/OCV

T_info_ocr_class_box ( const Htuple OcrHandle, Htuple *WidthPattern,
                       Htuple *HeightPattern, Htuple *Interpolation, Htuple *WidthMaxChar,
                       Htuple *HeightMaxChar, Htuple *Features, Htuple *Characters )

Get information about an OCR classifier.


The operator info_ocr_class_box returns some information about an OCR classifier. The parameters are
equivalent to those of create_ocr_class_box. The parameters WidthMaxChar and HeightMaxChar
indicate the extension of the largest trained character. These values can be used to control the segmentation.
Parameter
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Htuple . Hlong
ID of the OCR classifier.
. WidthPattern (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of the scaled characters.
. HeightPattern (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Height of the scaled characters.
. Interpolation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Interpolation mode for scaling the characters.
. WidthMaxChar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of the largest trained character.
. HeightMaxChar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Height of the largest trained character.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Used features.
. Characters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
All characters of the set.
Example

HTuple OcrHandle,WidthPattern,HeightPattern,Interpolation,
       WidthMaxChar,HeightMaxChar,Features,Characters;
long   i;

/* OcrHandle has been obtained before, e.g., by read_ocr */
T_info_ocr_class_box(OcrHandle,&WidthPattern,&HeightPattern,&Interpolation,
                     &WidthMaxChar,&HeightMaxChar,&Features,&Characters);
printf("NetSize: %ld x %ld\n",get_i(WidthPattern,0),get_i(HeightPattern,0));
printf("MaxChar: %ld x %ld\n",get_i(WidthMaxChar,0),get_i(HeightMaxChar,0));
printf("Interpolation: %ld\n",get_i(Interpolation,0));
printf("Features: ");
for (i=0; i<length_tuple(Features); i++)
  printf("%s ",get_s(Features,i));
printf("\n");
printf("Characters: ");
for (i=0; i<length_tuple(Characters); i++)
  printf(" %ld %s\n",i,get_s(Characters,i));


Result
The operator info_ocr_class_box always returns H_MSG_TRUE.
Parallelization Information
info_ocr_class_box is reentrant and processed without parallelization.
Possible Predecessors
read_ocr, create_ocr_class_box
Possible Successors
write_ocr
Module
OCR/OCV

T_ocr_change_char ( const Htuple OcrHandle, const Htuple Character )

Define a new conversion table for the characters.


The operator ocr_change_char establishes a new look-up table for the characters. The number of strings in
Character must be the same as in the classifier OcrHandle. In order to enlarge the font, the operator
ocr_change_char may be used as follows: when creating a network (using create_ocr_class_box), more
characters than actually needed are specified. The last n characters remain unused at first. If more characters
are needed at a later stage, these unused characters are reassigned with the help of the operator
ocr_change_char and can then be trained.
Parameter

. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Htuple . Hlong
ID of the OCR-network to be changed.
. Character (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
New assign of characters.
Default Value : ["a","b","c"]
Example

HTuple Character1, Character2, OcrHandle;

create_tuple(&Character1,26);
set_s(Character1,"a",0);
set_s(Character1,"b",1);
/* ... */
/* WidthPattern, HeightPattern, Interpolation, and Features have been set before */
T_create_ocr_class_box(WidthPattern,HeightPattern,Interpolation,
                       Features,Character1,&OcrHandle);
/* later... */
create_tuple(&Character2,26);
set_s(Character2,"alpha",0);
set_s(Character2,"beta",1);
/* ... */
T_ocr_change_char(OcrHandle,Character2);

Result
If the number of characters in Character is identical with the number of the characters of the network, the
operator ocr_change_char returns the value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
ocr_change_char is processed completely exclusively without parallelization.
Possible Predecessors
read_ocr
Possible Successors
do_ocr_multi, do_ocr_single, write_ocr
Module
OCR/OCV


T_ocr_get_features ( const Hobject Character, const Htuple OcrHandle,
                     Htuple *FeatureVector )

Access the features which correspond to a character.


The operator ocr_get_features calculates the features for the given Character. The type and number of
features are determined by the classifier OcrHandle. FeatureVector contains the same values that are used
inside operators like traind_ocr_class_box or trainf_ocr_class_box.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Characters to be trained.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Htuple . Hlong
ID of the desired OCR-classifier.
. FeatureVector (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector.
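A minimal sketch of a typical call; Image, CharRegion, and the classifier are assumed to stem from previous
processing steps (e.g., read_ocr together with create_tuple/set_i for OcrHandle):

HTuple OcrHandle,FeatureVector;
Hobject CharImage;
long i;

reduce_domain(Image,CharRegion,&CharImage);
T_ocr_get_features(CharImage,OcrHandle,&FeatureVector);
for (i=0; i<length_tuple(FeatureVector); i++)
  printf("feature %ld: %g\n",i,get_d(FeatureVector,i));
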
Result
If the parameters are correct, the operator ocr_get_features returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
ocr_get_features is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_box, read_ocr, reduce_domain, threshold, connection
Possible Successors
learn_class_box
See also
trainf_ocr_class_box, traind_ocr_class_box
Module
OCR/OCV

read_ocr ( const char *FileName, Hlong *OcrHandle )


T_read_ocr ( const Htuple FileName, Htuple *OcrHandle )

Read an OCR classifier from a file.


The operator read_ocr reads an OCR classifier from the file FileName. The file is searched for in the
directory $HALCONROOT/ocr/ as well as in the current directory. If too many classifiers have been
loaded, an error message will be displayed.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the OCR classifier file.
Default Value : "testnet"
. OcrHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Hlong *
ID of the read OCR classifier.
Result
If the indicated file is available and the format is correct, the operator read_ocr returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
read_ocr is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
do_ocr_multi, do_ocr_single, traind_ocr_class_box, trainf_ocr_class_box


See also
write_ocr, do_ocr_multi, traind_ocr_class_box, trainf_ocr_class_box
Module
OCR/OCV

testd_ocr_class_box ( const Hobject Character, const Hobject Image,
                      Hlong OcrHandle, const char *Class, double *Confidence )

T_testd_ocr_class_box ( const Hobject Character, const Hobject Image,
                        const Htuple OcrHandle, const Htuple Class, Htuple *Confidence )

Test an OCR classifier.


The operator testd_ocr_class_box tests the confidence with which a character belongs to a given class.
Any number of regions of an image can be passed. For each character (region) in Character the corresponding
name (class) Class must be specified. The gray values are passed in Image. When the operator has finished, the
parameter Confidence indicates how confidently each character belongs to the (arbitrarily chosen) class.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be tested.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values for the characters.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; (Htuple .) Hlong
ID of the desired OCR-classifier.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Class (name) of the characters.
Default Value : "a"
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Confidence for the character to belong to the class.
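A minimal sketch of a typical call; SingleCharacter and Image are assumed to stem from a previous segmentation:

double Confidence;
long   ocr_handle;

read_ocr("testnet",&ocr_handle);
testd_ocr_class_box(SingleCharacter,Image,ocr_handle,"a",&Confidence);
printf("confidence for class 'a': %g\n",Confidence);
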
Result
If the parameters are correct, the operator testd_ocr_class_box returns the value H_MSG_TRUE. Other-
wise an exception will be raised.
Parallelization Information
testd_ocr_class_box is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_ocr, trainf_ocr_class_box, traind_ocr_class_box
Module
OCR/OCV

traind_ocr_class_box ( const Hobject Character, const Hobject Image,
                       Hlong OcrHandle, const char *Class, double *AvgConfidence )

T_traind_ocr_class_box ( const Hobject Character, const Hobject Image,
                         const Htuple OcrHandle, const Htuple Class, Htuple *AvgConfidence )

Train an OCR classifier by the input of regions.


The operator traind_ocr_class_box trains the classifier directly via the input of regions in an image. Any
number of regions of an image can be passed. For each character (region) in Character the corresponding
name (class) Class must be specified. The gray values are passed in Image. When the procedure has finished
the parameter AvgConfidence provides information about the success of the training: It contains the average
confidence of the trained characters measured by a re-classification. The confidence of mismatched characters is
set to 0 (thus, the average confidence will be decreased significantly).


Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be trained.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values for the characters.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; (Htuple .) Hlong
ID of the desired OCR-classifier.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Class (name) of the characters.
Default Value : "a"
. AvgConfidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double *
Average confidence during a re-classification of the trained characters.
Example

char   name[128];
long   ocr_handle;
double AvgConfidence;

read_ocr("testnet",&ocr_handle);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
for (i=1; i<=num; i++) {
  select_obj(Character,&SingleCharacter,i);
  clear_window(WindowHandle);
  disp_region(SingleCharacter,WindowHandle);
  printf("class of character %d ?\n",i);
  scanf("%s",name);
  traind_ocr_class_box(SingleCharacter,Image,ocr_handle,name,&AvgConfidence);
}

Result
If the parameters are correct, the operator traind_ocr_class_box returns the value H_MSG_TRUE. Oth-
erwise an exception will be raised.
Parallelization Information
traind_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_box, read_ocr
Possible Successors
traind_ocr_class_box, write_ocr, do_ocr_multi, do_ocr_single
Alternatives
trainf_ocr_class_box
Module
OCR/OCV

trainf_ocr_class_box ( Hlong OcrHandle, const char *FileName,
                       double *AvgConfidence )

T_trainf_ocr_class_box ( const Htuple OcrHandle,
                         const Htuple FileName, Htuple *AvgConfidence )

Train an OCR classifier with the help of a training file.


The operator trainf_ocr_class_box trains the classifier OcrHandle via the indicated training files. Any
number of files can be indicated. The parameter AvgConfidence provides information about the success of
the training: It contains the average confidence of the trained characters measured by a re-classification. The
confidence of mismatched characters is set to 0 (thus, the average confidence will be decreased significantly).
Attention
The names of the characters in the file must fit the network.
Parameter

. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; (Htuple .) Hlong
ID of the desired OCR-network.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Name(s) of the training file(s).
Default Value : "train_ocr"
. AvgConfidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double *
Average confidence during a re-classification of the trained characters.
Example

HTuple FileName,OcrHandle,AvgConfidence;

/* WidthPattern, HeightPattern, Interpolation, Features, and Character have been set before */
T_create_ocr_class_box(WidthPattern,HeightPattern,Interpolation,
                       Features,Character,&OcrHandle);
create_tuple(&FileName,2);
set_s(FileName,"data1",0);
set_s(FileName,"data2",1);
T_trainf_ocr_class_box(OcrHandle,FileName,&AvgConfidence);

Result
If the file name is correct and the data fit the network, the operator trainf_ocr_class_box returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
trainf_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_box, read_ocr
Possible Successors
traind_ocr_class_box, write_ocr, do_ocr_multi, do_ocr_single
Alternatives
traind_ocr_class_box
Module
OCR/OCV

write_ocr ( Hlong OcrHandle, const char *FileName )


T_write_ocr ( const Htuple OcrHandle, const Htuple FileName )

Writing an OCR classifier into a file.


The operator write_ocr writes the OCR classifier OcrHandle into the file FileName. Since the data of
the classifier will be lost when the program is finished, they have to be stored after the training if the user wants
to use them again at a later execution of the program. The data can then be read with the help of the operator
read_ocr. The extension will be added automatically to the parameter FileName.
Attention
The output file FileName must be given without extension.


Parameter

. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; Hlong
ID of the OCR classifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the file for the OCR classifier (without extension).
Default Value : "my_ocr"
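A minimal sketch; ocr_handle is assumed to stem from create_ocr_class_box and a subsequent training:

long ocr_handle;

/* ... create and train the classifier ... */
write_ocr(ocr_handle,"my_ocr");
close_ocr(ocr_handle);
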
Result
If the parameter OcrHandle is valid and the indicated file can be written, the operator write_ocr returns the
value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
write_ocr is reentrant and processed without parallelization.
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box
Possible Successors
do_ocr_multi, do_ocr_single
See also
read_ocr, do_ocr_multi, traind_ocr_class_box, trainf_ocr_class_box
Module
OCR/OCV

10.2 Lexica
clear_all_lexica ( )
T_clear_all_lexica ( )

Clear all lexica.


clear_all_lexica clears all lexica and releases their resources. All existing lexicon handles are invalid after
this call, and referring to a lexicon by name in expressions is likewise no longer possible.
Attention
clear_all_lexica exists solely for the purpose of implementing the “reset program” functionality in HDe-
velop. clear_all_lexica must not be used in any application.
Parallelization Information
clear_all_lexica is processed completely exclusively without parallelization.
See also
clear_lexicon
Module
OCR/OCV

clear_lexicon ( Hlong LexiconHandle )


T_clear_lexicon ( const Htuple LexiconHandle )

Clear a lexicon.
clear_lexicon clears a lexicon and releases its resources.
Parameter

. LexiconHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Hlong
Handle of the lexicon.


Parallelization Information
clear_lexicon is processed completely exclusively without parallelization.
See also
create_lexicon
Module
OCR/OCV

T_create_lexicon ( const Htuple Name, const Htuple Words,
                   Htuple *LexiconHandle )

Create a lexicon from a tuple of words.


create_lexicon creates a new lexicon based on a tuple of Words. By specifying a unique textual Name, you
can later refer to the lexicon from syntax expressions like those used, e.g., by do_ocr_word_mlp.
Note that lexicon support in HALCON is currently not aimed at natural languages. Rather, it is intended as a
post-processing step in OCR applications that only need to distinguish between a limited set of not more than a
few thousand valid words, e.g., country or product names. MVTec itself does not provide any lexica.
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Unique name for the new lexicon.
Default Value : "lex1"
. Words (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
Word list for the new lexicon.
Default Value : ["word1","word2","word3"]
. LexiconHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Htuple . Hlong *
Handle of the lexicon.
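Example
The following sketch builds the name and word tuples in HALCON/C and creates a lexicon from them; the lexicon name "countries" and the word list are chosen purely for illustration, and tuple clean-up is omitted:

Htuple Name, Words, LexiconHandle;

create_tuple(&Name, 1);
set_s(Name, "countries", 0);
create_tuple(&Words, 3);
set_s(Words, "Germany", 0);
set_s(Words, "France", 1);
set_s(Words, "Italy", 2);
T_create_lexicon(Name, Words, &LexiconHandle);
/* The lexicon can now be referenced as "<countries>" in expressions,
   e.g., in do_ocr_word_mlp. */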
Parallelization Information
create_lexicon is processed completely exclusively without parallelization.
Possible Successors
do_ocr_word_mlp, do_ocr_word_svm
Alternatives
import_lexicon
See also
lookup_lexicon, suggest_lexicon
Module
OCR/OCV

import_lexicon ( const char *Name, const char *FileName,
                 Hlong *LexiconHandle )

T_import_lexicon ( const Htuple Name, const Htuple FileName,
                   Htuple *LexiconHandle )

Create a lexicon from a text file.


import_lexicon creates a new lexicon based on a list of words in the file specified by FileName. The format
of the file is a simple text file with one word per line. By specifying a unique textual Name, you can later refer to
the lexicon from syntax expressions like those used, e.g., by do_ocr_word_mlp.
Note that lexicon support in HALCON is currently not aimed at natural languages. Rather, it is intended as a
post-processing step in OCR applications that only need to distinguish between a limited set of not more than a
few thousand valid words, e.g., country or product names. MVTec itself does not provide any lexica.


Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Unique name for the new lexicon.
Default Value : "lex1"
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of a text file containing words for the new lexicon.
Default Value : "words.txt"
. LexiconHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Hlong *
Handle of the lexicon.
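Example
A minimal sketch; the lexicon name "countries" and the file "words.txt" (one word per line, as described above) are illustrative assumptions:

Hlong LexiconHandle;

/* words.txt might contain, e.g.:
 *   Germany
 *   France
 *   Italy
 */
import_lexicon("countries", "words.txt", &LexiconHandle);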
Parallelization Information
import_lexicon is processed completely exclusively without parallelization.
Possible Successors
do_ocr_word_mlp, do_ocr_word_svm
Alternatives
create_lexicon
See also
lookup_lexicon, suggest_lexicon
Module
OCR/OCV

inspect_lexicon ( Hlong LexiconHandle, char *Words )


T_inspect_lexicon ( const Htuple LexiconHandle, Htuple *Words )

Query all words from a lexicon.


inspect_lexicon returns a tuple of all words in the lexicon in the parameter Words.
Parameter
. LexiconHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; (Htuple .) Hlong
Handle of the lexicon.
. Words (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
List of all words.
Parallelization Information
inspect_lexicon is reentrant and processed without parallelization.
Alternatives
lookup_lexicon
See also
create_lexicon
Module
OCR/OCV

lookup_lexicon ( Hlong LexiconHandle, const char *Word, Hlong *Found )


T_lookup_lexicon ( const Htuple LexiconHandle, const Htuple Word,
Htuple *Found )

Check if a word is contained in a lexicon.


lookup_lexicon checks whether Word is contained in the lexicon LexiconHandle, and returns 1 in Found
if the word is found, otherwise 0.


Parameter

. LexiconHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Hlong
Handle of the lexicon.
. Word (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Word to be looked up.
Default Value : "word"
. Found (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Result of the search.
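Example
A minimal sketch; it assumes that LexiconHandle was previously obtained from create_lexicon or import_lexicon, and the word "France" is illustrative:

Hlong LexiconHandle, Found;

/* ... create the lexicon, e.g., with import_lexicon ... */
lookup_lexicon(LexiconHandle, "France", &Found);
/* Found is 1 if "France" is contained in the lexicon, otherwise 0. */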
Parallelization Information
lookup_lexicon is reentrant and processed without parallelization.
Alternatives
suggest_lexicon
See also
create_lexicon
Module
OCR/OCV

suggest_lexicon ( Hlong LexiconHandle, const char *Word,
                  char *Suggestion, Hlong *NumCorrections )

T_suggest_lexicon ( const Htuple LexiconHandle, const Htuple Word,
                    Htuple *Suggestion, Htuple *NumCorrections )

Find a similar word in a lexicon.


suggest_lexicon compares Word to all words in the lexicon and calculates the minimum number of edit
operations NumCorrections required to transform Word into a word from the lexicon. Valid edit operations
are the insertion, deletion and replacement of characters. The most similar word found in the lexicon is returned in
Suggestion. If there are multiple words with the same minimum number of corrections, only the first of those
words is returned.
Parameter

. LexiconHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Hlong
Handle of the lexicon.
. Word (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Word to be looked up.
Default Value : "word"
. Suggestion (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Most similar word found in the lexicon.
. NumCorrections (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Difference between the words in edit operations.
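Example
A minimal sketch; it assumes that LexiconHandle refers to a lexicon that contains, among others, the word "France", and the buffer size is chosen for illustration:

char  Suggestion[1024];
Hlong LexiconHandle, NumCorrections;

/* ... create the lexicon, e.g., with import_lexicon ... */
suggest_lexicon(LexiconHandle, "Frence", Suggestion, &NumCorrections);
/* e.g., Suggestion = "France", NumCorrections = 1 (one replacement) */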
Parallelization Information
suggest_lexicon is reentrant and processed without parallelization.
Alternatives
lookup_lexicon
See also
create_lexicon
References
Vladimir I. Levenshtein, Binary codes capable of correcting deletions, insertions, and reversals, Doklady Akademii
Nauk SSSR, 163(4):845-848, 1965 (Russian). English translation in Soviet Physics Doklady, 10(8):707-710, 1966.
Module
OCR/OCV


10.3 Neural-Nets
clear_all_ocr_class_mlp ( )
T_clear_all_ocr_class_mlp ( )

Clear all OCR classifiers.


clear_all_ocr_class_mlp clears all OCR classifiers that were created with create_ocr_class_mlp
and frees all memory required for the classifiers. After calling clear_all_ocr_class_mlp, no classifiers
can be used any longer.
Attention
clear_all_ocr_class_mlp exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. clear_all_ocr_class_mlp must not be used in any application.
Result
clear_all_ocr_class_mlp always returns H_MSG_TRUE.
Parallelization Information
clear_all_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_mlp, evaluate_class_mlp
Alternatives
clear_ocr_class_mlp
See also
create_ocr_class_mlp, read_ocr_class_mlp, write_ocr_class_mlp,
trainf_ocr_class_mlp
Module
OCR/OCV

clear_ocr_class_mlp ( Hlong OCRHandle )


T_clear_ocr_class_mlp ( const Htuple OCRHandle )

Clear an OCR classifier.


clear_ocr_class_mlp clears the OCR classifier given by OCRHandle that was created
with create_ocr_class_mlp and frees all memory required for the classifier. After calling
clear_ocr_class_mlp, the classifier can no longer be used. The handle OCRHandle becomes invalid.
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Hlong
Handle of the OCR classifier.
Result
If OCRHandle is valid, the operator clear_ocr_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
clear_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, read_ocr_class_mlp, write_ocr_class_mlp,
trainf_ocr_class_mlp
Module
OCR/OCV


T_create_ocr_class_mlp ( const Htuple WidthCharacter,
                         const Htuple HeightCharacter, const Htuple Interpolation,
                         const Htuple Features, const Htuple Characters,
                         const Htuple NumHidden, const Htuple Preprocessing,
                         const Htuple NumComponents, const Htuple RandSeed, Htuple *OCRHandle )

Create an OCR classifier using a multilayer perceptron.


create_ocr_class_mlp creates an OCR classifier that uses a multilayer perceptron (MLP). The handle of
the OCR classifier is returned in OCRHandle.
For a description on how an MLP works, see create_class_mlp. create_ocr_class_mlp creates
an MLP with OutputFunction = ’softmax’. The length of the feature vector of the MLP (NumInput
in create_class_mlp) is determined from the features that are used for the OCR, which are passed in
Features. The features are described below. The number of units in the hidden layer is determined by
NumHidden. The number of output variables of the MLP (NumOutput in create_class_mlp) is deter-
mined from the names of the characters to be used in the OCR, which are passed in Characters. As described
with create_class_mlp, the parameters Preprocessing and NumComponents can be used to specify
a preprocessing of the data (i.e., the feature vectors). The OCR already approximately normalizes the features.
Hence, Preprocessing can typically be set to ’none’. The parameter RandSeed has the same meaning as in
create_class_mlp.
The features to be used for the classification are determined by Features. Features can contain a tuple
of several feature names. Each of these feature names results in one or more features to be calculated for the
classifier. Some of the feature names compute gray value features (e.g., ’pixel_invar’). Because a classifier requires
a constant number of features (input variables), a character to be classified is transformed to a standard size,
which is determined by WidthCharacter and HeightCharacter. The interpolation to be used for the
transformation is determined by Interpolation. It has the same meaning as in affine_trans_image.
The interpolation should be chosen such that no aliasing effects occur in the transformation. For most applications,
Interpolation = ’constant’ should be used. Note that the size of the transformed character should not be chosen too large, because the generalization properties of the classifier may degrade for large sizes. In particular, for large sizes small segmentation errors will have a large influence on the computed features if gray value features are used. This happens because segmentation errors change the smallest enclosing rectangle of the regions, so the character is zoomed differently than the characters in the training set. In most applications, sizes between 6 × 8 and 10 × 14 should be used.
The parameter Features can contain the following feature names for the classification of the characters. By
specifying ’default’, the features ’ratio’ and ’pixel_invar’ are selected.

’pixel’ Gray values of the character (WidthCharacter × HeightCharacter features).


’pixel_invar’ Gray values of the character with maximum scaling of the gray values (WidthCharacter ×
HeightCharacter features).
’pixel_binary’ Region of the character as a binary image zoomed to a size of WidthCharacter ×
HeightCharacter (WidthCharacter × HeightCharacter features).
’gradient_8dir’ Gradients are computed on the character image. The gradient directions are discretized into 8
directions. The amplitude image is decomposed into 8 channels according to these discretized directions. 25
samples on a 5 × 5 grid are extracted from each channel. These samples are used as features (200 features).
’projection_horizontal’ Horizontal projection of the gray values (see gray_projections,
HeightCharacter features).
’projection_horizontal_invar’ Maximally scaled horizontal projection of the gray values (HeightCharacter
features).
’projection_vertical’ Vertical projection of the gray values (see gray_projections, WidthCharacter
features).
’projection_vertical_invar’ Maximally scaled vertical projection of the gray values (WidthCharacter fea-
tures).
’ratio’ Aspect ratio of the character (1 feature).
’anisometry’ Anisometry of the character (see eccentricity, 1 feature).
’width’ Width of the character before scaling the character to the standard size (not scale-invariant, see
smallest_rectangle1, 1 feature).


’height’ Height of the character before scaling the character to the standard size (not scale-invariant, see
smallest_rectangle1, 1 feature).
’zoom_factor’ Difference in size between the character and the values of WidthCharacter and
HeightCharacter (not scale-invariant, 1 feature).
’foreground’ Fraction of pixels in the foreground (1 feature).
’foreground_grid_9’ Fraction of pixels in the foreground in a 3 × 3 grid within the smallest enclosing rectangle of
the character (9 features).
’foreground_grid_16’ Fraction of pixels in the foreground in a 4 × 4 grid within the smallest enclosing rectangle
of the character (16 features).
’compactness’ Compactness of the character (see compactness, 1 feature).
’convexity’ Convexity of the character (see convexity, 1 feature).
’moments_region_2nd_invar’ Normalized 2nd moments of the character (see
moments_region_2nd_invar, 3 features).
’moments_region_2nd_rel_invar’ Normalized 2nd relative moments of the character (see
moments_region_2nd_rel_invar, 2 features).
’moments_region_3rd_invar’ Normalized 3rd moments of the character (see moments_region_3rd_invar,
4 features).
’moments_central’ Normalized central moments of the character (see moments_region_central, 4 fea-
tures).
’moments_gray_plane’ Normalized gray value moments and the angle of the gray value plane (see
moments_gray_plane, 4 features).
’phi’ Sine and cosine of the orientation (angle) of the character (see elliptic_axis, 2 features).
’num_connect’ Number of connected components (see connect_and_holes, 1 feature).
’num_holes’ Number of holes (see connect_and_holes, 1 feature).
’cooc’ Values of the binary cooccurrence matrix (see gen_cooc_matrix, 8 features).
’num_runs’ Number of runs in the region normalized by the area (1 feature).
’chord_histo’ Frequency of the runs per row (HeightCharacter features).

After the classifier has been created, it is trained using trainf_ocr_class_mlp. After this, the classifier can
be saved using write_ocr_class_mlp. Alternatively, the classifier can be used immediately after training to
classify characters using do_ocr_single_class_mlp or do_ocr_multi_class_mlp.
HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section ’Pre-
trained OCR Fonts’). These pretrained OCR classifiers can be read directly with read_ocr_class_mlp and
make it possible to read a wide variety of different fonts without the need to train an OCR classifier. Therefore, it
is recommended to check whether one of the pretrained OCR classifiers can be used successfully. If this is the case, it is not
necessary to create and train an OCR classifier.
A comparison of the MLP and the support vector machine (SVM) (see create_ocr_class_svm) typically
shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better
recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical
applications. Please note that this guideline assumes optimal tuning of the parameters.
Parameter

. WidthCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 8
Suggested values : WidthCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ WidthCharacter ≤ 20
. HeightCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 10
Suggested values : HeightCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ HeightCharacter ≤ 20


. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation mode for the zooming of the characters.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for classification.
Default Value : "default"
List of values : Features ∈ {"default", "pixel", "pixel_invar", "pixel_binary", "gradient_8dir",
"projection_horizontal", "projection_horizontal_invar", "projection_vertical", "projection_vertical_invar",
"ratio", "anisometry", "width", "height", "zoom_factor", "foreground", "foreground_grid_9",
"foreground_grid_16", "compactness", "convexity", "moments_region_2nd_invar",
"moments_region_2nd_rel_invar", "moments_region_3rd_invar", "moments_central",
"moments_gray_plane", "phi", "num_connect", "num_holes", "cooc", "num_runs", "chord_histo"}
. Characters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
All characters of the character set to be read.
Default Value : ["0","1","2","3","4","5","6","7","8","9"]
. NumHidden (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of hidden units of the MLP.
Default Value : 80
Suggested values : NumHidden ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction : NumHidden ≥ 1
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "none"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Seed value of the random number generator that is used to initialize the MLP with random values.
Default Value : 42
. OCRHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong *
Handle of the OCR classifier.
Example (Syntax: HDevelop)

read_image (Image, ’letters’)


* Segment the image.
bin_threshold (Image, Region)
dilation_circle (Region, RegionDilation, 3.5)
connection (RegionDilation, ConnectedRegions)
intersection (ConnectedRegions, Region, RegionIntersection)
sort_region (RegionIntersection, Characters, ’character’, ’true’, ’row’)
* Generate the training file.
Number := |Characters|
Classes := []
for J := 0 to 25 by 1
Classes := [Classes,gen_tuple_const(20,chr(ord(’a’)+J))]
endfor
Classes := [Classes,gen_tuple_const(20,’.’)]
write_ocr_trainf (Characters, Image, Classes, ’letters.trf’)
* Generate and train the classifier.
read_ocr_trainf_names (’letters.trf’, CharacterNames, CharacterCount)
create_ocr_class_mlp (8, 10, ’constant’, ’default’, CharacterNames, 20,
’none’, 81, 42, OCRHandle)


trainf_ocr_class_mlp (OCRHandle, ’letters.trf’, 100, 0.01, 0.01, Error,


ErrorLog)
* Re-classify the characters in the image.
do_ocr_multi_class_mlp (Characters, Image, OCRHandle, Class, Confidence)
clear_ocr_class_mlp (OCRHandle)

Result
If the parameters are valid, the operator create_ocr_class_mlp returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
create_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_mlp
Alternatives
create_ocr_class_svm, create_ocr_class_box
See also
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, clear_ocr_class_mlp,
create_class_mlp, train_class_mlp, classify_class_mlp
Module
OCR/OCV

do_ocr_multi_class_mlp ( const Hobject Character, const Hobject Image,
                         Hlong OCRHandle, char *Class, double *Confidence )

T_do_ocr_multi_class_mlp ( const Hobject Character,
                           const Hobject Image, const Htuple OCRHandle, Htuple *Class,
                           Htuple *Confidence )

Classify multiple characters with an OCR classifier.


do_ocr_multi_class_mlp computes the best class for each of the characters given by the regions
Character and the gray values Image with the OCR classifier OCRHandle and returns the classes
in Class and the corresponding confidences (probabilities) of the classes in Confidence. In contrast
to do_ocr_single_class_mlp, do_ocr_multi_class_mlp can classify multiple characters in
one call, and therefore typically is faster than a loop that uses do_ocr_single_class_mlp to clas-
sify single characters. However, do_ocr_multi_class_mlp can only return the best class of each
character. Because the confidences can be interpreted as probabilities (see classify_class_mlp and
evaluate_class_mlp), it is easy to check whether a character has been classified with too much uncertainty.
Hence, this is usually not a disadvantage, except in cases where the classes overlap so much that the second-best
class must often be examined to decide the class of the character. In these cases,
do_ocr_single_class_mlp should be used. Before calling do_ocr_multi_class_mlp, the classifier
must be trained with trainf_ocr_class_mlp.
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the characters.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; (Htuple .) Hlong
Handle of the OCR classifier.
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Result of classifying the characters with the MLP.
Number of elements : Class = Character
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Confidence of the class of the characters.
Number of elements : Confidence = Character
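Example
The following sketch classifies a tuple of segmented character regions with a classifier read from file. The tuple version is used because Class and Confidence contain one entry per character; the objects Characters and Image as well as the file name "ocr.omc" are illustrative assumptions:

Hobject Characters, Image;
Htuple  FileName, OCRHandle, Class, Confidence;

/* ... read the image and segment the characters into Characters ... */
create_tuple(&FileName, 1);
set_s(FileName, "ocr.omc", 0);
T_read_ocr_class_mlp(FileName, &OCRHandle);
T_do_ocr_multi_class_mlp(Characters, Image, OCRHandle, &Class, &Confidence);
T_clear_ocr_class_mlp(OCRHandle);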


Result
If the parameters are valid, the operator do_ocr_multi_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
do_ocr_multi_class_mlp is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
trainf_ocr_class_mlp, read_ocr_class_mlp
Alternatives
do_ocr_word_mlp, do_ocr_single_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV

T_do_ocr_single_class_mlp ( const Hobject Character,
                            const Hobject Image, const Htuple OCRHandle, const Htuple Num,
                            Htuple *Class, Htuple *Confidence )

Classify a single character with an OCR classifier.


do_ocr_single_class_mlp computes the best Num classes of the character given by the region
Character and the gray values Image with the OCR classifier OCRHandle and returns the classes in Class
and the corresponding confidences (probabilities) of the classes in Confidence. Because multiple classes may
be returned by do_ocr_single_class_mlp, Character may only contain a single region (a single char-
acter). If multiple characters should be classified in a single call, do_ocr_multi_class_mlp must be used.
Because do_ocr_multi_class_mlp is typically faster than a loop with do_ocr_single_class_mlp,
and because the confidences can be interpreted as probabilities (see classify_class_mlp and
evaluate_class_mlp), which makes it easy to check whether a character has been classified with too much
uncertainty, do_ocr_multi_class_mlp should be used in most cases, unless the second-best class must be
examined explicitly. Before calling do_ocr_single_class_mlp, the classifier must be trained
with trainf_ocr_class_mlp.
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Character to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . char *
Result of classifying the character with the MLP.
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Confidence(s) of the class(es) of the character.
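Example
A minimal sketch that requests the two best classes for a single character region; the objects Character and Image and the tuple OCRHandle (e.g., obtained from T_read_ocr_class_mlp) are assumed to exist, and tuple clean-up is omitted:

Hobject Character, Image;
Htuple  OCRHandle, Num, Class, Confidence;

/* ... read or train the classifier and segment a single character ... */
create_tuple(&Num, 1);
set_i(Num, 2, 0);   /* return the two best classes */
T_do_ocr_single_class_mlp(Character, Image, OCRHandle, Num,
                          &Class, &Confidence);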
Result
If the parameters are valid, the operator do_ocr_single_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
do_ocr_single_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp, read_ocr_class_mlp


Alternatives
do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV

do_ocr_word_mlp ( const Hobject Character, const Hobject Image,
                  Hlong OCRHandle, const char *Expression, Hlong NumAlternatives,
                  Hlong NumCorrections, char *Class, double *Confidence, char *Word,
                  double *Score )

T_do_ocr_word_mlp ( const Hobject Character, const Hobject Image,
                    const Htuple OCRHandle, const Htuple Expression,
                    const Htuple NumAlternatives, const Htuple NumCorrections,
                    Htuple *Class, Htuple *Confidence, Htuple *Word, Htuple *Score )

Classify a related group of characters with an OCR classifier.


do_ocr_word_mlp works like do_ocr_multi_class_mlp insofar as it computes the best class for each of
the characters given by the regions Character and the gray values Image with the OCR classifier OCRHandle,
and returns the classes in Class and the corresponding confidences (probabilities) of the classes in Confidence.
In contrast to do_ocr_multi_class_mlp, do_ocr_word_mlp treats the group of characters as an entity
which yields a Word by concatenating the class names of the character regions. This makes it possible to restrict
the allowed classification results on a textual level by specifying an Expression that describes the expected word.
The Expression may restrict the word to belong to a predefined lexicon created using create_lexicon
or import_lexicon, by specifying the name of the lexicon in angle brackets, as in ’<mylexicon>’. If the
Expression has any other form, it is interpreted as a regular expression with the same syntax as specified for
tuple_regexp_match. Note that you will usually want to use an expression of the form ’^...$’ when using
variable quantifiers like ’*’, to ensure that the entire word is matched by the expression. Also note that, in contrast to
tuple_regexp_match, do_ocr_word_mlp does not support passing extra options in an expression tuple.
If the word derived from the best class for each character does not match the Expression,
do_ocr_word_mlp attempts to correct it by considering the NumAlternatives best classes for each char-
acter. The alternatives used are identical to those returned by do_ocr_single_class_mlp for a single
character. It does so by testing all possible corrections for which the classification result is changed for at most
NumCorrections character regions.
In case the Expression is a lexicon and the above procedure did not yield a result, the most similar word in
the lexicon is returned as long as it requires less than NumCorrections edit operations for the correction (see
suggest_lexicon).
The resulting word is graded by a Score between 0.0 (no correction found) and 1.0 (original word correct), which
is dominated by the number of corrected characters but also adds a minor penalty for ignoring the second best
class or even all best classes (in case of lexica). Note that this is a combinatorial score which does not reflect the
original Confidence of the best Class.
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the characters.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; (Htuple .) Hlong
Handle of the OCR classifier.
. Expression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Expression describing the allowed word structure.


. NumAlternatives (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of classes per character considered for the internal word correction.
Default Value : 3
Suggested values : NumAlternatives ∈ {3, 4, 5}
Typical range of values : 1 ≤ NumAlternatives
. NumCorrections (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum number of corrected characters.
Default Value : 2
Suggested values : NumCorrections ∈ {1, 2, 3, 4, 5}
Typical range of values : 0 ≤ NumCorrections
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Result of classifying the characters with the MLP.
Number of elements : Class = Character
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Confidence of the class of the characters.
Number of elements : Confidence = Character
. Word (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) char *
Word text after classification and correction.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double *
Measure of similarity between corrected word and uncorrected classification results.
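Example
The following sketch reads characters that are expected to form a word from a lexicon. The tuple version is used because Class and Confidence contain one entry per character; the lexicon name "months", the file names, and the objects Characters and Image are illustrative assumptions:

Hobject Characters, Image;
Hlong   LexiconHandle;
Htuple  FileName, OCRHandle, Expression, NumAlternatives, NumCorrections;
Htuple  Class, Confidence, Word, Score;

/* ... read the image and segment the characters into Characters ... */
import_lexicon("months", "months.txt", &LexiconHandle);
create_tuple(&FileName, 1);
set_s(FileName, "Document_0-9A-Z.omc", 0);
T_read_ocr_class_mlp(FileName, &OCRHandle);
create_tuple(&Expression, 1);
set_s(Expression, "<months>", 0);   /* or a regular expression, e.g., "^[0-9A-Z]+$" */
create_tuple(&NumAlternatives, 1);
set_i(NumAlternatives, 3, 0);
create_tuple(&NumCorrections, 1);
set_i(NumCorrections, 2, 0);
T_do_ocr_word_mlp(Characters, Image, OCRHandle, Expression, NumAlternatives,
                  NumCorrections, &Class, &Confidence, &Word, &Score);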
Complexity
The complexity of checking all possible corrections is of magnitude O((n * a)^min(c,n)), where a is the number
of alternatives, n is the number of character regions, and c is the number of allowed corrections. However, to
guard against a near-infinite loop in case of large n, c is internally clipped to 5, 3, or 1 if a * n ≥ 30, 60, or 90,
respectively.
Result
If the parameters are valid, the operator do_ocr_word_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
do_ocr_word_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp, read_ocr_class_mlp
Alternatives
do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV

T_get_features_ocr_class_mlp ( const Hobject Character,
                               const Htuple OCRHandle, const Htuple Transform, Htuple *Features )

Compute the features of a character.


get_features_ocr_class_mlp computes the features of the character given by Character with the
OCR classifier OCRHandle and returns them in Features. In contrast to do_ocr_single_class_mlp
and do_ocr_multi_class_mlp, the character is passed as a single image object. Hence, before calling
get_features_ocr_class_mlp, reduce_domain must typically be called. The parameter Transform
determines whether the feature transformation specified with Preprocessing in create_ocr_class_mlp
for the classifier should be applied (Transform = ’true’) or whether the untransformed features should be re-
turned (Transform = ’false’). get_features_ocr_class_mlp can be used to inspect the features that
are used for the classification.


Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. Transform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the feature vector be transformed with the preprocessing?
Default Value : "true"
List of values : Transform ∈ {"true", "false"}
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the character.
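Example
A sketch that inspects the preprocessed feature vector of one character; it assumes that Image and the region CharRegion of a single segmented character exist and that OCRHandle is a tuple referring to the classifier. As described above, reduce_domain is used to pass the character as a single image object:

Hobject Image, CharRegion, CharImage;
Htuple  OCRHandle, Transform, Features;

/* ... read or create the classifier and segment one character ... */
reduce_domain(Image, CharRegion, &CharImage);
create_tuple(&Transform, 1);
set_s(Transform, "true", 0);
T_get_features_ocr_class_mlp(CharImage, OCRHandle, Transform, &Features);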
Result
If the parameters are valid, the operator get_features_ocr_class_mlp returns the value H_MSG_TRUE.
If necessary an exception handling is raised.
Parallelization Information
get_features_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp
See also
create_ocr_class_mlp
Module
OCR/OCV

T_get_params_ocr_class_mlp ( const Htuple OCRHandle,
                             Htuple *WidthCharacter, Htuple *HeightCharacter,
                             Htuple *Interpolation, Htuple *Features, Htuple *Characters,
                             Htuple *NumHidden, Htuple *Preprocessing, Htuple *NumComponents )

Return the parameters of an OCR classifier.


get_params_ocr_class_mlp returns the parameters of an OCR classifier that were specified when the
classifier was created with create_ocr_class_mlp. This is particularly useful if the classifier was read
with read_ocr_class_mlp. The output of get_params_ocr_class_mlp can, for example, be used
to check whether a character to be read is contained in the classifier. For a description of the parameters, see
create_ocr_class_mlp.
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. WidthCharacter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of the rectangle to which the gray values of the segmented character are zoomed.
. HeightCharacter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Height of the rectangle to which the gray values of the segmented character are zoomed.
. Interpolation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Interpolation mode for the zooming of the characters.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . char *
Features to be used for classification.
. Characters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Characters of the character set to be read.
. NumHidden (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of hidden units of the MLP.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Preprocessing parameter: Number of transformed features.


Result
If the parameters are valid, the operator get_params_ocr_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
get_params_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_mlp, read_ocr_class_mlp
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp
See also
trainf_ocr_class_mlp, get_params_class_mlp
Module
OCR/OCV

T_get_prep_info_ocr_class_mlp ( const Htuple OCRHandle,
                                const Htuple TrainingFile, const Htuple Preprocessing,
                                Htuple *InformationCont, Htuple *CumInformationCont )

Compute the information content of the preprocessed feature vectors of an OCR classifier.
get_prep_info_ocr_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’prin-
cipal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created with
create_ocr_class_mlp. The preprocessing methods are described with create_class_mlp. The in-
formation content is derived from the variations of the transformed components of the feature vector, i.e., it is
computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput−1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains the
sums of the first n elements of InformationCont. To use get_prep_info_ocr_class_mlp, a sufficient
number of samples must be stored in the training files given by TrainingFile (see write_ocr_trainf).
InformationCont and CumInformationCont can be used to decide how many components of
the transformed feature vectors contain relevant information. An often used criterion is to require that
the transformed data must represent x% (e.g., 90%) of the total data. This can be decided eas-
ily from the first value of CumInformationCont that lies above x%. The number thus obtained
can be used as the value for NumComponents in a new call to create_ocr_class_mlp. The
call to get_prep_info_ocr_class_mlp already requires the creation of a classifier, and hence
the setting of NumComponents in create_ocr_class_mlp to an initial value. However, if
get_prep_info_ocr_class_mlp is called it is typically not known how many components are rele-
vant, and hence how to set NumComponents in this call. Therefore, the following two-step approach should
typically be used to select NumComponents: In a first step, a classifier with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are saved in a training file using write_ocr_trainf.
Subsequently, get_prep_info_ocr_class_mlp is used to determine the information content of the com-
ponents, and with this NumComponents. After this, a new classifier with the desired number of components is
created, and the classifier is trained with trainf_ocr_class_mlp.
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. TrainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; Htuple . const char *
Name(s) of the training file(s).
Default Value : "ocr.trf"


. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)

* Create the initial OCR classifier.


read_ocr_trainf_names (’ocr.trf’, CharacterNames, CharacterCount)
create_ocr_class_mlp (8, 10, ’constant’, ’default’, CharacterNames, 80,
’canonical_variates’, |CharacterNames|, 42, OCRHandle)
* Get the information content of the transformed feature vectors.
get_prep_info_ocr_class_mlp (OCRHandle, ’ocr.trf’, ’canonical_variates’,
InformationCont, CumInformationCont)
* Determine the number of transformed components.
* NumComp = [...]
clear_ocr_class_mlp (OCRHandle)
* Create the final OCR classifier.
create_ocr_class_mlp (8, 10, ’constant’, ’default’, CharacterNames, 80,
’canonical_variates’, NumComp, 42, OCRHandle)
* Train the final classifier.
trainf_ocr_class_mlp (OCRHandle, ’ocr.trf’, 100, 1, 0.01, Error, ErrorLog)
write_ocr_class_mlp (OCRHandle, ’ocr.omc’)
clear_ocr_class_mlp (OCRHandle)

Result
If the parameters are valid, the operator get_prep_info_ocr_class_mlp returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
get_prep_info_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_mlp, create_ocr_class_mlp
Module
OCR/OCV

read_ocr_class_mlp ( const char *FileName, Hlong *OCRHandle )


T_read_ocr_class_mlp ( const Htuple FileName, Htuple *OCRHandle )

Read an OCR classifier from a file.


read_ocr_class_mlp reads an OCR classifier that has been stored with write_ocr_class_mlp. Since
the training of an OCR classifier can consume a relatively long time, the classifier is typically trained in an offline
process and written to a file with write_ocr_class_mlp. In the online process the classifier is read with
read_ocr_class_mlp and subsequently used for classification with do_ocr_single_class_mlp or
do_ocr_multi_class_mlp.


HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section ’Pre-
trained OCR Fonts’). These pretrained OCR classifiers make it possible to read a wide variety of different fonts
without the need to train an OCR classifier. Note that the pretrained OCR classifiers were trained with symbols
that are printed dark on light.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
Suggested values : FileName ∈ {"Document_0-9A-Z.omc", "Document_0-9.omc", "Document.omc",
"DotPrint_0-9A-Z.omc", "DotPrint_0-9.omc", "DotPrint_0-9+.omc", "DotPrint.omc",
"HandWritten_0-9.omc", "Industrial_0-9A-Z.omc", "Industrial_0-9.omc", "Industrial_0-9+.omc",
"Industrial.omc", "MICR.omc", "OCRA_0-9A-Z.omc", "OCRA_0-9.omc", "OCRA.omc",
"OCRB_0-9A-Z.omc", "OCRB_0-9.omc", "OCRB.omc", "Pharma_0-9A-Z.omc", "Pharma_0-9.omc",
"Pharma_0-9+.omc", "Pharma.omc"}
. OCRHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Hlong *
Handle of the OCR classifier.
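Example
A minimal sketch that reads one of the pretrained classifiers in simple mode; the font file "Document_0-9A-Z.omc" is one of the suggested values above, and classification and clean-up are only indicated:

Hlong OCRHandle;

read_ocr_class_mlp("Document_0-9A-Z.omc", &OCRHandle);
/* ... classify characters, e.g., with do_ocr_multi_class_mlp ... */
clear_ocr_class_mlp(OCRHandle);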
Result
If the parameters are valid, the operator read_ocr_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
read_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, write_ocr_class_mlp, read_class_mlp, write_class_mlp
Module
OCR/OCV

T_trainf_ocr_class_mlp ( const Htuple OCRHandle,
                         const Htuple TrainingFile, const Htuple MaxIterations,
                         const Htuple WeightTolerance, const Htuple ErrorTolerance,
                         Htuple *Error, Htuple *ErrorLog )

Train an OCR classifier.


trainf_ocr_class_mlp trains the OCR classifier OCRHandle with the training characters stored in
the OCR training files given by TrainingFile. The training files must have been created, e.g., using
write_ocr_trainf, before calling trainf_ocr_class_mlp. The remaining parameters have the same
meaning as in train_class_mlp and are described in detail with train_class_mlp.
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. TrainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; Htuple . const char *
Name(s) of the training file(s).
Default Value : "ocr.trf"
. MaxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum number of iterations of the optimization algorithm.
Default Value : 200
Suggested values : MaxIterations ∈ {20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280,
300}
. WeightTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default Value : 1.0
Suggested values : WeightTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : WeightTolerance ≥ 1.0e-8


. ErrorTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the
optimization algorithm.
Default Value : 0.01
Suggested values : ErrorTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction : ErrorTolerance ≥ 1.0e-8
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Mean error of the MLP on the training data.
. ErrorLog (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Mean error of the MLP on the training data as a function of the number of iterations of the optimization
algorithm.
Example (Syntax: HDevelop)

* Train an OCR classifier


read_ocr_trainf_names (’ocr.trf’, CharacterNames, CharacterCount)
create_ocr_class_mlp (8, 10, ’constant’, ’default’, CharacterNames, 80,
’none’, 81, 42, OCRHandle)
trainf_ocr_class_mlp (OCRHandle, ’ocr.trf’, 100, 1, 0.01, Error, ErrorLog)
write_ocr_class_mlp (OCRHandle, ’ocr.omc’)
clear_ocr_class_mlp (OCRHandle)

Result
If the parameters are valid, the operator trainf_ocr_class_mlp returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
trainf_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, write_ocr_class_mlp
Alternatives
read_ocr_class_mlp
See also
train_class_mlp
Module
OCR/OCV

write_ocr_class_mlp ( Hlong OCRHandle, const char *FileName )


T_write_ocr_class_mlp ( const Htuple OCRHandle,
const Htuple FileName )

Write an OCR classifier to a file.


write_ocr_class_mlp writes the OCR classifier OCRHandle to the file given by FileName.
If a file extension is not specified in FileName the default extension ’.omc’ is appended to
FileName. write_ocr_class_mlp is typically called after the classifier has been trained with
trainf_ocr_class_mlp. The classifier can be read with read_ocr_class_mlp.


Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Hlong
Handle of the OCR classifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
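Example
A minimal sketch; it assumes that OCRHandle refers to a classifier that has already been trained with trainf_ocr_class_mlp, and the file name "ocr.omc" is illustrative (if no extension is given, ’.omc’ is appended automatically):

Hlong OCRHandle;

/* ... create and train the classifier ... */
write_ocr_class_mlp(OCRHandle, "ocr.omc");
clear_ocr_class_mlp(OCRHandle);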
Result
If the parameters are valid, the operator write_ocr_class_mlp returns the value H_MSG_TRUE. If neces-
sary an exception handling is raised.
Parallelization Information
write_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp
Possible Successors
clear_ocr_class_mlp
See also
create_ocr_class_mlp, read_ocr_class_mlp, write_class_mlp, read_class_mlp
Module
OCR/OCV

10.4 Support-Vector-Machines

clear_all_ocr_class_svm ( )
T_clear_all_ocr_class_svm ( )

Clear all SVM based OCR classifiers.


clear_all_ocr_class_svm clears all SVM-based OCR classifiers that were created with
create_ocr_class_svm and frees all memory required for the classifiers. After calling
clear_all_ocr_class_svm, no SVM-based classifiers can be used any longer.
Attention
clear_all_ocr_class_svm exists solely for the purpose of implementing the “reset program” functionality
in HDevelop. clear_all_ocr_class_svm must not be used in any application.
Result
clear_all_ocr_class_svm always returns H_MSG_TRUE.
Parallelization Information
clear_all_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_svm
Alternatives
clear_ocr_class_svm
See also
create_ocr_class_svm, read_ocr_class_svm, write_ocr_class_svm,
trainf_ocr_class_svm
Module
OCR/OCV

clear_ocr_class_svm ( Hlong OCRHandle )


T_clear_ocr_class_svm ( const Htuple OCRHandle )

Clear an SVM-based OCR classifier.


clear_ocr_class_svm clears the OCR classifier given by OCRHandle and frees all memory required for the
classifier. After calling clear_ocr_class_svm, the classifier can no longer be used. The handle OCRHandle
becomes invalid.
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong
Handle of the OCR classifier.
Result
If OCRHandle is valid the operator clear_ocr_class_svm returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
clear_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_svm, do_ocr_multi_class_svm
See also
create_ocr_class_svm, read_ocr_class_svm, write_ocr_class_svm,
trainf_ocr_class_svm
Module
OCR/OCV

T_create_ocr_class_svm ( const Htuple WidthCharacter,
                         const Htuple HeightCharacter, const Htuple Interpolation,
                         const Htuple Features, const Htuple Characters,
                         const Htuple KernelType, const Htuple KernelParam, const Htuple Nu,
                         const Htuple Mode, const Htuple Preprocessing,
                         const Htuple NumComponents, Htuple *OCRHandle )

Create an OCR classifier using a support vector machine.


create_ocr_class_svm creates an OCR classifier that uses a support vector machine (SVM). The handle of
the OCR classifier is returned in OCRHandle.
For a description on how an SVM works, see create_class_svm. create_ocr_class_svm creates an
SVM for classification with the classification mode given by Mode. The length of the feature vector of the SVM
(NumFeatures in create_class_svm) is determined from the features that are used for the OCR, which
are passed in Features. The features are described below. The kernel is parameterized with KernelType,
KernelParam and Nu like in create_class_svm. The number of classes of the SVM (NumClasses
in create_class_svm) is determined from the names of the characters to be used in the OCR, which are
passed in Characters. As described with create_class_svm, the parameters Preprocessing and
NumComponents can be used to specify a preprocessing of the data (i.e., the feature vectors). For the sake of
numerical stability, Preprocessing can typically be set to ’normalization’. In order to speed up classifica-
tion time, ’principal_components’ or ’canonical_variates’ can be used, as the number of input features can be
significantly reduced without deterioration of the recognition rate.
The features to be used for the classification are determined by Features. Features can contain a tuple of fea-
ture names. Each of these feature names results in one or more features to be calculated for the classifier. Some of
the feature names compute gray value features (e.g., ’pixel_invar’). Because a classifier requires a constant number
of features (input variables), a character to be classified is transformed to a standard size, which is determined by
WidthCharacter and HeightCharacter. The interpolation to be used for the transformation is determined
by Interpolation. It has the same meaning as in affine_trans_image. The interpolation should be
chosen such that no aliasing effects occur in the transformation. For most applications, Interpolation =
’constant’ should be used. Note that the size of the transformed character should not be chosen too large, because the generalization properties of the classifier may degrade for large sizes. In particular, for large sizes small segmentation errors will have a large influence on the computed features if gray value features are used. This happens because segmentation errors change the smallest enclosing rectangle of the regions, so the character is zoomed differently than the characters in the training set. In most applications, sizes between 6 × 8 and 10 × 14 should be used.
The parameter Features can contain the following feature names for the classification of the characters. By
specifying ’default’, the features ’ratio’ and ’pixel_invar’ are selected.


’pixel’ Gray values of the character (WidthCharacter × HeightCharacter features).


’pixel_invar’ Gray values of the character with maximum scaling of the gray values (WidthCharacter ×
HeightCharacter features).
’pixel_binary’ Region of the character as a binary image zoomed to a size of WidthCharacter ×
HeightCharacter (WidthCharacter × HeightCharacter features).
’gradient_8dir’ Gradients are computed on the character image. The gradient directions are discretized into 8
directions. The amplitude image is decomposed into 8 channels according to these discretized directions. 25
samples on a 5 × 5 grid are extracted from each channel. These samples are used as features (200 features).
’projection_horizontal’ Horizontal projection of the gray values (see gray_projections,
HeightCharacter features).
’projection_horizontal_invar’ Maximally scaled horizontal projection of the gray values (HeightCharacter
features).
’projection_vertical’ Vertical projection of the gray values (see gray_projections, WidthCharacter
features).
’projection_vertical_invar’ Maximally scaled vertical projection of the gray values (WidthCharacter fea-
tures).
’ratio’ Aspect ratio of the character (1 feature).
’anisometry’ Anisometry of the character (see eccentricity, 1 feature).
’width’ Width of the character before scaling the character to the standard size (not scale-invariant, see
smallest_rectangle1, 1 feature).
’height’ Height of the character before scaling the character to the standard size (not scale-invariant, see
smallest_rectangle1, 1 feature).
’zoom_factor’ Difference in size between the character and the values of WidthCharacter and
HeightCharacter (not scale-invariant, 1 feature).
’foreground’ Fraction of pixels in the foreground (1 feature).
’foreground_grid_9’ Fraction of pixels in the foreground in a 3 × 3 grid within the smallest enclosing rectangle of
the character (9 features).
’foreground_grid_16’ Fraction of pixels in the foreground in a 4 × 4 grid within the smallest enclosing rectangle
of the character (16 features).
’compactness’ Compactness of the character (see compactness, 1 feature).
’convexity’ Convexity of the character (see convexity, 1 feature).
’moments_region_2nd_invar’ Normalized 2nd moments of the character (see
moments_region_2nd_invar, 3 features).
’moments_region_2nd_rel_invar’ Normalized 2nd relative moments of the character (see
moments_region_2nd_rel_invar, 2 features).
’moments_region_3rd_invar’ Normalized 3rd moments of the character (see moments_region_3rd_invar,
4 features).
’moments_central’ Normalized central moments of the character (see moments_region_central, 4 fea-
tures).
’moments_gray_plane’ Normalized gray value moments and the angle of the gray value plane (see
moments_gray_plane, 4 features).
’phi’ Orientation (angle) of the character (see elliptic_axis, 1 feature).
’num_connect’ Number of connected components (see connect_and_holes, 1 feature).
’num_holes’ Number of holes (see connect_and_holes, 1 feature).
’cooc’ Values of the binary cooccurrence matrix (see gen_cooc_matrix, 12 features).
’num_runs’ Number of runs in the region normalized by the area (1 feature).
’chord_histo’ Frequency of the runs per row (HeightCharacter features).


After the classifier has been created, it is trained using trainf_ocr_class_svm. After this, the classifier can
be saved using write_ocr_class_svm. Alternatively, the classifier can be used immediately after training to
classify characters using do_ocr_single_class_svm or do_ocr_multi_class_svm.
A comparison of SVM and the multi-layer perceptron (MLP) (see create_ocr_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
. WidthCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 8
Suggested values : WidthCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ WidthCharacter ≤ 20
. HeightCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 10
Suggested values : HeightCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ HeightCharacter ≤ 20
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation mode for the zooming of the characters.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for classification.
Default Value : "default"
List of values : Features ∈ {"default", "pixel", "pixel_invar", "pixel_binary", "gradient_8dir",
"projection_horizontal", "projection_horizontal_invar", "projection_vertical", "projection_vertical_invar",
"ratio", "anisometry", "width", "height", "zoom_factor", "foreground", "foreground_grid_9",
"foreground_grid_16", "compactness", "convexity", "moments_region_2nd_invar",
"moments_region_2nd_rel_invar", "moments_region_3rd_invar", "moments_central",
"moments_gray_plane", "phi", "num_connect", "num_holes", "cooc", "num_runs", "chord_histo"}
. Characters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
All characters of the character set to be read.
Default Value : ["0","1","2","3","4","5","6","7","8","9"]
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
The kernel type.
Default Value : "rbf"
List of values : KernelType ∈ {"linear", "rbf", "polynomial_inhomogeneous",
"polynomial_homogeneous"}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Additional parameter for the kernel function.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Regularization constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
The mode of the SVM.
Default Value : "one-versus-one"
List of values : Mode ∈ {"one-versus-all", "one-versus-one"}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}


. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default Value : 10
Suggested values : NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction : NumComponents ≥ 1
. OCRHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong *
Handle of the OCR classifier.
Example (Syntax: HDevelop)

read_image (Image, ’letters’)


* Segment the image.
bin_threshold (Image, Region)
dilation_circle (Region, RegionDilation, 3.5)
connection (RegionDilation, ConnectedRegions)
intersection (ConnectedRegions, Region, RegionIntersection)
sort_region (RegionIntersection, Characters, ’character’, ’true’, ’row’)
* Generate the training file.
Number := |Characters|
Classes := []
for J := 0 to 25 by 1
Classes := [Classes,gen_tuple_const(20,chr(ord(’a’)+J))]
endfor
Classes := [Classes,gen_tuple_const(20,’.’)]
write_ocr_trainf (Characters, Image, Classes, ’letters.trf’)
* Generate and train the classifier.
read_ocr_trainf_names (’letters.trf’, CharacterNames, CharacterCount)
create_ocr_class_svm (8, 10, ’constant’, ’default’, CharacterNames,
’rbf’, 0.01, 0.01, ’one-versus-all’,
’principal_components’, 10, OCRHandle)
trainf_ocr_class_svm (OCRHandle, ’letters.trf’, 0.001, ’default’)
* Re-classify the characters in the image.
do_ocr_multi_class_svm (Characters, Image, OCRHandle, Class)
clear_ocr_class_svm (OCRHandle)

Result
If the parameters are valid the operator create_ocr_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
create_ocr_class_svm is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_svm
Alternatives
create_ocr_class_mlp, create_ocr_class_box
See also
do_ocr_single_class_svm, do_ocr_multi_class_svm, clear_ocr_class_svm,
create_class_svm, train_class_svm, classify_class_svm
Module
OCR/OCV

do_ocr_multi_class_svm ( const Hobject Character, const Hobject Image,
    Hlong OCRHandle, char *Class )

T_do_ocr_multi_class_svm ( const Hobject Character,
    const Hobject Image, const Htuple OCRHandle, Htuple *Class )

Classify multiple characters with an SVM-based OCR classifier.


do_ocr_multi_class_svm computes the best class for each of the characters given by the regions
Character and the gray values Image with the SVM-based OCR classifier OCRHandle and returns the classes
in Class. In contrast to do_ocr_single_class_svm, do_ocr_multi_class_svm can classify multi-
ple characters in one call, and therefore typically is faster than a loop that uses do_ocr_single_class_svm
to classify single characters. However, do_ocr_multi_class_svm can only return the best class
of each character. Before calling do_ocr_multi_class_svm, the classifier must be trained with
trainf_ocr_class_svm.
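A minimal usage sketch in HDevelop syntax; the training file name 'ocr.trf' and the segmentation steps are assumptions
chosen for illustration, and OCRHandle is assumed to have been created with create_ocr_class_svm:

trainf_ocr_class_svm (OCRHandle, 'ocr.trf', 0.001, 'default')
* Segment the characters and sort them in reading order.
bin_threshold (Image, Region)
connection (Region, ConnectedRegions)
sort_region (ConnectedRegions, SortedCharacters, 'character', 'true', 'row')
* Classify all characters in a single call.
do_ocr_multi_class_svm (SortedCharacters, Image, OCRHandle, Classes)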
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Characters to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray values of the characters.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; (Htuple .) Hlong
Handle of the OCR classifier.
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Result of classifying the characters with the SVM.
Result
If the parameters are valid the operator do_ocr_multi_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
do_ocr_multi_class_svm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
trainf_ocr_class_svm, read_ocr_class_svm
Alternatives
do_ocr_single_class_svm
See also
create_ocr_class_svm, classify_class_svm
Module
OCR/OCV

T_do_ocr_single_class_svm ( const Hobject Character,
    const Hobject Image, const Htuple OCRHandle, const Htuple Num,
    Htuple *Class )

Classify a single character with an SVM-based OCR classifier.


do_ocr_single_class_svm computes the best Num classes of the character given by the region
Character and the gray values Image with the OCR classifier OCRHandle and returns the classes in
Class. Because multiple classes may be returned by do_ocr_single_class_svm, Character may
only contain a single region (a single character). If multiple characters should be classified in a single call,
do_ocr_multi_class_svm must be used. Before calling do_ocr_single_class_svm, the classifier
must be trained with trainf_ocr_class_svm.
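For illustration, a minimal call in HDevelop syntax; SingleChar (one segmented character region), Image, and the
trained OCRHandle are assumed to exist already:

* Determine the two best classes for one character region.
do_ocr_single_class_svm (SingleChar, Image, OCRHandle, 2, Classes)
* Classes[0] is the best class, Classes[1] the second best.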
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Character to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray values of the character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong
Handle of the OCR classifier.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}


. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . char *


Result of classifying the character with the SVM.
Result
If the parameters are valid the operator do_ocr_single_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
do_ocr_single_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm, read_ocr_class_svm
Alternatives
do_ocr_multi_class_svm
See also
create_ocr_class_svm, classify_class_svm
Module
OCR/OCV

do_ocr_word_svm ( const Hobject Character, const Hobject Image,
    Hlong OCRHandle, const char *Expression, Hlong NumAlternatives,
    Hlong NumCorrections, char *Class, char *Word, double *Score )

T_do_ocr_word_svm ( const Hobject Character, const Hobject Image,
    const Htuple OCRHandle, const Htuple Expression,
    const Htuple NumAlternatives, const Htuple NumCorrections,
    Htuple *Class, Htuple *Word, Htuple *Score )

Classify a related group of characters with an OCR classifier.


do_ocr_word_svm works like do_ocr_multi_class_svm insofar as it computes the best class for each of
the characters given by the regions Character and the gray values Image with the OCR classifier OCRHandle,
and returns the results in Class.
In contrast to do_ocr_multi_class_svm, do_ocr_word_svm treats the group of characters as an entity
which yields a Word by concatenating the class names for each character region. This allows restricting the allowed
classification results on a textual level by specifying an Expression describing the expected word.
The Expression may restrict the word to belong to a predefined lexicon created using create_lexicon
or import_lexicon, by specifying the name of the lexicon in angular brackets as in ’<mylexicon>’. If the
Expression is of any other form, it is interpreted as a regular expression with the same syntax as specified for
tuple_regexp_match. Note that you will usually want to use an expression of the form ’^...$’ when using
variable quantifiers like ’*’, to ensure that the entire word is used in the expression. Also note that in contrast to
tuple_regexp_match, do_ocr_word_svm does not support passing extra options in an expression tuple.
If the word derived from the best class for each character does not match the Expression,
do_ocr_word_svm attempts to correct it by considering the NumAlternatives best classes for each char-
acter. The alternatives used are identical to those returned by do_ocr_single_class_svm for a single
character. It does so by testing all possible corrections for which the classification result is changed for at most
NumCorrections character regions.
In case the Expression is a lexicon and the above procedure did not yield a result, the most similar word in
the lexicon is returned as long as it requires less than NumCorrections edit operations for the correction (see
suggest_lexicon).
The resulting word is graded by a Score between 0.0 (no correction found) and 1.0 (original word correct), which
is dominated by the number of corrected characters but also adds a minor penalty for ignoring the second best class
or even all best classes (in case of lexica).
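A hedged sketch in HDevelop syntax; WordCharacters (the character regions of one word) and the trained OCRHandle are
assumed to exist, and the regular expression is only an illustrative choice:

* Read a word that must consist of exactly five uppercase letters,
* using up to 3 alternatives per character and at most 2 corrections.
do_ocr_word_svm (WordCharacters, Image, OCRHandle, '^[A-Z]{5}$', 3, 2,
                 Class, Word, Score)
* With a lexicon created beforehand via create_lexicon, Expression
* could instead be given as '<mylexicon>'.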
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Characters to be recognized.


. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2


Gray values of the characters.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; (Htuple .) Hlong
Handle of the OCR classifier.
. Expression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Expression describing the allowed word structure.
. NumAlternatives (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of classes per character considered for the internal word correction.
Default Value : 3
Suggested values : NumAlternatives ∈ {3, 4, 5}
Typical range of values : 1 ≤ NumAlternatives
. NumCorrections (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum number of corrected characters.
Default Value : 2
Suggested values : NumCorrections ∈ {1, 2, 3, 4, 5}
Typical range of values : 0 ≤ NumCorrections
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Result of classifying the characters with the SVM.
Number of elements : Class = Character
. Word (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) char *
Word text after classification and correction.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double *
Measure of similarity between corrected word and uncorrected classification results.
Complexity
The complexity of checking all possible corrections is of magnitude O((n ∗ a)^min(c,n)), where a is the number
of alternatives, n is the number of character regions, and c is the number of allowed corrections. However, to
guard against a near-infinite loop in case of large n, c is internally clipped to 5, 3, or 1 if a ∗ n ≥ 30, 60, or 90,
respectively.
Result
If the parameters are valid, the operator do_ocr_word_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
do_ocr_word_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm, read_ocr_class_svm
Alternatives
do_ocr_multi_class_svm
See also
create_ocr_class_svm, classify_class_svm
Module
OCR/OCV

T_get_features_ocr_class_svm ( const Hobject Character,
    const Htuple OCRHandle, const Htuple Transform, Htuple *Features )

Compute the features of a character.


get_features_ocr_class_svm computes the features of the character given by Character with the
OCR classifier OCRHandle and returns them in Features. In contrast to do_ocr_single_class_svm
and do_ocr_multi_class_svm, the character is passed as a single image object. Hence, before calling
get_features_ocr_class_svm, reduce_domain must typically be called. The parameter Transform
determines whether the feature transformation specified with Preprocessing in create_ocr_class_svm
for the classifier should be applied (Transform = ’true’) or whether the untransformed features should be re-
turned (Transform = ’false’). get_features_ocr_class_svm can be used to inspect the features that
are used for the classification.
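A minimal sketch in HDevelop syntax, assuming a single segmented character region SingleChar and a trained
classifier OCRHandle:

* Restrict the image to one character and inspect its feature vector.
reduce_domain (Image, SingleChar, ImageChar)
get_features_ocr_class_svm (ImageChar, OCRHandle, 'true', Features)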


Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte


Input character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong
Handle of the OCR classifier.
. Transform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the feature vector be transformed with the preprocessing?
Default Value : "true"
List of values : Transform ∈ {"true", "false"}
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the character.
Result
If the parameters are valid the operator get_features_ocr_class_svm returns the value H_MSG_TRUE.
If necessary, an exception handling is raised.
Parallelization Information
get_features_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm
See also
create_ocr_class_svm
Module
OCR/OCV

T_get_params_ocr_class_svm ( const Htuple OCRHandle,
    Htuple *WidthCharacter, Htuple *HeightCharacter,
    Htuple *Interpolation, Htuple *Features, Htuple *Characters,
    Htuple *KernelType, Htuple *KernelParam, Htuple *Nu, Htuple *Mode,
    Htuple *Preprocessing, Htuple *NumComponents )

Return the parameters of an OCR classifier.


get_params_ocr_class_svm returns the parameters of an OCR classifier that were specified when the
classifier was created with create_ocr_class_svm. This is particularly useful if the classifier was read
with read_ocr_class_svm. The output of get_params_ocr_class_svm can, for example, be used
to check whether a character to be read is contained in the classifier. For a description of the parameters, see
create_ocr_class_svm.
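For illustration, a sketch in HDevelop syntax; the file name 'ocr.osc' is an assumption:

* Read a stored classifier and query the parameters it was created with,
* e.g., to check which characters it can distinguish.
read_ocr_class_svm ('ocr.osc', OCRHandle)
get_params_ocr_class_svm (OCRHandle, Width, Height, Interpolation, Features,
                          Characters, KernelType, KernelParam, Nu, Mode,
                          Preprocessing, NumComponents)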
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong


Handle of the OCR classifier.
. WidthCharacter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of the rectangle to which the gray values of the segmented character are zoomed.
. HeightCharacter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Height of the rectangle to which the gray values of the segmented character are zoomed.
. Interpolation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Interpolation mode for the zooming of the characters.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . char *
Features to be used for classification.
. Characters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Characters of the character set to be read.
. KernelType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
The kernel type.
. KernelParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Additional parameters for the kernel function.


. Nu (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *


Regularization constant of the SVM.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
The mode of the SVM.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Result
If the parameters are valid the operator get_params_ocr_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
get_params_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_svm, read_ocr_class_svm
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm
See also
trainf_ocr_class_svm, get_params_class_svm
Module
OCR/OCV

T_get_prep_info_ocr_class_svm ( const Htuple OCRHandle,
    const Htuple TrainingFile, const Htuple Preprocessing,
    Htuple *InformationCont, Htuple *CumInformationCont )

Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
get_prep_info_ocr_class_svm computes the information content of the training vectors that have
been transformed with the preprocessing given by Preprocessing. Preprocessing can be set to
’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created
with create_ocr_class_svm. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_ocr_class_svm, a sufficient number of samples must be stored in the training files given
by TrainingFile (see write_ocr_trainf).
InformationCont and CumInformationCont can be used to decide how many components of
the transformed feature vectors contain relevant information. An often used criterion is to require that
the transformed data must represent x% (e.g., 90%) of the total data. This can be decided eas-
ily from the first value of CumInformationCont that lies above x%. The number thus obtained
can be used as the value for NumComponents in a new call to create_ocr_class_svm. The
call to get_prep_info_ocr_class_svm already requires the creation of a classifier, and hence
the setting of NumComponents in create_ocr_class_svm to an initial value. However, if
get_prep_info_ocr_class_svm is called it is typically not known how many components are relevant, and
hence how to set NumComponents in this call. Therefore, the following two-step approach should typically be
used to select NumComponents: In a first step, a classifier with the maximum number for NumComponents is
created (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’). Then, the training samples are saved in a training file using write_ocr_trainf. Subsequently,
get_prep_info_ocr_class_svm is used to determine the information content of the components, and with
this NumComponents. After this, a new classifier with the desired number of components is created, and the
classifier is trained with trainf_ocr_class_svm.
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong


Handle of the OCR classifier.
. TrainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; Htuple . const char *
Name(s) of the training file(s).
Default Value : "ocr.trf"
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)

* Create the initial OCR classifier.


read_ocr_trainf_names (’ocr.trf’, CharacterNames, CharacterCount)
create_ocr_class_svm (8, 10, ’constant’, ’default’, CharacterNames,
’rbf’, 0.01, 0.01, ’one-versus-one’,
’principal_components’, 81, OCRHandle)
* Get the information content of the transformed feature vectors.
get_prep_info_ocr_class_svm (OCRHandle, ’ocr.trf’, ’principal_components’,
InformationCont, CumInformationCont)
* Determine the number of transformed components.
* NumComp = [...]
clear_ocr_class_svm (OCRHandle)
* Create the final OCR classifier.
create_ocr_class_svm (8, 10, ’constant’, ’default’, CharacterNames,
’rbf’, 0.01, 0.01,’one-versus-one’,
’principal_components’, NumComp, OCRHandle)
* Train the final classifier.
trainf_ocr_class_svm (OCRHandle, ’ocr.trf’, 0.001, ’default’)
write_ocr_class_svm (OCRHandle, ’ocr.osc’)
clear_ocr_class_svm (OCRHandle)

Result
If the parameters are valid the operator get_prep_info_ocr_class_svm returns the value H_MSG_TRUE.
If necessary, an exception handling is raised.
get_prep_info_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_svm, create_ocr_class_svm
Module
OCR/OCV


T_get_support_vector_num_ocr_class_svm ( const Htuple OCRHandle,
    Htuple *NumSupportVectors, Htuple *NumSVPerSVM )

Return the number of support vectors of an OCR classifier.


get_support_vector_num_ocr_class_svm returns in NumSupportVectors the num-
ber of support vectors that are stored in the support vector machine (SVM) given by OCRHandle.
get_support_vector_num_ocr_class_svm should be called before the labels of individual sup-
port vectors are read out with get_support_vector_ocr_class_svm, e.g., for visualizing which of
the training data become a SV (see get_support_vector_ocr_class_svm). The number of SVs
in each classifier is listed in NumSVPerSVM. The reason that its sum differs from the number obtained in
NumSupportVectors is that SV evaluations are reused throughout different binary sub-SVMs.
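A minimal sketch in HDevelop syntax; the training file name 'ocr.trf' is an assumption:

* Query the number of support vectors after training.
trainf_ocr_class_svm (OCRHandle, 'ocr.trf', 0.001, 'default')
get_support_vector_num_ocr_class_svm (OCRHandle, NumSupportVectors, NumSVPerSVM)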
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong


OCR handle.
. NumSupportVectors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Total number of support vectors.
. NumSVPerSVM (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Number of SV of each sub-SVM.
Result
If the parameters are valid the operator get_support_vector_num_ocr_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
get_support_vector_num_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm
Possible Successors
get_support_vector_ocr_class_svm
See also
create_ocr_class_svm
Module
OCR/OCV

T_get_support_vector_ocr_class_svm ( const Htuple OCRHandle,
    const Htuple IndexSupportVector, Htuple *Index )

Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
The operator get_support_vector_ocr_class_svm maps support vectors of a trained SVM-based
OCR classifier (given in OCRHandle) to the original training data set. The index of the SV is speci-
fied with IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be
a number between 0 and NumSupportVectors − 1, where NumSupportVectors can be deter-
mined with get_support_vector_num_ocr_class_svm. The index of this SV in the training data
is returned in Index. get_support_vector_ocr_class_svm can, for example, be used to visu-
alize the support vectors. To do so, the train file that has been used to train the SVM must be read with
read_ocr_trainf. The value returned in Index must be incremented by 1 and can then be used to select
the support vectors with select_obj from the training characters. If more than one train file has been used
in trainf_ocr_class_svm, Index behaves as if all train files had been merged into one train file with
concat_ocr_trainf.
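A hedged sketch in HDevelop syntax of the visualization described above; the train file name 'ocr.trf' is an
assumption and must be the file that was used in trainf_ocr_class_svm:

* Display the training characters that became support vectors.
read_ocr_trainf (TrainChars, 'ocr.trf', CharacterNames)
get_support_vector_num_ocr_class_svm (OCRHandle, NumSV, NumSVPerSVM)
for J := 0 to NumSV-1 by 1
    get_support_vector_ocr_class_svm (OCRHandle, J, Index)
    select_obj (TrainChars, SupportChar, Index+1)
    dev_display (SupportChar)
endfor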
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Htuple . Hlong


OCR handle.
. IndexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Index of the support vector to be returned (counted from 0).


. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *


Index of the support vector in the training set.
Result
If the parameters are valid the operator get_support_vector_ocr_class_svm returns the value
H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
get_support_vector_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm, get_support_vector_num_ocr_class_svm
See also
create_ocr_class_svm, read_ocr_trainf, append_ocr_trainf, concat_ocr_trainf
Module
OCR/OCV

read_ocr_class_svm ( const char *FileName, Hlong *OCRHandle )


T_read_ocr_class_svm ( const Htuple FileName, Htuple *OCRHandle )

Read an SVM-based OCR classifier from a file.


read_ocr_class_svm reads an OCR classifier that has been stored with write_ocr_class_svm. Since
the training of an OCR classifier can consume a relatively long time, the classifier is typically trained in an offline
process and written to a file with write_ocr_class_svm. In the online process the classifier is read with
read_ocr_class_svm and subsequently used for classification with do_ocr_single_class_svm or
do_ocr_multi_class_svm.
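A sketch of this workflow in HDevelop syntax; the file names 'ocr.trf' and 'ocr.osc' are assumptions:

* Offline (done once): train the classifier and write it to a file.
trainf_ocr_class_svm (OCRHandle, 'ocr.trf', 0.001, 'default')
write_ocr_class_svm (OCRHandle, 'ocr.osc')
clear_ocr_class_svm (OCRHandle)
* Online (per image): read the stored classifier and classify.
read_ocr_class_svm ('ocr.osc', OCRHandleRead)
do_ocr_multi_class_svm (Characters, Image, OCRHandleRead, Classes)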
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *


File name.
. OCRHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong *
Handle of the OCR classifier.
Result
If the parameters are valid the operator read_ocr_class_svm returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
read_ocr_class_svm is processed completely exclusively without parallelization.
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm
See also
create_ocr_class_svm, write_ocr_class_svm, read_class_svm, write_class_svm
Module
OCR/OCV

reduce_ocr_class_svm ( Hlong OCRHandle, const char *Method,
    Hlong MinRemainingSV, double MaxError, Hlong *OCRHandleReduced )

T_reduce_ocr_class_svm ( const Htuple OCRHandle, const Htuple Method,
    const Htuple MinRemainingSV, const Htuple MaxError,
    Htuple *OCRHandleReduced )

Approximate a trained SVM-based OCR classifier by a reduced SVM.


reduce_ocr_class_svm reduces the classification time of an SVM-based OCR classifier OCRHandle by
returning a reduced copy of it in OCRHandleReduced. The parameters Method, MinRemainingSV and
MaxError have the same meaning as in reduce_class_svm and are described there. Please note that the
classification time can also be reduced significantly with a preprocessing step in create_ocr_class_svm,
which possibly introduces fewer errors.
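A minimal sketch in HDevelop syntax; the file names and parameter values are assumptions for illustration:

* Reduce a trained classifier and store only the reduced copy.
trainf_ocr_class_svm (OCRHandle, 'ocr.trf', 0.001, 'default')
reduce_ocr_class_svm (OCRHandle, 'bottom_up', 2, 0.001, OCRHandleReduced)
write_ocr_class_svm (OCRHandleReduced, 'ocr_reduced.osc')
clear_ocr_class_svm (OCRHandle)
clear_ocr_class_svm (OCRHandleReduced)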
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong


Original handle of SVM-based OCR-classifier.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of postprocessing to reduce number of SVs.
Default Value : "bottom_up"
List of values : Method ∈ {"bottom_up"}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum number of remaining SVs.
Default Value : 2
Suggested values : MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction : MinRemainingSV ≥ 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximum allowed error of reduction.
Default Value : 0.001
Suggested values : MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction : MaxError > 0.0
. OCRHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong *
SVMHandle of reduced OCR classifier.
Result
If the parameters are valid the operator reduce_ocr_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
reduce_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
trainf_ocr_class_svm, get_support_vector_num_ocr_class_svm
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm,
get_support_vector_ocr_class_svm, get_support_vector_num_ocr_class_svm
See also
create_ocr_class_svm
References
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; The MIT Press, London; 1999.
Module
OCR/OCV

trainf_ocr_class_svm ( Hlong OCRHandle, const char *TrainingFile,
    double Epsilon, const char *TrainMode )

T_trainf_ocr_class_svm ( const Htuple OCRHandle,
    const Htuple TrainingFile, const Htuple Epsilon,
    const Htuple TrainMode )

Train an OCR classifier.


trainf_ocr_class_svm trains the OCR classifier OCRHandle with the training characters stored
in the OCR training files given by TrainingFile. The training files must have been created, e.g., us-
ing write_ocr_trainf, before calling trainf_ocr_class_svm. The parameters Epsilon and
TrainMode have the same meaning as in train_class_svm.


Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; (Htuple .) Hlong


Handle of the OCR classifier.
. TrainingFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Name(s) of the training file(s).
Default Value : "ocr.trf"
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Stop parameter for training.
Default Value : 0.001
Suggested values : Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) const char * / Hlong
Mode of training.
Default Value : "default"
List of values : TrainMode ∈ {"default", "add_sv_to_train_set"}
Example (Syntax: HDevelop)

* Train an OCR classifier


read_ocr_trainf_names (’ocr.trf’, CharacterNames, CharacterCount)
create_ocr_class_svm (8, 10, ’constant’, ’default’, CharacterNames,
’rbf’, 0.01, 0.01, ’one-versus-one’,
’normalization’, 81, OCRHandle)
trainf_ocr_class_svm (OCRHandle, ’ocr.trf’, 0.001, ’default’)
write_ocr_class_svm (OCRHandle, ’ocr.osc’)
clear_ocr_class_svm (OCRHandle)

Result
If the parameters are valid the operator trainf_ocr_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
trainf_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm, write_ocr_class_svm
Alternatives
read_ocr_class_svm
See also
train_class_svm
Module
OCR/OCV

write_ocr_class_svm ( Hlong OCRHandle, const char *FileName )


T_write_ocr_class_svm ( const Htuple OCRHandle,
const Htuple FileName )

Write an OCR classifier to a file.


write_ocr_class_svm writes the OCR classifier OCRHandle to the file given by FileName.
If a file extension is not specified in FileName, the default extension ’.osc’ is appended to
FileName. write_ocr_class_svm is typically called after the classifier has been trained with
trainf_ocr_class_svm. The classifier can be read with read_ocr_class_svm.
Parameter

. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong


Handle of the OCR classifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid the operator write_ocr_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
write_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_svm
Possible Successors
clear_ocr_class_svm
See also
create_ocr_class_svm, read_ocr_class_svm, write_class_svm, read_class_svm
Module
OCR/OCV

10.5 Tools
T_segment_characters ( const Hobject Region, const Hobject Image,
Hobject *ImageForeground, Hobject *RegionForeground,
const Htuple Method, const Htuple EliminateLines,
const Htuple DotPrint, const Htuple StrokeWidth,
const Htuple CharWidth, const Htuple CharHeight,
const Htuple ThresholdOffset, const Htuple Contrast,
Htuple *UsedThreshold )

Segments characters in a given region of an image.


This operator is used to segment characters in a given Region of an Image. The Region is only used to reduce
the working area. The segmented characters are returned in RegionForeground.
Two different methods to detect the characters are supplied. All segmentation methods assume that the text is
darker than the background. If this is not the case, please invert the image with invert_image.
The parameter Method determines the algorithm for text segmentation. The possible values are

’local_contrast_best’ This method extracts text that differs locally from the background. Therefore, it is suited
for images with inhomogeneous illumination. The enhancement of the text borders leads to a more accurate
determination of the outline of the text, which is especially useful if the background is highly textured.
The parameter Contrast defines the minimum contrast, i.e., the minimum gray value difference between
symbols and background.
’local_auto_shape’ The minimum contrast is estimated automatically such that the number of very small regions
is reduced. This method is especially suitable for noisy images. The parameter ThresholdOffset can
be used to adjust the threshold. Let g(x, y) be the gray value at position (x, y) in the input Image. The
threshold condition is determined by:
g(x, y) ≤ UsedThreshold + ThresholdOffset.

Set EliminateLines to ’true’ if the extraction of characters is disturbed by lines that are horizontal or vertical with
respect to the lines of text. The elimination is influenced by the maximum of CharWidth
and the maximum of CharHeight. For further information, see the description of these parameters.
DotPrint: Should be set to ’true’ if dot prints should be read, else to ’false’.


StrokeWidth: Specifies the stroke width of the text. It is used to calculate internally used mask sizes to
determine the characters. These mask sizes are also influenced by the parameters DotPrint, the average
CharWidth, and the average CharHeight.
CharWidth: This can be a tuple with up to three values. The first value is the average width of a character. The
second is the minimum width of a character and the third is the maximum width of a character. If the minimum is
not set or equals -1, the operator automatically sets this value depending on the average CharWidth. The same
is the case if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character. The
second is the minimum height of a character and the third is the maximum height of a character. If the minimum is
not set or equals -1, the operator automatically sets this value depending on the average CharHeight. The same
is the case if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
ThresholdOffset: This parameter can be used to adjust the threshold, which is used when the segmentation
method ’local_auto_shape’ is chosen.
Contrast: Defines the minimum contrast between the text and the background. This parameter is used if the
segmentation method ’local_contrast_best’ is selected.
UsedThreshold: After the execution, this parameter returns the threshold used to segment the characters.
ImageForeground returns the image that was internally used for the segmentation.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area in the image where the text lines are located.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ImageForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Image used for the segmentation.
. RegionForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region of characters.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method to segment the characters.
Default Value : "local_auto_shape"
List of values : Method ∈ {"local_contrast_best", "local_auto_shape"}
. EliminateLines (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Eliminate horizontal and vertical lines?
Default Value : "false"
List of values : EliminateLines ∈ {"true", "false"}
. DotPrint (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should dot print characters be detected?
Default Value : "false"
List of values : DotPrint ∈ {"true", "false"}
. StrokeWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Stroke width of a character.
Default Value : "medium"
List of values : StrokeWidth ∈ {"ultra_light", "light", "medium", "bold"}
. CharWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Width of a character.
Default Value : 25
Typical range of values : 1 ≤ CharWidth
Restriction : CharWidth ≥ 1


. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong


Height of a character.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. ThresholdOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Value to adjust the segmentation.
Default Value : 0
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum gray value difference between text and background.
Default Value : 10
Typical range of values : 1 ≤ Contrast
Restriction : Contrast ≥ 1
. UsedThreshold (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Threshold used to segment the characters.
Example (Syntax: HDevelop)

read_image (Image, ’dot_print_rotated/dot_print_rotated_’+J$’02d’)


text_line_orientation (Image, Image, 50, rad(-30), rad(30), OrientationAngle)
rotate_image (Image, ImageRotate, -OrientationAngle/rad(180)*180, ’constant’)
segment_characters (ImageRotate, ImageRotate, ImageForeground, RegionForeground,
’local_auto_shape’, ’false’, ’false’, ’medium’, 25, 25, 0, 10, UsedThreshold)

Result
If the input parameters are set correctly, the operator segment_characters returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
segment_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
text_line_orientation
Possible Successors
select_characters, connection
Alternatives
threshold
Module
Foundation

T_select_characters ( const Hobject Region, Hobject *RegionCharacters,
    const Htuple DotPrint, const Htuple StrokeWidth,
    const Htuple CharWidth, const Htuple CharHeight,
    const Htuple Punctuation, const Htuple DiacriticMarks,
    const Htuple PartitionMethod, const Htuple PartitionLines,
    const Htuple FragmentDistance, const Htuple ConnectFragments,
    const Htuple ClutterSizeMax, const Htuple StopAfter )

Selects characters from a given region.


select_characters selects from a given Region the areas which might be characters and returns them
in RegionCharacters. This is done by using features like StrokeWidth, DotPrint, the size of the
characters, and more. The given Region should be united, otherwise every region is processed separately. Thus,
do not call connection before calling select_characters, because then fragments or dots would not
be connected to a character. If you have more than one region with text, you can of course handle them without
merging them. The Region for select_characters typically comes from segment_characters, but
any other segmentation operator can be used as well.
The process of the selection can be partitioned into four parts. All steps are influenced by the parameters
StrokeWidth, CharHeight, and CharWidth. If you lose small objects like dots, adapt the minimum
CharWidth and the minimum CharHeight. But some parameters affect the result of a certain step in partic-
ular. A closer description follows below. With the parameter StopAfter you can terminate after a specified
step.
In the first step, ’step1_select_candidates’, CharWidth and the CharHeight are used to select the candidates.
The result of this step is also affected by ClutterSizeMax.
In the next step, ’step2_partition_characters’, the parameter PartitionMethod and the parameter
PartitionLines influence the result.
Step three, ’step3_connect_fragments’, uses the parameters ConnectFragments and DotPrint. If dot-
printed characters have to be detected and some dots are not connected to the character, there are two ways to
overcome this problem: You can increase the FragmentDistance and/or decrease the StrokeWidth.
In the last step, ’step4_select_characters’, the result is affected by the parameters DiacriticMarks and
Punctuation.
DotPrint: Should be set to ’true’ if dot prints should be read, else to ’false’.
StrokeWidth: Specifies the stroke width of the text. It is used to calculate internally used mask sizes to
determine the characters. These mask sizes are also influenced by the parameters DotPrint, the average
CharWidth, and the average CharHeight.
CharWidth: This can be a tuple with up to three values. The first value is the average width of a character. The
second is the minimum width of a character and the third is the maximum width of a character. If the minimum is
not set or equals -1, the operator automatically sets this value depending on the average CharWidth. The same is
the case if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character. The
second is the minimum height of a character and the third is the maximum height of a character. If the minimum
is not set or equals -1, the operator automatically sets this value depending on the average CharHeight. The same
is the case if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
Punctuation: Set this parameter to ’true’ if the operator also has to detect punctuation marks (e.g. .,:’‘"),
otherwise they will be suppressed.
DiacriticMarks: Set this parameter to ’true’ if the text in your application contains diacritic marks (e.g. â,é,ö),
or to ’false’ to suppress them.
PartitionMethod: If neighboring characters are printed close to each other, they may be partly merged. With
this parameter you can specify the method to partition such characters. The possible values are ’none’, which
means no partitioning is performed. ’fixed_width’ means that the partitioning assumes a constant character width.
If the width of the extracted region is well above the average CharWidth, the region is split into parts that have
the given average CharWidth. The partitioning starts at the left border of the region. ’variable_width’ means
that the characters are partitioned at the position where they have the thinnest connection. This method can be
selected for characters that are printed with a variable-width font or if many consecutive characters are extracted as
one symbol. It could be helpful to call text_line_slant and/or use text_line_orientation before
calling select_characters.
PartitionLines: If some text lines or some characters of different text lines are connected, set this parameter
to ’true’.
FragmentDistance: This parameter influences the connection of character fragments. If too much is con-
nected, set the parameter to ’narrow’ or ’medium’. In the case that more fragments should be connected, set
the parameter to ’medium’ or ’wide’. The connection is also influenced by the maximum of CharWidth and
CharHeight. See also ConnectFragments.


ConnectFragments: Set this parameter to ’true’ if the extracted symbols are fragmented, i.e., if a symbol is
not extracted as one region but broken up into several parts. See also FragmentDistance and StopAfter in
the step ’step3_connect_fragments’.
ClutterSizeMax: If the extracted characters contain clutter, i.e., small regions near the actual symbols, increase
this value. If parts of the symbols are missing, decrease this value.
StopAfter: Use this parameter in the case the operator does not produce the desired results. By modifying this
value the operator stops after the execution of the selected step and provides the corresponding results. To end on
completion, set StopAfter to ’completion’.
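For illustration, the following is a hypothetical HALCON/C call sketch. It assumes that the simple C binding of select_characters takes the control parameters as scalars in the order documented below, which is the usual HALCON/C convention; RegionForeground is assumed to be a previously segmented input region (e.g., from segment_characters):

  Hobject RegionCharacters;

  /* hypothetical sketch: dot print, average character size 60 x 60 pixels */
  select_characters(RegionForeground, &RegionCharacters,
                    "true",          /* DotPrint          */
                    "ultra_light",   /* StrokeWidth       */
                    60,              /* CharWidth         */
                    60,              /* CharHeight        */
                    "false",         /* Punctuation       */
                    "false",         /* DiacriticMarks    */
                    "none",          /* PartitionMethod   */
                    "false",         /* PartitionLines    */
                    "medium",        /* FragmentDistance  */
                    "true",          /* ConnectFragments  */
                    0,               /* ClutterSizeMax    */
                    "completion");   /* StopAfter         */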
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region of text lines in which to select the characters.
. RegionCharacters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Selected characters.
. DotPrint (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should dot print characters be detected?
Default Value : "false"
List of values : DotPrint ∈ {"true", "false"}
. StrokeWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Stroke width of a character.
Default Value : "medium"
List of values : StrokeWidth ∈ {"ultra_light", "light", "medium", "bold"}
. CharWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Width of a character.
Default Value : 25
Typical range of values : 1 ≤ CharWidth
Restriction : CharWidth ≥ 1
. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Height of a character.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. Punctuation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Add punctuation?
Default Value : "false"
List of values : Punctuation ∈ {"true", "false"}
. DiacriticMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Do diacritic marks exist?
Default Value : "false"
List of values : DiacriticMarks ∈ {"true", "false"}
. PartitionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method to partition neighboring characters.
Default Value : "none"
List of values : PartitionMethod ∈ {"none", "fixed_width", "variable_width"}
. PartitionLines (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should lines be partitioned?
Default Value : "false"
List of values : PartitionLines ∈ {"true", "false"}
. FragmentDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Distance of fragments.
Default Value : "medium"
List of values : FragmentDistance ∈ {"narrow", "medium", "wide"}
. ConnectFragments (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Connect fragments?
Default Value : "false"
List of values : ConnectFragments ∈ {"true", "false"}


. ClutterSizeMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Maximum size of clutter.
Default Value : 0
Typical range of values : 0 ≤ ClutterSizeMax
Restriction : 0 ≤ ClutterSizeMax
. StopAfter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Stop execution after this step.
Default Value : "completion"
List of values : StopAfter ∈ {"step1_select_candidates", "step2_partition_characters",
"step3_connect_fragments", "step4_select_characters", "completion"}
Example (Syntax: HDevelop)

read_image (Image, ’dot_print_rotated/dot_print_rotated_’+J$’02d’)


text_line_orientation (Image, Image, 50, rad(-30), rad(30), OrientationAngle)
rotate_image (Image, ImageRotate, -OrientationAngle/rad(180)*180, ’constant’)
segment_characters (ImageRotate, ImageRotate, ImageForeground, RegionForeground, ’local_
select_characters (RegionForeground, RegionCharacters, 1, ’ultra_light’, [60,1,100], [60

Result
If the input parameters are set correctly, the operator select_characters returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
select_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
segment_characters, text_line_slant
Possible Successors
do_ocr_single, do_ocr_multi
Alternatives
connection
Module
Foundation

text_line_orientation ( const Hobject Region, const Hobject Image,


Hlong CharHeight, double OrientationFrom, double OrientationTo,
double *OrientationAngle )

T_text_line_orientation ( const Hobject Region, const Hobject Image,


const Htuple CharHeight, const Htuple OrientationFrom,
const Htuple OrientationTo, Htuple *OrientationAngle )

Determines the orientation of a text line or paragraph.


text_line_orientation determines the orientation of a single text line or a paragraph in relation to
the horizontal image axis. If the orientation of a single text line should be determined, the range for the
OrientationFrom and OrientationTo should be in the interval from -pi/4 to pi/4.
The parameter Region specifies the area of the image in which the text lines are located. The Region is only
used to reduce the working area. To determine the orientation, the gray values inside that area are used. The text lines are
segmented by the operator text_line_orientation itself. If more than one region is passed, the numerical
values of the orientation angle are stored in a tuple, the position of a value in the tuple corresponding to the position
of the region in the input tuple.
CharHeight specifies the approximate height of the existing text lines in the region Region. It is assumed
that the text lines are darker than the background.
The search range can be restricted by the parameters OrientationFrom and OrientationTo, which also
influences the runtime of the operator.


With the calculated angle OrientationAngle and operators like affine_trans_image, the region
Region of the image Image can be rotated such that the text lines lie horizontally in the image. This may
simplify the character segmentation for OCR applications.
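For illustration, a minimal HALCON/C sketch mirroring the HDevelop example below; it assumes the usual simple C binding of rotate_image, which expects the rotation angle in degrees:

  Hobject Image, ImageRotate;
  double  OrientationAngle;

  read_image(&Image,"letters");
  /* determine the text line orientation within the full image domain */
  text_line_orientation(Image,Image,50,-0.523599,0.523599,&OrientationAngle);
  /* rotate the image so that the text lines become horizontal */
  rotate_image(Image,&ImageRotate,-OrientationAngle*180.0/3.1415926,"constant");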
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Area of text lines.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Height of the text lines.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. OrientationFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Minimum rotation of the text lines.
Default Value : -0.523599
Typical range of values : -1.570796 ≤ OrientationFrom ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationFrom) ∧ (OrientationFrom ≤ OrientationTo)
. OrientationTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Maximum rotation of the text lines.
Default Value : 0.523599
Typical range of values : -1.570796 ≤ OrientationTo ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationTo) ∧ (OrientationTo ≤ (pi/2))
. OrientationAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Calculated rotation angle of the text lines.
Example (Syntax: HDevelop)

read_image(Image,’letters’)
text_line_orientation(Image,Image,50,rad(-80),rad(80),OrientationAngle)
rotate_image(Image,ImageRotate,-OrientationAngle/rad(180)*180,’constant’)

Result
If the input parameters are set correctly, the operator text_line_orientation returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
text_line_orientation is reentrant and automatically parallelized (on tuple level).
Possible Successors
rotate_image, affine_trans_image, affine_trans_image_size
Module
Foundation

text_line_slant ( const Hobject Region, const Hobject Image,


Hlong CharHeight, double SlantFrom, double SlantTo,
double *SlantAngle )

T_text_line_slant ( const Hobject Region, const Hobject Image,


const Htuple CharHeight, const Htuple SlantFrom, const Htuple SlantTo,
Htuple *SlantAngle )

Determines the slant of characters of a text line or paragraph.


text_line_slant determines the slant of a single text line or a paragraph.
The parameter Region specifies the area of the image in which the text lines are located. The Region is only
used to reduce the working area. To determine the slant, the gray values inside that area are used. The text lines are
segmented by the operator text_line_slant itself. If more than one region is passed, the numerical values


of the orientation angle are stored in a tuple, the position of a value in the tuple corresponding to the position of
the region in the input tuple.
CharHeight specifies the approximate height of the existing text lines in the region Region. It is assumed that
the text lines are darker than the background.
The search range can be restricted by the parameters SlantFrom and SlantTo, which also influences the runtime
of the operator.
With the calculated slant angle SlantAngle and operators for affine transformations, the slant can be removed
from the characters. This may simplify the character separation for OCR applications. To work correctly all
characters of a region should have nearly the same slant.
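For illustration, a minimal HALCON/C sketch using the simple binding printed above; Region and Image are assumed to be an existing text region and the corresponding gray value image, and the de-slanting itself is shown in the HDevelop example further below:

  double SlantAngle;

  /* search for the slant in the default range of -45 to +45 degrees */
  text_line_slant(Region,Image,50,-0.785398,0.785398,&SlantAngle);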
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Area of text lines.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Height of the text lines.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. SlantFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Minimum slant of the characters
Default Value : -0.523599
Typical range of values : -0.785398 ≤ SlantFrom ≤ 0.785398
Restriction : ((−pi/4) ≤ SlantFrom) ∧ (SlantFrom ≤ SlantTo)
. SlantTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Maximum slant of the characters
Default Value : 0.523599
Typical range of values : -0.785398 ≤ SlantTo ≤ 0.785398
Restriction : ((−pi/4) ≤ SlantTo) ∧ (SlantTo ≤ (pi/4))
. SlantAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Calculated slant of the characters in the region
Example (Syntax: HDevelop)

hom_mat2d_identity(HomMat2DIdentity)
read_image(Image,’dot_print_slanted’)
/* correct slant */
text_line_slant(Image,Image,50,rad(-45),rad(45),SlantAngle)
hom_mat2d_slant(HomMat2DIdentity,-SlantAngle,’x’,0,0,HomMat2DSlant)
affine_trans_image(Image,Image,HomMat2DSlant,’constant’,’true’)

Result
If the input parameters are set correctly, the operator text_line_slant returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
text_line_slant is reentrant and automatically parallelized (on tuple level).
Possible Successors
hom_mat2d_slant, affine_trans_image, affine_trans_image_size
Module
Foundation


10.6 Training-Files

append_ocr_trainf ( const Hobject Character, const Hobject Image,


const char *Class, const char *FileName )

T_append_ocr_trainf ( const Hobject Character, const Hobject Image,


const Htuple Class, const Htuple FileName )

Add characters to a training file.


The operator append_ocr_trainf serves to prepare the training with the operator
trainf_ocr_class_box. Regions representing characters, including their gray values (region and pixel),
and the corresponding class names are written to a file. An arbitrary number of regions within one
image is supported. For each character (region) in Character the corresponding class name must be specified in
image is supported. For each character (region) in Character the corresponding class name must be specified in
Class. The gray values are passed via the parameter Image. In contrast to the operator write_ocr_trainf
the characters are appended to an existing file using the same training file format as this file. If the file does not
exist, a new file is generated. In this case, the file format can be chosen by the parameter ’ocr_trainf_version’ of
the operator set_system. If no file extension is specified in FileName, the extension ’.trf’ is appended to the
name.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be trained.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the characters.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Class (name) of the characters.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; (Htuple .) const char *
Name of the training file.
Default Value : "train_ocr"
Example

char class[128];

read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
/* select_obj indices start at 1 */
for (i=1; i<=num; i++) {
  select_obj(Character,&SingleCharacter,i);
  clear_window(WindowHandle);
  disp_region(SingleCharacter,WindowHandle);
  printf("class of character %d ?\n",i);
  scanf("%s",class);
  /* append the displayed character and its class name to the training file */
  append_ocr_trainf(SingleCharacter,Image,class,"train_ocr");
  clear_obj(SingleCharacter);
}

Result
If the parameters are correct, the operator append_ocr_trainf returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
append_ocr_trainf is processed completely exclusively without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr


Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Alternatives
write_ocr_trainf, write_ocr_trainf_image
Module
OCR/OCV

concat_ocr_trainf ( const char *SingleFiles, const char *ComposedFile )


T_concat_ocr_trainf ( const Htuple SingleFiles,
const Htuple ComposedFile )

Concatenate training files.


The operator concat_ocr_trainf stores all characters which are contained in the files SingleFiles into a
new file with the name ComposedFile. The file format can be defined by the parameter ’ocr_trainf_version’ of
the operator set_system. If no file extension is specified in ComposedFile, the extension ’.trf’ is appended
to the file name.
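A minimal HALCON/C sketch using the tuple variant; the file names ’digits’ and ’letters’ are placeholders for existing training files:

  Htuple SingleFiles, ComposedFile;

  create_tuple(&SingleFiles,2);
  set_s(SingleFiles,"digits",0);
  set_s(SingleFiles,"letters",1);
  create_tuple(&ComposedFile,1);
  set_s(ComposedFile,"all_characters",0);
  T_concat_ocr_trainf(SingleFiles,ComposedFile);
  destroy_tuple(SingleFiles);
  destroy_tuple(ComposedFile);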
Parameter
. SingleFiles (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Name of the single training files.
Default Value : ""
. ComposedFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; (Htuple .) const char *
Name of the composed training file.
Default Value : "all_characters"
Result
If the parameters are correct, the operator concat_ocr_trainf returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
concat_ocr_trainf is processed completely exclusively without parallelization.
Possible Predecessors
write_ocr_trainf, append_ocr_trainf
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Module
OCR/OCV

T_read_ocr_trainf ( Hobject *Characters, const Htuple TrainFileNames,


Htuple *CharacterNames )

Read training characters from files and convert to images.


read_ocr_trainf reads all characters from the specified file names and converts them into images. The
domain is defined according to the foreground of the characters (as specified in write_ocr_trainf). The
names of the characters are returned in CharacterNames. If more than one file name is given, the files are
processed in the order of the file names.
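A minimal HALCON/C sketch that reads a training file and displays each character image; ’train_ocr’ is a placeholder training file name and WindowHandle is assumed to be an already opened graphics window (see open_window):

  Hobject Characters, SingleChar;
  Htuple  TrainFileNames, CharacterNames;
  long    num, i;

  create_tuple(&TrainFileNames,1);
  set_s(TrainFileNames,"train_ocr",0);
  T_read_ocr_trainf(&Characters,TrainFileNames,&CharacterNames);
  count_obj(Characters,&num);
  for (i=1; i<=num; i++)
  {
    select_obj(Characters,&SingleChar,i);
    disp_image(SingleChar,WindowHandle);
    clear_obj(SingleChar);
  }
  clear_obj(Characters);
  destroy_tuple(TrainFileNames);
  destroy_tuple(CharacterNames);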
Parameter
. Characters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; Hobject * : byte / uint2
Images read from file.
. TrainFileNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; Htuple . const char *
Names of the training files.
Default Value : ""


. CharacterNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *


Names of the read characters.
Result
If the parameter values are correct the operator read_ocr_trainf returns the value H_MSG_TRUE. Other-
wise an exception handling is raised.
Parallelization Information
read_ocr_trainf is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
Possible Successors
disp_image, select_obj, zoom_image_size
Alternatives
read_ocr_trainf_select
See also
trainf_ocr_class_box
Module
OCR/OCV

read_ocr_trainf_names ( const char *TrainFileNames,


char *CharacterNames, Hlong *CharacterCount )

T_read_ocr_trainf_names ( const Htuple TrainFileNames,


Htuple *CharacterNames, Htuple *CharacterCount )

Query which characters are stored in a training file.


read_ocr_trainf_names extracts the names and frequency of all characters in the specified training files.
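A minimal HALCON/C sketch; length_tuple, get_s, and get_i are assumed to be the usual HALCON/C tuple access macros, and ’train_ocr’ is a placeholder file name:

  Htuple TrainFileNames, CharacterNames, CharacterCount;
  long   num, i;

  create_tuple(&TrainFileNames,1);
  set_s(TrainFileNames,"train_ocr",0);
  T_read_ocr_trainf_names(TrainFileNames,&CharacterNames,&CharacterCount);
  num = length_tuple(CharacterNames);
  for (i=0; i<num; i++)
    printf("%s: %ld samples\n",get_s(CharacterNames,i),get_i(CharacterCount,i));
  destroy_tuple(TrainFileNames);
  destroy_tuple(CharacterNames);
  destroy_tuple(CharacterCount);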
Parameter

. TrainFileNames (input_control) . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *


Names of the training files.
Default Value : ""
. CharacterNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .string(-array) ; (Htuple .) char *
Names of the read characters.
. CharacterCount (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of the characters.
Result
If the parameter values are correct the operator read_ocr_trainf_names returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
read_ocr_trainf_names is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
See also
trainf_ocr_class_box
Module
OCR/OCV


read_ocr_trainf_select ( Hobject *Characters,


const char *TrainFileNames, const char *SearchNames,
char *FoundNames )

T_read_ocr_trainf_select ( Hobject *Characters,


const Htuple TrainFileNames, const Htuple SearchNames,
Htuple *FoundNames )

Read specific training characters from files and convert to images.


read_ocr_trainf_select reads the characters given in SearchNames from the specified files and con-
verts them into images. It works similarly to read_ocr_trainf, but here the characters to be extracted
can be specified.
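A minimal HALCON/C sketch that extracts only the samples of the classes ’0’ and ’1’ from a placeholder training file ’train_ocr’:

  Hobject Characters;
  Htuple  TrainFileNames, SearchNames, FoundNames;

  create_tuple(&TrainFileNames,1);
  set_s(TrainFileNames,"train_ocr",0);
  create_tuple(&SearchNames,2);
  set_s(SearchNames,"0",0);
  set_s(SearchNames,"1",1);
  T_read_ocr_trainf_select(&Characters,TrainFileNames,SearchNames,&FoundNames);
  destroy_tuple(TrainFileNames);
  destroy_tuple(SearchNames);
  destroy_tuple(FoundNames);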
Parameter

. Characters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; Hobject * : byte / uint2


Images read from file.
. TrainFileNames (input_control) . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Names of the training files.
Default Value : ""
. SearchNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Names of the characters to be extracted.
Default Value : "0"
. FoundNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Names of the read characters.
Result
If the parameter values are correct the operator read_ocr_trainf_select returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
read_ocr_trainf_select is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
Possible Successors
disp_image, select_obj, zoom_image_size
Alternatives
read_ocr_trainf
See also
trainf_ocr_class_box
Module
OCR/OCV

write_ocr_trainf ( const Hobject Character, const Hobject Image,


const char *Class, const char *FileName )

T_write_ocr_trainf ( const Hobject Character, const Hobject Image,


const Htuple Class, const Htuple FileName )

Storing of trained characters into a file.


The operator write_ocr_trainf serves to prepare the training with the operator
trainf_ocr_class_box. Regions representing characters, including their gray values (region and pixel),
and the corresponding class names are written to a file. An arbitrary number of regions within one
image is supported. For each character (region) in Character the corresponding class name must be specified
in Class. The gray values are passed via the parameter Image. If no file extension is specified in FileName
the extension ’.trf’ is appended to the file name. The version of the file format used for writing data can be defined
by the parameter ’ocr_trainf_version’ of the operator set_system.


Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Characters to be trained.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values of the characters.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Class (name) of the characters.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; (Htuple .) const char *
Name of the training file.
Default Value : "train_ocr"
Example

char name[128];
Htuple Class,Name;

read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
create_tuple(&Class,num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
for (i=0; i<num; i++) {
  /* select_obj indices start at 1, tuple indices at 0 */
  select_obj(Character,&SingleCharacter,i+1);
  clear_window(WindowHandle);
  disp_region(SingleCharacter,WindowHandle);
  printf("class of character %d ?\n",i);
  scanf("%s",name);
  set_s(Class,name,i);
  clear_obj(SingleCharacter);
}
create_tuple(&Name,1);
set_s(Name,"trainfile",0);
T_write_ocr_trainf(Character,Image,Class,Name);

Result
If the parameters are correct, the operator write_ocr_trainf returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
write_ocr_trainf is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Module
OCR/OCV

write_ocr_trainf_image ( const Hobject Character, const char *Class,


const char *FileName )

T_write_ocr_trainf_image ( const Hobject Character,


const Htuple Class, const Htuple FileName )

Write characters into a training file.


The operator write_ocr_trainf_image is used to prepare the training with the operator
trainf_ocr_class_box. Regions representing characters, including their gray values (region and pixel),
and the corresponding class names are written to a file. An arbitrary number of regions within one
image is supported. For each character (region) in Character the corresponding class name must be specified
in Class. If no file extension is specified in FileName the extension ’.trf’ is appended to the file name. In
contrast to write_ocr_trainf one image per character is passed. The domain of this image defines the pixels
which belong to the character. The file format can be defined by the parameter ’ocr_trainf_version’ of the operator
set_system.
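A minimal HALCON/C sketch; CharImage is a hypothetical single-character image whose domain (e.g., created with reduce_domain) marks the character pixels:

  Hobject CharImage;

  /* ... CharImage has been created, e.g., with reduce_domain ... */
  write_ocr_trainf_image(CharImage,"A","train_ocr");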
Parameter

. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2


Characters to be trained.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Class (name) of the characters.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; (Htuple .) const char *
Name of the training file.
Default Value : "train_ocr"
Result
If the parameters are correct, the operator write_ocr_trainf_image returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
write_ocr_trainf_image is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Alternatives
write_ocr_trainf, append_ocr_trainf
Module
OCR/OCV



Chapter 11

Object

11.1 Information
count_obj ( const Hobject Objects, Hlong *Number )
T_count_obj ( const Hobject Objects, Htuple *Number )

Number of objects in a tuple.


The operator count_obj determines the number of objects contained in the object parameter Objects. In
this context it should be noted that an object is not the same as a connected component (see connection). For
example, the number of objects of a region consisting of three connected components is 1.
Attention
In Prolog and Lisp the length of the list is not necessarily identical with the number of objects. This is the case
when object keys are contained which were created in the compact mode (keys from compact and normal mode
can be used as a mixture). See in this connection set_system(’compact_object’,<true/false>).
Parameter
. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object-array ; Hobject
Objects to be examined.
. Number (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of objects in the tuple Objects.
Complexity
Runtime complexity: O(|Objects|).
Result
If the surrogates are correct, i.e. all objects are present in the HALCON operator data base, the operator
count_obj returns the value H_MSG_TRUE. The behavior in case of empty input (no input objects available)
is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
count_obj is reentrant and processed without parallelization.
See also
copy_obj, obj_to_integer, connection, set_system
Module
Foundation

get_channel_info ( const Hobject Object, const char *Request,


Hlong Channel, char *Information )

T_get_channel_info ( const Hobject Object, const Htuple Request,


const Htuple Channel, Htuple *Information )

Information about the components of an image object.


The operator get_channel_info gives information about the components of an image object. The following
requests (Request) are currently possible:

’creator’ Output of the names of the procedures which initially created the image components (not the object).
’type’ Output of the type of image component (’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’,
’complex’, ’vector_field’). The component 0 is of type ’region’ or ’xld’.

In the tuple Channel the numbers of the components about which information is required are stated. After car-
rying out get_channel_info, Information contains a tuple of strings (one string per entry in Channel)
with the required information.
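A minimal HALCON/C sketch that queries the type of the region component and of the first channel of an existing image object Image; set_i is assumed to be the integer counterpart of the set_s tuple macro:

  Htuple Request, Channel, Information;

  create_tuple(&Request,1);
  set_s(Request,"type",0);
  create_tuple(&Channel,2);
  set_i(Channel,0,0);               /* component 0: region / xld        */
  set_i(Channel,1,1);               /* component 1: first image channel */
  T_get_channel_info(Image,Request,Channel,&Information);
  destroy_tuple(Request);
  destroy_tuple(Channel);
  destroy_tuple(Information);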
Parameter

. Object (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object ; Hobject


Image object to be examined.
. Request (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Required information about object components.
Default Value : "creator"
List of values : Request ∈ {"creator", "type"}
. Channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . channel(-array) ; (Htuple .) Hlong
Components to be examined (0 for region/XLD).
Default Value : 0
Suggested values : Channel ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Requested information.
Result
If the parameters are correct the operator get_channel_info returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
get_channel_info is reentrant and processed without parallelization.
Possible Predecessors
read_image
See also
count_relation
Module
Foundation

get_obj_class ( const Hobject Object, char *Class )


T_get_obj_class ( const Hobject Object, Htuple *Class )

Name of the class of an image object.


get_obj_class returns the name of the corresponding class to each object. The following classes are possible:

’image’ Object with region (definition domain) and at least one channel.
’region’ Object with a region without gray values.
’xld_cont’ XLD object as contour
’xld_poly’ XLD object as polygon
’xld_parallel’ XLD object with parallel polygons


Parameter

. Object (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject


Image objects to be examined.
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Name of class.
Result
If the parameter values are correct the operator get_obj_class returns the value H_MSG_TRUE. Otherwise
an exception is raised.
Parallelization Information
get_obj_class is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image, disp_region, disp_xld
See also
get_channel_info, count_relation
Module
Foundation

test_equal_obj ( const Hobject Objects1, const Hobject Objects2,


Hlong *IsEqual )

T_test_equal_obj ( const Hobject Objects1, const Hobject Objects2,


Htuple *IsEqual )

Compare image objects regarding equality.


The operator test_equal_obj compares the regions and gray value components of all objects of the two
input parameters. The n-th object in Objects1 is compared to the n-th object in Objects2 (for all n). If
all corresponding regions are equal and the number of regions is also identical the parameter IsEqual is set to
TRUE, otherwise FALSE.
Attention
Image matrices and XLDs are not compared regarding their contents. Thus, two images or XLDs, respectively,
are “equal” if they are at the same place in the storage. If the input parameters are empty and the behavior was set
via the operator set_system(’no_object_result’,’true’), the parameter IsEqual is set to TRUE,
since all input (= empty set) is equal.
Parameter

. Objects1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object-array ; Hobject


Test objects.
. Objects2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object-array ; Hobject
Comparative objects.
. IsEqual (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
boolean result value.
Complexity
If F is the area of a region, the runtime complexity is O(1) or O(√F) if the result is TRUE, and O(√F) if the
result is FALSE.
Result
The operator test_equal_obj returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). If the number of objects differs, an exception is raised.
Parallelization Information
test_equal_obj is reentrant and processed without parallelization.
See also
test_equal_region


Module
Foundation

test_obj_def ( const Hobject Object, Hlong *IsDefined )


T_test_obj_def ( const Hobject Object, Htuple *IsDefined )

Test whether an object is already deleted.


The operator test_obj_def checks whether the object still exists in the HALCON operator database (i.e.,
whether the surrogate is still valid). If that is the case, IsDefined is set to TRUE, otherwise to FALSE. This check
is especially useful before deleting an object if it is not certain whether the object has already been deleted by a
prior deleting operator (clear_obj).
Attention
The parameter IsDefined can be TRUE even if the object was already deleted because the surrogates of deleted
objects are re-used for new objects. In this context see the example.
Parameter

. Object (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object ; Hobject


Object to be checked.
. IsDefined (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
boolean result value.
Example

circle(&Circle,100.0,100.0,100.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE): %d\n",IsDefined);
clear_obj(Circle);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_FALSE): %d\n",IsDefined);
gen_rectangle1(&Rectangle,200.0,200.0,300.0,300.0);
test_obj_def(Circle,IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE!!!): %d\n",IsDefined);

Complexity
The runtime complexity is O(1).
Result
The operator test_obj_def returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>).
Parallelization Information
test_obj_def is reentrant and processed without parallelization.
Possible Predecessors
clear_obj, gen_circle, gen_rectangle1
See also
set_check, clear_obj, reset_obj_db
Module
Foundation


11.2 Manipulation

clear_obj ( const Hobject Objects )


T_clear_obj ( const Hobject Objects )

Delete an iconic object from the HALCON database.


clear_obj deletes iconic objects, which are no longer needed, from the HALCON database. It should be noted
that clear_obj is the only way to delete objects from the database, and hence to reclaim their memory, in
HALCON/C. In all other HALCON language interfaces, clear_obj must not be used because objects are
destroyed automatically through appropriate destructors.
Images and regions are normally used by several iconic objects at the same time (uses less memory!). This has the
consequence that a region or an image is only deleted if all objects using it have been deleted.
The operator reset_obj_db can be used to reset the system and clear all remaining iconic objects.
Attention
Regarding the use of local variables: Because only local variables are deleted on exit of a subroutine, while the
HALCON database is not updated, it is necessary to clear local objects before exiting the subroutine.
Parameter
. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Objects to be deleted.
Result
clear_obj returns H_MSG_TRUE if all objects are contained in the HALCON database. If not all objects are
valid (e.g., already cleared), an exception is raised, which also clears all valid objects. The operator
set_check(’~clear’) can be used to suppress the raising of this exception. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
clear_obj is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
reset_obj_db
See also
test_obj_def, set_check
Module
Foundation

concat_obj ( const Hobject Objects1, const Hobject Objects2,


Hobject *ObjectsConcat )

T_concat_obj ( const Hobject Objects1, const Hobject Objects2,


Hobject *ObjectsConcat )

Concatenate two iconic object tuples.


concat_obj concatenates the two tuples of iconic objects Objects1 and Objects2 into a new object tuple
ObjectsConcat. Hence, this tuple contains all the objects of the two input tuples:
ObjectsConcat = [Objects1,Objects2]
In ObjectsConcat the objects of Objects1 are stored first, followed by the objects of Objects2. The order
of the objects is preserved. As usual, only the objects are copied, and not the corresponding images and regions,
i.e., no new memory is allocated. concat_obj is designed especially for HALCON/C. In languages like C++
it is not needed.
concat_obj should not be confused with union1 or union2, in which regions are merged, i.e., in which the
number of objects is modified.


concat_obj can be used to concatenate objects of different image object types (e.g., images and XLD contours)
into a single object. This is only recommended if it is necessary to accumulate in a single object variable, for
example, the results of an image processing sequence. It should be noted that the only operators that can handle
such object tuples of mixed type are concat_obj, copy_obj, select_obj, and disp_obj. For technical
reasons, object tuples of mixed type must not be created in HDevelop.
Parameter
. Objects1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Object tuple 1.
. Objects2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Object tuple 2.
. ObjectsConcat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object-array ; Hobject *
Concatenated objects.
Example

/* generate a tuple of a circle and a rectangle */

gen_circle(&Circle,200.0,400.0,23.0);
gen_rectangle1(&Rectangle,23.0,44.0,203.0,201.0);
concat_obj(Circle,Rectangle,&CircleAndRectangle);
clear_obj(Circle); clear_obj(Rectangle);
disp_region(CircleAndRectangle,WindowHandle);

Complexity
Runtime complexity: O(|Objects1| + |Objects2|);
Memory complexity of the result objects: O(|Objects1| + |Objects2|)
Result
concat_obj returns H_MSG_TRUE if all objects are contained in the HALCON database. If the input is empty
the behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception
is raised.
Parallelization Information
concat_obj is reentrant and processed without parallelization.
See also
count_obj, copy_obj, select_obj, disp_obj
Module
Foundation

copy_obj ( const Hobject Objects, Hobject *ObjectsSelected, Hlong Index,


Hlong NumObj )

T_copy_obj ( const Hobject Objects, Hobject *ObjectsSelected,


const Htuple Index, const Htuple NumObj )

Copy an iconic object in the HALCON database.


copy_obj copies NumObj iconic objects beginning with index Index (starting with 1) from the iconic input
object tuple Objects to the output object ObjectsSelected. If -1 is passed for NumObj all objects beginning
with Index are copied. No new storage is allocated for the regions and images. Instead, new objects containing
references to the existing objects are created. The number of objects in an object tuple can be queried with the
operator count_obj.
Parameter
. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Objects to be copied.
. ObjectsSelected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Copied objects.


. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Starting index of the objects to be copied.
Default Value : 1
Suggested values : Index ∈ {1, 2, 3, 4, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000}
Typical range of values : 1 ≤ Index
Restriction : Index ≤ number(Objects)
. NumObj (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of objects to be copied or -1.
Default Value : 1
Suggested values : NumObj ∈ {-1, 1, 2, 3, 4, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000}
Typical range of values : -1 ≤ NumObj
Restriction : (NumObj = -1) ∨ (((NumObj + Index) − 1) ≤ number(Objects))
Example

/* Access all regions */

count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
copy_obj(Regions,&Single,i,1);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}

Complexity
Runtime complexity: O(|Objects| + NumObj);
Memory complexity of the result object: O(NumObj)
Result
copy_obj returns H_MSG_TRUE if all objects are contained in the HALCON database and all
parameters are correct. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
copy_obj is reentrant and processed without parallelization.
Possible Predecessors
count_obj
Alternatives
select_obj
See also
count_obj, concat_obj, obj_to_integer, copy_image
Module
Foundation

gen_empty_obj ( Hobject *EmptyObject )


T_gen_empty_obj ( Hobject *EmptyObject )

Create an empty object tuple.


The operator gen_empty_obj creates an empty tuple. This means that the output parameter does not contain
any objects. Thus, the operator count_obj returns 0. However, clear_obj can be called for the output. It
should be noted that an empty object tuple must not be confused with an empty region. For an empty region, i.e., a
region with 0 pixels, count_obj returns the value 1.
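A minimal sketch illustrating this behavior:

  Hobject Empty;
  long Num;

  gen_empty_obj(&Empty);
  count_obj(Empty,&Num);   /* Num is 0: the tuple contains no objects */
  clear_obj(Empty);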


Parameter
. EmptyObject (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object ; Hobject *
No objects.
Parallelization Information
gen_empty_obj is reentrant and processed without parallelization.
Module
Foundation

integer_to_obj ( Hobject *Objects, Hlong SurrogateTuple )


T_integer_to_obj ( Hobject *Objects, const Htuple SurrogateTuple )

Convert an “integer number” into an iconic object.


integer_to_obj is the inverse operator to obj_to_integer. All surrogates of objects passed in
SurrogateTuple are stored as objects. In contrast to obj_to_integer, the objects are duplicated.
integer_to_obj is intended especially for use in HALCON/C, because iconic objects and control parame-
ters are treated differently in C.
Attention
The objects are duplicated in the database.
Parameter
. Objects (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Created objects.
. SurrogateTuple (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer(-array) ; (Htuple .) Hlong
Tuple of object surrogates.
Result
integer_to_obj returns H_MSG_TRUE if all parameters are correct, i.e., if they are valid object keys. If the
input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception is raised.
Parallelization Information
integer_to_obj is reentrant and processed without parallelization.
See also
obj_to_integer
Module
Foundation

obj_to_integer ( const Hobject Objects, Hlong Index, Hlong Number,


Hlong *SurrogateTuple )

T_obj_to_integer ( const Hobject Objects, const Htuple Index,


const Htuple Number, Htuple *SurrogateTuple )

Convert an iconic object into an “integer number.”


obj_to_integer stores Number of the database keys of the input object Objects, starting at index Index,
as integer numbers in the output parameter SurrogateTuple. If -1 is passed for Number, all objects beginning
with Index are copied. This facilitates direct access to an arbitrary element of Objects. In conjunction with
count_obj (returns the number of objects in Objects) the elements of Objects can be processed succes-
sively. The objects are not duplicated by obj_to_integer and thus must not be cleared by clear_obj.
Attention
The objects’ data is not duplicated.


Parameter

. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject


Objects for which the surrogates are to be returned.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Starting index of the surrogates to be returned.
Default Value : 1
Typical range of values : 1 ≤ Index
. Number (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of surrogates to be returned.
Default Value : -1
Restriction : (Number = -1) ∨ ((Number + Index) ≤ number(Objects))
. SurrogateTuple (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer(-array) ; (Htuple .) Hlong *
Tuple containing the surrogates.
Example

/* Access the i-th element: */


long i,Surrogate;
obj_to_integer(Objects,i,1,&Surrogate);

Complexity
Runtime complexity: O(|Objects| + Number)
Result
obj_to_integer returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
obj_to_integer is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
copy_obj, select_obj, copy_image, gen_image_proto
See also
integer_to_obj, count_obj
Module
Foundation

select_obj ( const Hobject Objects, Hobject *ObjectSelected,


Hlong Index )

T_select_obj ( const Hobject Objects, Hobject *ObjectSelected,


const Htuple Index )

Select objects from an object tuple.


select_obj copies the iconic objects with the indices given by Index (starting with 1) from the iconic input
object tuple Objects to the output object ObjectSelected. No new storage is allocated for the regions and
images. Instead, new objects containing references to the existing objects are created. The number of objects in an
object tuple can be queried with the operator count_obj.
Parameter

. Objects (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject


Input objects.
. ObjectSelected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Selected objects.


. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong


Indices of the objects to be selected.
Default Value : 1
Suggested values : Index ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 50, 100, 200, 500, 1000, 2000, 5000}
Restriction : Index ≥ 1
Example

/* Access to all Regions */

count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
select_obj(Regions,&Single,i);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}

Complexity
Runtime complexity: O(|Objects|)
Result
select_obj returns H_MSG_TRUE if all objects are contained in the HALCON database and
all parameters are correct. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
select_obj is reentrant and processed without parallelization.
Possible Predecessors
count_obj
Alternatives
copy_obj
See also
count_obj, concat_obj, obj_to_integer
Module
Foundation



Chapter 12

Regions

12.1 Access
T_get_region_chain ( const Hobject Region, Htuple *Row,
Htuple *Column, Htuple *Chain )

Contour of an object as chain code.


The operator get_region_chain returns the contour of a region. A contour is a series of pixels describing
the outline of the region. The contour “lies on” the region. It starts at the smallest line number; in that line at the
pixel with the largest column index. The rotation occurs clockwise. Holes of the region are ignored. The direction
code (chain code) is defined as follows:

3 2 1
4 ∗ 0
5 6 7

The operator get_region_chain returns the code in the form of a tuple. In case of an empty region the
parameters Row and Column are zero and Chain is the empty tuple.
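The contour pixels can be reconstructed from the chain code, for example as in the following sketch; length_tuple and get_i are assumed to be the usual HALCON/C tuple access macros:

  /* row/column offsets for the direction codes 0..7 of the table above */
  static const long dr[8] = { 0, -1, -1, -1,  0,  1, 1, 1 };
  static const long dc[8] = { 1,  1,  0, -1, -1, -1, 0, 1 };
  Htuple Row, Column, Chain;
  long r, c, i, len;

  T_get_region_chain(Region,&Row,&Column,&Chain);
  r = get_i(Row,0);
  c = get_i(Column,0);
  len = length_tuple(Chain);
  for (i=0; i<len; i++)
  {
    r += dr[get_i(Chain,i)];
    c += dc[get_i(Chain,i)];   /* (r,c) is the next contour pixel */
  }
  destroy_tuple(Row);
  destroy_tuple(Column);
  destroy_tuple(Chain);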
Attention
Holes of the region are ignored. Only one region may be passed, and it must have exactly one connection compo-
nent.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Region to be transformed.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.begin.y ; Htuple . Hlong *
Line of starting point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.begin.x ; Htuple . Hlong *
Column of starting point.
. Chain (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chain.code-array ; Htuple . Hlong *
Direction code of the contour (from starting point).
Typical range of values : 0 ≤ Chain ≤ 7
Result
The operator get_region_chain normally returns the value H_MSG_TRUE. If more than one connec-
tion component is passed an exception handling is caused. The behavior in case of empty input (no in-
put regions available) is set via the operator set_system(’no_object_result’,<Result>). The
behavior in case of empty region (the region is the empty set) is set via the operator set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_region_chain is reentrant and processed without parallelization.


Possible Predecessors
sobel_amp, threshold, skeleton, edges_image, gen_rectangle1, gen_circle
Possible Successors
approx_chain, approx_chain_simple
See also
copy_obj, get_region_contour, get_region_polygon
Module
Foundation

T_get_region_contour ( const Hobject Region, Htuple *Rows,


Htuple *Columns )

Access the contour of an object.


The operator get_region_contour returns the contour of a region. A contour is a sequence of line (Rows)
and column coordinates (Columns) describing the boundary of the region. The contour lies on the region. It
starts at the smallest line number; in that line at the pixel with the largest column index. The rotation direction is
clockwise. The first pixel of the contour is identical with the last. Holes of the region are ignored. The operator
get_region_contour returns the coordinates in the form of tuples. An empty region is passed as empty tuple.
Attention
Holes of the region are ignored. Only one region may be passed, and this region must have exactly one connection
component.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Output region.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong *
Line numbers of the contour pixels.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong *
Column numbers of the contour pixels.
Number of elements : Columns = Rows
Result
The operator get_region_contour normally returns the value H_MSG_TRUE. If more than one connection
component is passed an exception handling is caused. The behavior in case of empty input (no input regions
available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_contour is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, skeleton, edges_image, gen_rectangle1, gen_circle
See also
copy_obj, get_region_chain, get_region_polygon
Module
Foundation

T_get_region_convex ( const Hobject Region, Htuple *Rows,


Htuple *Columns )

Access convex hull as contour.


The operator get_region_convex returns the convex hull of a region as a polygon. The polygon is a minimal
sequence of line (Rows) and column coordinates (Columns) describing the hull of the region. The polygon
pixels lie on the region. The polygon starts at the smallest line number; in this line at the pixel with the largest
column index. The rotation direction is clockwise. The first pixel of the polygon is identical with the last. The
operator get_region_convex returns the coordinates in the form of tuples. An empty region is passed as
empty tuple.


Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Output region.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong *
Line numbers of contour pixels.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong *
Column numbers of the contour pixels.
Number of elements : Columns = Rows
Result
The operator get_region_convex returns the value H_MSG_TRUE.
Parallelization Information
get_region_convex is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton, dyn_threshold
Possible Successors
disp_polygon
Alternatives
shape_trans
See also
select_obj, get_region_contour
Module
Foundation

T_get_region_points ( const Hobject Region, Htuple *Rows,


Htuple *Columns )

Access the pixels of a region.


The operator get_region_points returns the region data in the form of coordinate lists. The coordinates are
sorted in the following order:

(r1, c1) ≤ (r2, c2) := (r1 < r2) ∨ ((r1 = r2) ∧ (c1 ≤ c2))

get_region_points returns the coordinates in the form of tuples. An empty region is passed as empty tuple.
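A minimal HALCON/C sketch that prints the sorted pixel coordinates; length_tuple and get_i are again assumed to be the usual tuple access macros:

  Htuple Rows, Columns;
  long num, i;

  T_get_region_points(Region,&Rows,&Columns);
  num = length_tuple(Rows);
  for (i=0; i<num; i++)
    printf("(%ld,%ld)\n",get_i(Rows,i),get_i(Columns,i));
  destroy_tuple(Rows);
  destroy_tuple(Columns);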
Attention
Only one region may be passed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
This region is accessed.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . Hlong *
Line numbers of the pixels in the region.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . Hlong *
Column numbers of the pixels in the region.
Number of elements : Columns = Rows
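Example

A round-trip sketch (input Region assumed): read out all pixels and rebuild an identical region with
gen_region_points.

T_get_region_points(Region,&Rows,&Columns);
T_gen_region_points(&RegionCopy,Rows,Columns);
destroy_tuple(Rows);
destroy_tuple(Columns);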
Result
The operator get_region_points normally returns the value H_MSG_TRUE. If more than one connected
component is passed, an exception handling is raised. The behavior in case of empty input (no input regions
available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_points is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, connection
Alternatives
get_region_runs
See also
copy_obj, gen_region_points
Module
Foundation
T_get_region_polygon ( const Hobject Region, const Htuple Tolerance,
                       Htuple *Rows, Htuple *Columns )

Polygon approximation of a region.
The operator get_region_polygon calculates a polygon to approximate the edge of a region. A polygon
is a sequence of line (Rows) and column coordinates (Columns). It describes the contour of the region. Only
the base points of the polygon are returned. The parameter Tolerance indicates how large the maximum dis-
tance between the polygon and the edge of the region may be. Holes of the region are ignored. The operator
get_region_polygon returns the coordinates in the form of tuples.
Attention
Holes of the region are ignored. Only one region may be passed, and this region must have exactly one connected
component.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region to be approximated.
. Tolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Maximum distance between the polygon and the edge of the region.
Default Value : 5.0
Suggested values : Tolerance ∈ {0.0, 2.0, 5.0, 10.0}
Typical range of values : 0.0 ≤ Tolerance (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; Htuple . Hlong *
Line numbers of the base points of the contour.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; Htuple . Hlong *
Column numbers of the base points of the contour.
Number of elements : Columns = Rows
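Example

A sketch (the input Region is assumed; create_tuple/set_d/destroy_tuple are the tuple helpers of the HALCON/C
interface): approximate the region with a tolerance of 5 pixels and rebuild the boundary from the base points.

create_tuple(&Tolerance,1);
set_d(Tolerance,5.0,0);
T_get_region_polygon(Region,Tolerance,&Rows,&Columns);
destroy_tuple(Tolerance);
T_gen_region_polygon(&Boundary,Rows,Columns);
destroy_tuple(Rows);
destroy_tuple(Columns);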
Result
The operator get_region_polygon normally returns the value H_MSG_TRUE. If more than one connected
component is passed, an exception handling is raised. The behavior in case of empty input (no input regions
available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_polygon is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, skeleton, edges_image
See also
copy_obj, gen_region_polygon, disp_polygon, get_region_chain,
get_region_contour, set_line_approx
Module
Foundation
T_get_region_runs ( const Hobject Region, Htuple *Row,
                    Htuple *ColumnBegin, Htuple *ColumnEnd )

Access the runlength coding of a region.
The operator get_region_runs returns the region data in the form of chord tuples. The chord representation
is obtained by examining a region line by line with ascending line number (= from “top” to “bottom”). Every line is
traversed from left to right (ascending column number), storing the starting and ending points of all region segments
(= chords). Thus a region can be described by a sequence of chords, a chord being defined by its line number and
the column numbers of its starting and ending points. The operator get_region_runs returns the three components
of the chords in the form of tuples. In case of an empty region three empty tuples are returned.
Attention
Only one region may be passed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
This region is accessed.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.y-array ; Htuple . Hlong *
Line numbers of the chords.
. ColumnBegin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x1-array ; Htuple . Hlong *
Column numbers of the starting points of the chords.
Number of elements : ColumnBegin = Row
. ColumnEnd (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x2-array ; Htuple . Hlong *
Column numbers of the ending points of the chords.
Number of elements : ColumnEnd = Row
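Example

A round-trip sketch (input Region assumed): extract the runlength code and reconstruct the region with
gen_region_runs.

T_get_region_runs(Region,&Row,&ColumnBegin,&ColumnEnd);
T_gen_region_runs(&RegionCopy,Row,ColumnBegin,ColumnEnd);
destroy_tuple(Row);
destroy_tuple(ColumnBegin);
destroy_tuple(ColumnEnd);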
Result
The operator get_region_runs normally returns the value H_MSG_TRUE. If more than one region is passed,
an exception handling is raised. The behavior in case of empty input (no input regions available) is set via the
operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_runs is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection
Alternatives
get_region_points
See also
copy_obj, gen_region_runs
Module
Foundation
12.2 Creation

gen_checker_region ( Hobject *RegionChecker, Hlong WidthRegion,
                     Hlong HeightRegion, Hlong WidthPattern, Hlong HeightPattern )

T_gen_checker_region ( Hobject *RegionChecker,
                       const Htuple WidthRegion, const Htuple HeightRegion,
                       const Htuple WidthPattern, const Htuple HeightPattern )

Create a checkered region.
The operator gen_checker_region returns a checkered region. Every black field of the checkerboard belongs
to the region. The horizontal and vertical expansion of the region is limited by WidthRegion and HeightRegion,
respectively; the size of the fields of the checkerboard is given by WidthPattern × HeightPattern.
Attention
If a very small pattern is chosen (WidthPattern < 4) the created region requires much storage.
Parameter
. RegionChecker (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created checkerboard region.
. WidthRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Largest occurring x value of the region.
Default Value : 511
Suggested values : WidthRegion ∈ {10, 20, 31, 63, 127, 255, 300, 400, 511}
Typical range of values : 1 ≤ WidthRegion ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : WidthRegion ≥ 1
. HeightRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Largest occurring y value of the region.
Default Value : 511
Suggested values : HeightRegion ∈ {10, 20, 31, 63, 127, 255, 300, 400, 511}
Typical range of values : 1 ≤ HeightRegion ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : HeightRegion ≥ 1
. WidthPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of a field of the checkerboard.
Default Value : 64
Suggested values : WidthPattern ∈ {1, 2, 4, 8, 16, 20, 32, 64, 100, 128, 200, 300, 500}
Typical range of values : 1 ≤ WidthPattern ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (WidthPattern > 0) ∧ (WidthPattern < WidthRegion)
. HeightPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of a field of the checkerboard.
Default Value : 64
Suggested values : HeightPattern ∈ {1, 2, 4, 8, 16, 20, 32, 64, 100, 128, 200, 300, 500}
Typical range of values : 1 ≤ HeightPattern ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (HeightPattern > 0) ∧ (HeightPattern < HeightRegion)
Example
gen_checker_region(&Checker,512,512,32,64);
set_draw(WindowHandle,"fill");
set_part(WindowHandle,0,0,511,511);
disp_region(Checker,WindowHandle);
Complexity
The required storage (in bytes) for the region is:
O((WidthRegion ∗ HeightRegion)/WidthPattern)
Result
The operator gen_checker_region returns the value H_MSG_TRUE if the parameter values are correct.
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_checker_region is reentrant and processed without parallelization.
Possible Successors
paint_region
Alternatives
gen_grid_region, gen_region_polygon_filled, gen_region_points,
gen_region_runs, gen_rectangle1, concat_obj, gen_random_region,
gen_random_regions
See also
hamming_change_region, reduce_domain
Module
Foundation
gen_circle ( Hobject *Circle, double Row, double Column, double Radius )

T_gen_circle ( Hobject *Circle, const Htuple Row, const Htuple Column,
               const Htuple Radius )

Create a circle.
The operator gen_circle generates one or more circles described by the center and Radius. If several circles
shall be generated the coordinates must be passed in the form of tuples.
gen_circle only creates symmetric circles. To achieve this, the radius is rounded internally to a multiple of 0.5.
If an integer number is specified for the radius (i.e., 1, 2, 3, ...) an even diameter is obtained, and hence the circle
can only be symmetric with respect to a center with coordinates that have a fractional part of 0.5. Consequently,
internally the coordinates of the center are adapted to the closest coordinates that have a fractional part of 0.5. Here,
integer coordinates are rounded down to the next smaller values with a fractional part of 0.5. For odd diameters
(i.e., radius = 1.5, 2.5, 3.5, ...), the circle can only be symmetric with respect to a center with integer coordinates.
Hence, internally the coordinates of the center are rounded to the nearest integer coordinates. It should be noted
that the above algorithm may lead to the fact that circles with an even diameter are not contained in circles with
the next larger odd diameter, even if the coordinates specified in Row and Column are identical.
If the circle extends beyond the image edge it is clipped to the current image format if the value of the system flag
’clip_region’ is set to ’true’ ( set_system).
Parameter
. Circle (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Generated circle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double / Hlong
Line index of center.
Default Value : 200.0
Suggested values : Row ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Row ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double / Hlong
Column index of center.
Default Value : 200.0
Suggested values : Column ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Column ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double / Hlong
Radius of circle.
Default Value : 100.5
Suggested values : Radius ∈ {1.0, 1.5, 2.0, 2.5, 3, 3.5, 4, 4.5, 5.5, 6.5, 7.5, 9.5, 11.5, 15.5, 20.5, 25.5, 31.5,
50.5}
Typical range of values : 1.0 ≤ Radius ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Radius > 0.0
Example
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
read_image(&Image,"meer");
gen_circle(&Circle,300.0,200.0,150.5);
reduce_domain(Image,Circle,Mask);
disp_color(Mask,WindowHandle);
Complexity
Runtime complexity: O(Radius ∗ 2)
Storage complexity (byte): O(Radius ∗ 8)
Result
If the parameter values are correct, the operator gen_circle returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised. The clipping according to the current image format is set via
the operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created
by clipping (the circle lies completely outside of the image format), the operator set_system
(’store_empty_region’,<true/false>) determines whether the empty region is returned.
Parallelization Information
gen_circle is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_ellipse, gen_region_polygon_filled, gen_region_points, gen_region_runs,
draw_circle
See also
disp_circle, set_shape, smallest_circle, reduce_domain
Module
Foundation
gen_ellipse ( Hobject *Ellipse, double Row, double Column, double Phi,
              double Radius1, double Radius2 )

T_gen_ellipse ( Hobject *Ellipse, const Htuple Row,
                const Htuple Column, const Htuple Phi, const Htuple Radius1,
                const Htuple Radius2 )

Create an ellipse.
The operator gen_ellipse generates one or more ellipses with the center (Row, Column), the orientation
Phi, and the two radii Radius1 and Radius2. The angle is given in radians and is measured from the x axis in
the mathematically positive direction. More than one region can be created by passing tuples of parameter values.
The center must be located within the image coordinates. The coordinate system runs from (0,0) (upper left corner)
to (Width-1,Height-1). See get_system and reset_obj_db in this context. If the ellipse reaches beyond the
edge of the image it is clipped to the current image format according to the value of the system flag ’clip_region’
(set_system).
Parameter
. Ellipse (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Created ellipse(s).
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y(-array) ; (Htuple .) double / Hlong
Line index of center.
Default Value : 200.0
Suggested values : Row ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Row ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x(-array) ; (Htuple .) double / Hlong
Column index of center.
Default Value : 200.0
Suggested values : Column ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Column ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad(-array) ; (Htuple .) double / Hlong
Orientation of the longer radius (Radius1).
Default Value : 0.0
Suggested values : Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Typical range of values : -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. Radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1(-array) ; (Htuple .) double / Hlong
Longer radius.
Default Value : 100.0
Suggested values : Radius1 ∈ {2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Radius1 ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Radius1 > 0
. Radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2(-array) ; (Htuple .) double / Hlong
Shorter radius.
Default Value : 60.0
Suggested values : Radius2 ∈ {1.0, 2.0, 4.0, 5.0, 10.0, 20.0, 50.0, 100.0, 256.0, 300.0, 400.0}
Typical range of values : 1.0 ≤ Radius2 ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : (Radius2 > 0) ∧ (Radius2 ≤ Radius1)
Example
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
set_insert(WindowHandle,"xor");
do {
get_mbutton(WindowHandle,&Row,&Column,&Button);
gen_ellipse(&Ellipse,(double)Row,(double)Column,Column / 300.0,
(Row % 100)+1.0,(Column % 50) + 1.0);
disp_region(Ellipse,WindowHandle);
clear_obj(Ellipse);
} while(Button != 1);
Complexity
Runtime complexity: O(Radius1 ∗ 2)
Storage complexity (byte): O(Radius1 ∗ 8)
Result
If the parameter values are correct, the operator gen_ellipse returns the value H_MSG_TRUE. Otherwise
an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_ellipse is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_circle, gen_region_polygon_filled, draw_ellipse
See also
disp_ellipse, set_shape, smallest_circle, reduce_domain
Module
Foundation
gen_empty_region ( Hobject *EmptyRegion )

T_gen_empty_region ( Hobject *EmptyRegion )

Create an empty region.
The operator gen_empty_region creates an empty region. This means that the output parameter contains an
object. Thus, count_obj returns 1. The area of the region is 0. Most of the shape features are undefined (0). It
should be noted that an empty region must not be confused with the empty tuple.
Parameter
. EmptyRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Empty region (no pixels).
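Example

A short sketch illustrating the note above (variable declarations assumed).

gen_empty_region(&EmptyRegion);
count_obj(EmptyRegion,&Num);                    /* Num is 1 */
area_center(EmptyRegion,&Area,&Row,&Column);    /* Area is 0, center undefined */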
Parallelization Information
gen_empty_region is reentrant and processed without parallelization.
Module
Foundation
gen_grid_region ( Hobject *RegionGrid, Hlong RowSteps,
                  Hlong ColumnSteps, const char *Type, Hlong Width, Hlong Height )

T_gen_grid_region ( Hobject *RegionGrid, const Htuple RowSteps,
                    const Htuple ColumnSteps, const Htuple Type, const Htuple Width,
                    const Htuple Height )

Create a region from lines or pixels.
The operator gen_grid_region creates a grid constructed of lines (Type = ’lines’) or pixels (Type =
’points’). In case of ’lines’ continuous lines are returned, in case of ’points’ only the intersections of the lines.
Starting from the pixel (0,0) to the pixel (Height-1,Width-1) the grid is built up at stepping width RowSteps
in line direction and ColumnSteps in column direction. In the ’lines’ mode either RowSteps or
ColumnSteps can be set to zero; in this case only columns or only lines, respectively, are created.
Attention
If a very small pattern is chosen (RowSteps < 4 or ColumnSteps < 4) the created region requires much
storage.
In the ’points’ mode RowSteps and ColumnSteps must not be set to zero.
Parameter
. RegionGrid (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created lines/pixel region.
. RowSteps (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong / double
Step width in line direction or zero.
Default Value : 10
Suggested values : RowSteps ∈ {0, 2, 3, 4, 5, 7, 10, 15, 20, 30, 50, 100}
Typical range of values : 0 ≤ RowSteps ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (RowSteps > 1) ∨ (RowSteps = 0)
. ColumnSteps (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong / double
Step width in column direction or zero.
Default Value : 10
Suggested values : ColumnSteps ∈ {0, 2, 3, 4, 5, 7, 10, 15, 20, 30, 50, 100}
Typical range of values : 0 ≤ ColumnSteps ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (ColumnSteps > 1) ∨ (ColumnSteps = 0)
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of created pattern.
Default Value : "lines"
List of values : Type ∈ {"lines", "points"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Maximum width of pattern.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Maximum height of pattern.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
Example
read_image(&Image,"fabrik");
gen_grid_region(&Raster,10,10,"lines",512,512);
reduce_domain(Image,Raster,&Mask);
sobel_amp(Mask,&GridSobel,"sum_abs",3);
disp_image(GridSobel,WindowHandle);
Complexity
The necessary storage (in bytes) for the region is:
O((ImageWidth/ColumnSteps) ∗ (ImageHeight/RowSteps))
Result
If the parameter values are correct the operator gen_grid_region returns the value H_MSG_TRUE. Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_grid_region is reentrant and processed without parallelization.
Possible Successors
reduce_domain, paint_region
Alternatives
gen_region_line, gen_region_polygon, gen_region_points, gen_region_runs
See also
gen_checker_region, reduce_domain
Module
Foundation
gen_random_region ( Hobject *RegionRandom, Hlong Width, Hlong Height )

T_gen_random_region ( Hobject *RegionRandom, const Htuple Width,
                      const Htuple Height )

Create a random region.
The operator gen_random_region returns a random region. Every pixel in the image area
[0 . . . Width − 1] × [0 . . . Height − 1] is included in the region with probability 0.5. The created region can be
thought of as the result of thresholding an image of pure noise.
This procedure is particularly important for the creation of uncorrelated binary patterns. The random pattern is
created by the C function “nrand48()”.
Attention
If Width and Height are chosen large (> 100) the created region may require much storage space due to the
internally used runlength coding. The gray values of the output region are undefined.
Parameter
. RegionRandom (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created random region with expansion Width x Height.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Maximum horizontal expansion of random region.
Default Value : 128
Suggested values : Width ∈ {16, 32, 50, 64, 100, 128, 256, 300, 400, 512}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Maximum vertical expansion of random region.
Default Value : 128
Suggested values : Height ∈ {16, 32, 50, 64, 100, 128, 256, 300, 400, 512}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height > 0
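Example

A minimal sketch (an open window with handle WindowHandle is assumed).

gen_random_region(&RegionRandom,128,128);
set_part(WindowHandle,0,0,127,127);
disp_region(RegionRandom,WindowHandle);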
Complexity
The worst case for the storage complexity of the created region (in bytes) is: O(Width ∗ Height ∗ 2).
Result
If the parameter values are correct, the operator gen_random_region returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_random_region is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
See also
gen_checker_region, hamming_change_region, add_noise_distribution,
add_noise_white, reduce_domain
Module
Foundation
gen_random_regions ( Hobject *Regions, const char *Type,
                     double WidthMin, double WidthMax, double HeightMin, double HeightMax,
                     double PhiMin, double PhiMax, Hlong NumRegions, Hlong Width,
                     Hlong Height )

T_gen_random_regions ( Hobject *Regions, const Htuple Type,
                       const Htuple WidthMin, const Htuple WidthMax, const Htuple HeightMin,
                       const Htuple HeightMax, const Htuple PhiMin, const Htuple PhiMax,
                       const Htuple NumRegions, const Htuple Width, const Htuple Height )

Create random regions like circles, rectangles and ellipses.
The operator gen_random_regions generates circles, rectangles and ellipses whose parameters are determined
at random. For each shape parameter only a lower and an upper limit is passed; the actual values are chosen at
random between these limits. The position is always random and cannot be determined by parameters. The
parameter NumRegions indicates how many regions shall be created.
Parameter
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Created regions.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of regions to be created.
Default Value : "circle"
List of values : Type ∈ {"circle", "ring", "ellipse", "rectangle1", "rectangle2"}
. WidthMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum width of the region.
Default Value : 10.0
Suggested values : WidthMin ∈ {1.0, 3.0, 5.0, 10.0, 20.0, 40.0, 80.0}
Typical range of values : 1.0 ≤ WidthMin ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : WidthMin > 0
. WidthMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Maximum width of the region.
Default Value : 20.0
Suggested values : WidthMax ∈ {1.0, 3.0, 5.0, 10.0, 20.0, 40.0, 80.0}
Typical range of values : 1.0 ≤ WidthMax ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : WidthMax > 0
. HeightMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum height of the region.
Default Value : 10.0
Suggested values : HeightMin ∈ {1.0, 3.0, 5.0, 10.0, 20.0, 40.0, 80.0}
Typical range of values : 1.0 ≤ HeightMin ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : HeightMin > 0
. HeightMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Maximum height of the region.
Default Value : 30.0
Suggested values : HeightMax ∈ {1.0, 3.0, 5.0, 10.0, 20.0, 40.0, 80.0}
Typical range of values : 1.0 ≤ HeightMax ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : HeightMax > 0
. PhiMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum rotation angle of the region.
Default Value : -0.7854
Suggested values : PhiMin ∈ {0.0, 0.1, 0.3, 0.6, 0.9, 1.2, 1.5}
Typical range of values : 0.0 ≤ PhiMin ≤ 6.28 (lin)
Minimum Increment : 0.001
Recommended Increment : 0.10
Restriction : PhiMin > 0
. PhiMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Maximum rotation angle of the region.
Default Value : 0.7854
Suggested values : PhiMax ∈ {0.0, 0.1, 0.3, 0.6, 0.9, 1.2, 1.5}
Typical range of values : 0.0 ≤ PhiMax ≤ 6.28 (lin)
Minimum Increment : 0.001
Recommended Increment : 0.10
Restriction : PhiMax > 0
. NumRegions (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of regions.
Default Value : 100
Suggested values : NumRegions ∈ {1, 5, 20, 100, 200, 500, 1000, 2000}
Typical range of values : 1 ≤ NumRegions ≤ 2000 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : NumRegions > 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum horizontal expansion.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum vertical expansion.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height > 0
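Example

A minimal sketch (an open window is assumed): 50 random ellipses within a 512 × 512 area.

gen_random_regions(&Regions,"ellipse",10.0,20.0,10.0,30.0,
                   -0.7854,0.7854,50,512,512);
disp_region(Regions,WindowHandle);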
Result
If the parameter values are correct gen_random_regions returns the value H_MSG_TRUE. Otherwise an
exception handling is raised. The clipping according to the current image format is determined by the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_random_regions is reentrant and processed without parallelization.
Possible Successors
paint_region
Module
Foundation
gen_rectangle1 ( Hobject *Rectangle, double Row1, double Column1,
                 double Row2, double Column2 )

T_gen_rectangle1 ( Hobject *Rectangle, const Htuple Row1,
                   const Htuple Column1, const Htuple Row2, const Htuple Column2 )

Create a rectangle parallel to the coordinate axes.
The operator gen_rectangle1 generates one or more rectangles parallel to the coordinate axes which are
described by the upper left corner (Row1, Column1) and the lower right corner (Row2, Column2). More than
one region can be created by passing a tuple of corner points. The coordinate system runs from (0,0) (upper left
corner) to (Width-1,Height-1). See get_system and reset_obj_db in this context.
Parameter
. Rectangle (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Created rectangle.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) double / Hlong
Line of upper left corner point.
Default Value : 30.0
Suggested values : Row1 ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 200.0}
Typical range of values : −∞ ≤ Row1 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) double / Hlong
Column of upper left corner point.
Default Value : 20.0
Suggested values : Column1 ∈ {0.0, 10.0, 20.0, 50.0, 100.0, 200.0}
Typical range of values : −∞ ≤ Column1 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) double / Hlong
Line of lower right corner point.
Default Value : 100.0
Suggested values : Row2 ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0, 511.0}
Typical range of values : −∞ ≤ Row2 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Row2 ≥ Row1
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x(-array) ; (Htuple .) double / Hlong
Column of lower right corner point.
Default Value : 200.0
Suggested values : Column2 ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0, 511.0}
Typical range of values : −∞ ≤ Column2 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Column2 ≥ Column1
Example
/* Contrast improvement in a rectangular region of interest */

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
disp_image(Image,WindowHandle);
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2);
gen_rectangle1(&Rect,(double)Row1,(double)Column1,
(double)Row2,(double)Column2);
reduce_domain(Image,Rect,&Mask);
emphasize(Mask,&Emphasize,9,9,1.0);
disp_image(Emphasize,WindowHandle);
Result
If the parameter values are correct, the operator gen_rectangle1 returns the value H_MSG_TRUE. Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_rectangle1 is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_rectangle2, gen_region_polygon, fill_up, gen_region_runs,
gen_region_points, gen_region_line
See also
draw_rectangle1, reduce_domain, smallest_rectangle1
Module
Foundation
gen_rectangle2 ( Hobject *Rectangle, double Row, double Column,
                 double Phi, double Length1, double Length2 )

T_gen_rectangle2 ( Hobject *Rectangle, const Htuple Row,
                   const Htuple Column, const Htuple Phi, const Htuple Length1,
                   const Htuple Length2 )

Create a rectangle of any orientation.
The operator gen_rectangle2 generates one or more rectangles with the center (Row, Column), the
orientation Phi and the half edge lengths Length1 and Length2. The orientation is given in radians and
indicates the angle between the horizontal axis and Length1 (mathematically positive). The coordinate system
runs from (0,0) (upper left corner) to (Width-1,Height-1). See get_system and reset_obj_db in this context.
More than one region can be created by passing tuples of parameter values.
Attention
The gray values of the output objects are undefined.
Parameter
. Rectangle (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Created rectangle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double / Hlong
Line index of the center.
Default Value : 50.0
Suggested values : Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double / Hlong
Column index of the center.
Default Value : 100.0
Suggested values : Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double / Hlong
Angle of longitudinal axis to the horizontal (in radians).
Default Value : 0.0
Suggested values : Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Typical range of values : -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum Increment : 0.001
Recommended Increment : 0.1
Restriction : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; (Htuple .) double / Hlong
Half width.
Default Value : 200.0
Suggested values : Length1 ∈ {3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0}
Typical range of values : −∞ ≤ Length1 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; (Htuple .) double / Hlong
Half height.
Default Value : 100.0
Suggested values : Length2 ∈ {1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0}
Typical range of values : −∞ ≤ Length2 ≤ ∞ (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
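Example

A minimal sketch (an open window is assumed): a rectangle rotated by approximately 30 degrees.

gen_rectangle2(&Rectangle,200.0,300.0,0.5236,100.0,50.0);
disp_region(Rectangle,WindowHandle);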
Result
If the parameter values are correct the operator gen_rectangle2 returns the value H_MSG_TRUE. Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_rectangle2 is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_rectangle1, gen_region_polygon_filled, gen_region_polygon,
gen_region_points, fill_up
See also
draw_rectangle2, reduce_domain, smallest_rectangle2, gen_ellipse
Module
Foundation
gen_region_contour_xld ( const Hobject Contour, Hobject *Region,
                         const char *Mode )

T_gen_region_contour_xld ( const Hobject Contour, Hobject *Region,
                           const Htuple Mode )

Create a region from an XLD contour.
gen_region_contour_xld creates a region Region from a subpixel XLD contour Contour. The contour
is sampled according to the Bresenham algorithm and influenced by the parameter neighborhood of the operator
set_system. Open contours are closed before converting them to regions. Finally, the parameter Mode defines
whether the region is filled up (filled) or returned by its contour (margin).
Please note that the coordinates of the contour points are rounded to their nearest integer pixel coordi-
nates during the conversion. This may lead to unexpected results when passing the contour obtained by
the operator gen_contour_region_xld to gen_region_contour_xld: When setting Mode of
gen_contour_region_xld to border, the input region of gen_contour_region_xld and the out-
put region of gen_region_contour_xld differ. For example, let us assume that the input region of
gen_contour_region_xld consists of the single pixel (1,1). Then, the resulting contour that is ob-
tained when calling gen_contour_region_xld with Mode set to border consists of the five points
(0.5,0.5), (0.5,1.5), (1.5,1.5), (1.5,0.5), and (0.5,0.5). Consequently, when passing this contour again to
gen_region_contour_xld, the resulting region consists of the points (1,1), (1,2), (2,2), and (2,1).
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Fill mode of the region.
Default Value : "filled"
Suggested values : Mode ∈ {"filled", "margin"}
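Example

A sketch of the round trip discussed above (a previously created Region is assumed; gen_contour_region_xld
is described with the XLD operators).

gen_contour_region_xld(Region,&Contour,"border");
gen_region_contour_xld(Contour,&RegionBack,"filled");
/* RegionBack generally differs slightly from Region, see the note above */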
Parallelization Information
gen_region_contour_xld is reentrant and processed without parallelization.
Possible Predecessors
gen_contour_polygon_xld, gen_contour_polygon_rounded_xld
Alternatives
gen_region_polygon, gen_region_polygon_xld
See also
set_system
Module
Foundation
T_gen_region_histo ( Hobject *Region, const Htuple Histogram,
                     const Htuple Row, const Htuple Column, const Htuple Scale )

Convert a histogram into a region.
gen_region_histo converts a histogram created with gray_histo into a region. The effect of the three
control parameters is the same as in disp_image and set_paint.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Region containing the histogram.
. Histogram (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .histogram-array ; Htuple . Hlong
Input histogram.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . Hlong
Row coordinate of the center of the histogram.
Default Value : 255
Suggested values : Row ∈ {100, 200, 255, 300, 400}
Typical range of values : 0 ≤ Row ≤ 511
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . Hlong
Column coordinate of the center of the histogram.
Default Value : 255
Suggested values : Column ∈ {100, 200, 255, 300, 400}
Typical range of values : 0 ≤ Column ≤ 511
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Scale factor for the histogram.
Default Value : 1
List of values : Scale ∈ {1, 2, 3, 4, 5, 6, 7}
Typical range of values : 1 ≤ Scale ≤ 10 (lin)
Minimum Increment : 1
Recommended Increment : 1
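Example

A sketch (an open window, a previously segmented Region, and the tuple helpers create_tuple/set_i of the
HALCON/C interface are assumed).

T_gray_histo(Region,Image,&AbsoluteHisto,&RelativeHisto);
create_tuple(&Row,1);     set_i(Row,255,0);
create_tuple(&Column,1);  set_i(Column,255,0);
create_tuple(&Scale,1);   set_i(Scale,1,0);
T_gen_region_histo(&HistoRegion,AbsoluteHisto,Row,Column,Scale);
disp_region(HistoRegion,WindowHandle);
destroy_tuple(AbsoluteHisto);
destroy_tuple(RelativeHisto);
destroy_tuple(Row);
destroy_tuple(Column);
destroy_tuple(Scale);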
Result
gen_region_histo returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_region_histo is reentrant and processed without parallelization.
Possible Predecessors
gray_histo
See also
disp_channel, set_paint
Module
Foundation
gen_region_hline ( Hobject *Regions, double Orientation,
                   double Distance )

T_gen_region_hline ( Hobject *Regions, const Htuple Orientation,
                     const Htuple Distance )

Store input lines described in Hesse normal form as regions.
The operator gen_region_hline stores the lines described in Hesse normal form as regions. A line is
determined by the distance from the line to the origin (Distance, corresponds to the length of the normal vector)
and the direction of the normal vector (Orientation, corresponds to the orientation of the line ±π/2). The
directions are defined in such a way that at Orientation = 0 the normal vector lies in the direction of the X
axis, which corresponds to a vertical line. At Orientation = π/2 the normal vector points in the direction of
the Y axis, i.e. a horizontal line is described.
Attention
The lines are clipped to the current maximum image format.
Parameter
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Created regions (one for every line), clipped to maximum image format.
Number of elements : Regions = Distance
. Orientation (input_control) . . . . . . . . . . . . . . . . . . hesseline.angle.rad(-array) ; (Htuple .) double / Hlong
Orientation of the normal vector in radians.
Default Value : 0.0
Suggested values : Orientation ∈ {-0.78, 0.0, 0.78, 1.57}
Typical range of values : −∞ ≤ Orientation ≤ ∞ (lin)
Recommended Increment : 0.02
Number of elements : Orientation = Distance
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . hesseline.distance(-array) ; (Htuple .) double / Hlong
Distance from the line to the coordinate origin (0.0).
Default Value : 200
Suggested values : Distance ∈ {10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Distance ≤ ∞ (lin)
Recommended Increment : 1
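Example

A minimal sketch (an open window is assumed): a vertical line through column 200 and a horizontal line through
row 100.

gen_region_hline(&RegionVertical,0.0,200.0);
gen_region_hline(&RegionHorizontal,1.5708,100.0);
disp_region(RegionVertical,WindowHandle);
disp_region(RegionHorizontal,WindowHandle);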
Result
The operator gen_region_hline always returns the value H_MSG_TRUE.
Parallelization Information
gen_region_hline is reentrant and processed without parallelization.
Alternatives
gen_region_line
See also
hough_lines
Module
Foundation
gen_region_line ( Hobject *RegionLines, Hlong BeginRow, Hlong BeginCol,
                  Hlong EndRow, Hlong EndCol )

T_gen_region_line ( Hobject *RegionLines, const Htuple BeginRow,
                    const Htuple BeginCol, const Htuple EndRow, const Htuple EndCol )

Store input lines as regions.
The operator gen_region_line stores the given lines (with starting point [BeginRow,BeginCol] and end-
ing point [EndRow, EndCol]) as region.
Parameter
. RegionLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Created regions.
. BeginRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) Hlong
Line coordinates of the starting points of the input lines.
Default Value : 100
Suggested values : BeginRow ∈ {10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ BeginRow ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. BeginCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) Hlong
Column coordinates of the starting points of the input lines.
Default Value : 50
Suggested values : BeginCol ∈ {10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ BeginCol ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. EndRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) Hlong
Line coordinates of the ending points of the input lines.
Default Value : 150
Suggested values : EndRow ∈ {50, 100, 200, 300, 400, 500}
Typical range of values : −∞ ≤ EndRow ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. EndCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; (Htuple .) Hlong
Column coordinates of the ending points of the input lines.
Default Value : 250
Suggested values : EndCol ∈ {50, 100, 200, 300, 400, 500}
Typical range of values : −∞ ≤ EndCol ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
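Example

A minimal sketch (an open window is assumed) using the default coordinates.

gen_region_line(&RegionLines,100,50,150,250);
disp_region(RegionLines,WindowHandle);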
Result
The operator gen_region_line always returns the value H_MSG_TRUE. The clipping according to the cur-
rent image format is determined by the operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_region_line is reentrant and processed without parallelization.
Possible Predecessors
split_skeleton_lines
Alternatives
gen_region_hline
Module
Foundation
gen_region_points ( Hobject *Region, Hlong Rows, Hlong Columns )

T_gen_region_points ( Hobject *Region, const Htuple Rows,
                      const Htuple Columns )

Store individual pixels as image region.
The operator gen_region_points creates a region described by a number of pixels. The pixels do not have to
be stored in a fixed order, but the best runtime behavior is obtained when the pixels are stored in ascending order.
The order is as follows:

(l1, c1) ≤ (l2, c2) := (l1 < l2) ∨ ((l1 = l2) ∧ (c1 ≤ c2))

The indicated coordinates stand for two consecutive pixels in the tuple.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y(-array) ; (Htuple .) Hlong
Lines of the pixels in the region.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x(-array) ; (Htuple .) Hlong
Columns of the pixels in the region.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
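Example

A minimal sketch: the simple version stores a single pixel; for several pixels the tuple version is used (the tuple
helpers create_tuple/set_i of the HALCON/C interface are assumed).

/* single pixel (100,100) */
gen_region_points(&Region,100,100);
/* three pixels, given in ascending order */
create_tuple(&Rows,3);
create_tuple(&Columns,3);
set_i(Rows,10,0);  set_i(Columns,10,0);
set_i(Rows,10,1);  set_i(Columns,11,1);
set_i(Rows,11,2);  set_i(Columns,10,2);
T_gen_region_points(&Region3,Rows,Columns);
destroy_tuple(Rows);
destroy_tuple(Columns);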
Complexity
F shall be the number of pixels. If the pixels are sorted in ascending order the runtime complexity is: O(F ),
otherwise O(log(F ) ∗ F ).
Result
The operator gen_region_points returns the value H_MSG_TRUE if the pixels are located within the image
format. Otherwise an exception handling is raised. The clipping according to the current image format is set via
the operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the
clipping or by an empty input) the operator set_system(’store_empty_region’,<true/false>)
determines whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_points is reentrant and processed without parallelization.
Possible Predecessors
get_region_points
Possible Successors
paint_region, reduce_domain
Alternatives
gen_region_polygon, gen_region_runs, gen_region_line
See also
reduce_domain
Module
Foundation
T_gen_region_polygon ( Hobject *Region, const Htuple Rows,
                       const Htuple Columns )

Store a polygon as an image object.
The operator gen_region_polygon creates a region from a polyline described by a series of line and
column coordinates. The created region consists of the pixels of the line segments defined by consecutive base
points; between the base points the contour is interpolated linearly.
Attention
The region is not automatically closed and not filled. The gray values of the output regions are undefined.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; Htuple . Hlong
Line indices of the base points of the region contour.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; Htuple . Hlong
Column indices of the base points of the region contour.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
Example

/* Polygon approximation with tolerance 7; the Tolerance value is passed as
   a tuple (create_tuple/set_i from the HALCON/C tuple interface) */
create_tuple(&Tolerance,1);
set_i(Tolerance,7,0);
T_get_region_polygon(Region,Tolerance,&Row,&Column);
destroy_tuple(Tolerance);
/* store it as a region */
T_gen_region_polygon(&Pol,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
/* fill up the hole */
fill_up(Pol,&Filled);
Result
If the base points are correct the operator gen_region_polygon returns the value H_MSG_TRUE. Other-
wise an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or
by an empty input) the operator set_system(’store_empty_region’,<true/false>) determines
whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_polygon is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon_filled, gen_region_points, gen_region_runs
See also
fill_up, reduce_domain, get_region_polygon, draw_polygon
Module
Foundation
T_gen_region_polygon_filled ( Hobject *Region, const Htuple Rows,
                              const Htuple Columns )

Store a polygon as a “filled” region.
The operator gen_region_polygon_filled creates a region from a polygon containing the cor-
ner points of the region (line and column coordinates) either clockwise or anti-clockwise. Contrary to
gen_region_polygon a “filled” region is returned here.
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.y-array ; Htuple . Hlong
Line indices of the base points of the region contour.
Default Value : 100
Suggested values : Rows ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Rows ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . polygon.x-array ; Htuple . Hlong
Column indices of the base points of the region contour.
Default Value : 100
Suggested values : Columns ∈ {0, 10, 30, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Columns ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Columns = Rows
Example

/* Polygon approximation with tolerance 7 (Tolerance passed as a tuple) */
create_tuple(&Tolerance,1);
set_i(Tolerance,7,0);
T_get_region_polygon(Region,Tolerance,&Row,&Column);
destroy_tuple(Tolerance);
T_gen_region_polygon_filled(&Pol,Row,Column);
/* fill up with original gray value */
reduce_domain(Image,Pol,&New);
Result
If the base points are correct the operator gen_region_polygon_filled returns the value H_MSG_TRUE.
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the
clipping or by an empty input) the operator set_system(’store_empty_region’,<true/false>)
determines whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_polygon_filled is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon, gen_region_points, draw_polygon
See also
gen_region_polygon, reduce_domain, get_region_polygon, gen_region_runs
Module
Foundation
gen_region_polygon_xld ( const Hobject Polygon, Hobject *Region,
                         const char *Mode )

T_gen_region_polygon_xld ( const Hobject Polygon, Hobject *Region,
                           const Htuple Mode )

Create a region from an XLD polygon.
gen_region_polygon_xld creates a region Region from a subpixel XLD polygon Polygon. The polygon
is sampled according to the Bresenham algorithm and influenced by the parameter neighborhood of the operator
set_system. Open polygons are closed before converting them to regions. Finally, the parameter Mode defines
whether the region is filled up (filled) or returned by its contour (margin).
Parameter
. Polygon (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly ; Hobject
Input polygon.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Fill mode of the region.
Default Value : "filled"
Suggested values : Mode ∈ {"filled", "margin"}
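Example

A sketch (the image file, the edge filter parameters, and the approximation tolerance are assumptions chosen only
for illustration).

read_image(&Image,"fabrik");
edges_sub_pix(Image,&Edges,"canny",1.5,20,40);
gen_polygons_xld(Edges,&Polygons,"ramer",2.0);
gen_region_polygon_xld(Polygons,&Regions,"margin");
disp_region(Regions,WindowHandle);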
Parallelization Information
gen_region_polygon_xld is reentrant and processed without parallelization.
Possible Predecessors
gen_polygons_xld
Alternatives
gen_region_polygon, gen_region_contour_xld
See also
set_system
Module
Foundation
gen_region_runs ( Hobject *Region, Hlong Row, Hlong ColumnBegin,
                  Hlong ColumnEnd )

T_gen_region_runs ( Hobject *Region, const Htuple Row,
                    const Htuple ColumnBegin, const Htuple ColumnEnd )

Create an image region from a runlength coding.
The operator gen_region_runs creates a region described by the input runlength structure. The runlength
representation is obtained by examining a region line by line with ascending line number (= from “top” to “bottom”).
Every line is traversed from left to right (ascending column number), storing the starting and ending points of all
region segments (= runs). Thus a region can be described by a sequence of runs, a run being defined by its line
number as well as the column numbers of its starting and ending points.
The storing is fastest when the runs are sorted. The order is as follows:

(l1, b1, e1) ≤ (l2, b2, e2) := (l1 < l2) ∨ ((l1 = l2) ∧ (b1 ≤ b2))
Parameter
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.y(-array) ; (Htuple .) Hlong
Lines of the runs.
Default Value : 100
Suggested values : Row ∈ {0, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
. ColumnBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x1(-array) ; (Htuple .) Hlong
Columns of the starting points of the runs.
Default Value : 50
Suggested values : ColumnBegin ∈ {0, 50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ ColumnBegin ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
Number of elements : ColumnBegin = Row
. ColumnEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chord.x2(-array) ; (Htuple .) Hlong
Columns of the ending points of the runs.
Default Value : 200
Suggested values : ColumnEnd ∈ {50, 100, 200, 300, 500}
Typical range of values : −∞ ≤ ColumnEnd ≤ ∞ (lin)
Minimum Increment : 1
Recommended Increment : 10
Number of elements : ColumnEnd = Row
Restriction : ColumnEnd ≥ ColumnBegin
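Example

A minimal sketch using the simple-mode call, which creates a region consisting of a single run in line 100
from column 50 to column 200; several runs per region require the tuple version T_gen_region_runs.

gen_region_runs(&Region,100,50,200);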
Complexity
F shall be the number of pixels. If the pixels are sorted in ascending order the runtime complexity is: O(F ),
otherwise it is O(log(F ) ∗ F ).
Result
If the data is correct the operator gen_region_runs returns the value H_MSG_TRUE, otherwise an exception
handling is raised. The clipping according to the current image format is set via the operator set_system
(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty
input) the operator set_system(’store_empty_region’,<true/false>) determines whether the
region is returned or an empty object tuple.
Parallelization Information
gen_region_runs is reentrant and processed without parallelization.
Possible Predecessors
get_region_runs
Alternatives
gen_region_points, gen_region_polygon, gen_region_line,
gen_region_polygon_filled
See also
reduce_domain
Module
Foundation

label_to_region ( const Hobject LabelImage, Hobject *Regions )


T_label_to_region ( const Hobject LabelImage, Hobject *Regions )

Extract regions with equal gray values from an image.


label_to_region segments an image into regions of equal gray value. One output region is generated for each
gray value occurring in the image. This is similar to calling threshold multiple times, and accumulating the
results with concat_obj. Another related operator is regiongrowing. However, label_to_region
does not perform a connection operation on the resulting regions, i.e., they may be disconnected. A typical
application of label_to_region is the segmentation of label images, hence its name.


The number of output regions is limited by the system parameter ’max_outp_obj_par’, which can be read via
get_system(::’max_outp_obj_par’:<Number>).

Attention
label_to_region is not implemented for images of type ’real’. The input images must not contain negative
gray values.
Parameter

. LabelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / int4


Label image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions having a constant gray value.
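Example

A minimal sketch; the byte image ’fabrik’ (used in the other examples of this chapter) is simply interpreted
as a label image, so that one region is returned for every gray value that occurs in it.

read_image(&Image,"fabrik");
label_to_region(Image,&Regions);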
Complexity
Let x1 be the minimum x-coordinate, x2 the maximum x-coordinate, y1 be the minimum y-coordinate, and y2 the
maximum y-coordinate of a particular gray value. Furthermore, let N be the number of different gray values in the
image. Then the runtime complexity is O(N ∗ (x2 − x1 + 1) ∗ (y2 − y1 + 1))
Result
label_to_region returns H_MSG_TRUE if the gray values lie within a correct range. The behavior with re-
spect to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
label_to_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
See also
threshold, concat_obj, regiongrowing, region_to_label
Module
Foundation

12.3 Features
area_center ( const Hobject Regions, Hlong *Area, double *Row,
double *Column )
T_area_center ( const Hobject Regions, Htuple *Area, Htuple *Row,
Htuple *Column )

Area and center of regions.


The operator area_center calculates the area and the center of the input regions. The area is defined as
the number of pixels of a region. The center is calculated as the mean value of the line or column coordinates,
respectively, of all pixels.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of the input region. In case of empty region all parameters have the value 0.0 if no other behavior was
set (see set_system).
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Area of the region.


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *


Line index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column index of the center.
Example

threshold(&Image,&Seg,120.0,255.0);
connection(Seg,&Connected);
T_area_center(Connected,&Area,&Row,&Column);

Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator area_center returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
area_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape
Module
Foundation

circularity ( const Hobject Regions, double *Circularity )


T_circularity ( const Hobject Regions, Htuple *Circularity )

Shape factor for the circularity (similarity to a circle) of a region.


The operator circularity calculates the similarity of the input region with a circle.

Calculation: If F is the area of the region and max is the maximum distance from the center to all contour pixels,
the shape factor C is defined as:
C = F / (max² ∗ π)

The shape factor C of a circle is 1. If the region is long or has holes C is smaller than 1. The operator
circularity especially responds to large bulges, holes and unconnected regions.
In case of an empty region the operator circularity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the shape factor are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Circularity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Roundness of the input region(s).
Assertion : (0 ≤ Circularity) ∧ (Circularity ≤ 1.0)


Example

/* Comparison between shape factors of rectangle, circle and ellipse */


gen_rectangle1(&R1,10.0,10.0,20.0,20.0);
gen_rectangle2(&R2,100.0,100.0,0.0,100.0,20.0);
gen_ellipse(&E,100.0,100.0,0.0,100.0,20.0);
gen_circle(&C,100.0,100.0,20.0);
circularity(R1,&R1_);
circularity(R2,&R2_);
circularity(E,&E_);
circularity(C,&C_);
printf("quadrate: %g\n",R1_);
printf("rectangle: %g\n",R2_);
printf("ellipse: %g\n",E_);
printf("circle: %g\n",C_);

Result
The operator circularity returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
circularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
roundness, compactness, convexity, eccentricity
See also
area_center, select_shape
Module
Foundation

compactness ( const Hobject Regions, double *Compactness )


T_compactness ( const Hobject Regions, Htuple *Compactness )

Shape factor for the compactness of a region.


The operator compactness calculates the compactness of the input regions.

Calculation: If L is the length of the contour (see contlength) and F the area of the region the shape factor
C is defined as:
C = L² / (4 ∗ F ∗ π)
The shape factor C of a circle is 1. If the region is long or has holes C is larger than 1. The operator
compactness responds to the course of the contour (roughness) and to holes. In case of an empty region
the operator compactness returns the value 0 if no other behavior was set (see set_system). If more than
one region is passed the numerical values of the shape factor are stored in a tuple, the position of a value in the
tuple corresponding to the position of the region in the input tuple.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Compactness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Compactness of the input region(s).
Assertion : (Compactness ≥ 1.0) ∨ (Compactness = 0)
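Example

A short sketch, analogous to the example for circularity, comparing a square and a circle; the variable
names are illustrative.

gen_rectangle1(&R,10.0,10.0,20.0,20.0);
gen_circle(&C,100.0,100.0,20.0);
compactness(R,&CompR);
compactness(C,&CompC);
printf("square: %g\ncircle: %g\n",CompR,CompC);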


Result
The operator compactness returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
compactness is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
circularity, convexity, eccentricity
See also
contlength, area_center, select_shape
Module
Foundation

connect_and_holes ( const Hobject Regions, Hlong *NumConnected,
    Hlong *NumHoles )

T_connect_and_holes ( const Hobject Regions, Htuple *NumConnected,
    Htuple *NumHoles )

Number of connection components and holes.


The operator connect_and_holes calculates the number of connection components and the number of holes
of each region of Regions.
If more than one region is passed the numerical values of the output control parameters NumConnected and
NumHoles are each stored in a tuple, the position of a value in the tuple corresponding to the position of the
region in the input tuple.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. NumConnected (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of connection components of a region.
. NumHoles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of holes of a region.
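Example

A minimal sketch; image name and threshold values follow the other examples of this chapter.

read_image(&Image,"fabrik");
threshold(Image,&Seg,120.0,255.0);
connect_and_holes(Seg,&NumConnected,&NumHoles);
printf("components: %ld holes: %ld\n",(long)NumConnected,(long)NumHoles);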
Result
The operator connect_and_holes returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>).
Parallelization Information
connect_and_holes is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
euler_number
See also
connection, fill_up, fill_up_shape, union1
Module
Foundation


contlength ( const Hobject Regions, double *ContLength )


T_contlength ( const Hobject Regions, Htuple *ContLength )

Contour length of a region.


The operator contlength calculates the total length of the contour (sum of all connection components of
the region) for each region of Regions. The distance between two neighboring contour points parallel to the
coordinate axes is rated 1, the distance in the diagonal is rated √2. If more than one region is passed the numerical
values of the contour length are stored in a tuple, the position of a value in the tuple corresponding to the position
of the region in the input tuple. In case of an empty region the operator contlength returns the value 0.
Attention
The contour of holes is not calculated.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. ContLength (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Contour length of the input region(s).
Assertion : ContLength ≥ 0
Example (Syntax: C++)

#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main (int argc, char *argv[])


{
if (argc < 2)
{
cout << "Usage: " << argv[0] << " <# of regions> " << endl;
return (-1);
}

HWindow w;
HRegionArray reg;

int NumOfElements = atoi (argv[1]);

cout << "Draw " << NumOfElements << " regions " << endl;

for (int i=0; i < NumOfElements; i++)


{
reg[i] = w.DrawRegion ();
}

Tuple circ = reg.Circularity ();


Tuple cont = reg.Contlength ();

for (int i = 0; i < NumOfElements; i++)


{
cout << "Circularity of " << i+1 << ". region = " << circ[i].D();
cout << "\t\t Contour Length of" << i+1 <<
". region = " << cont[i].D() << endl;
}


w.Click ();
return(0);
}

Result
The operator contlength returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
contlength is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
get_region_contour
Alternatives
compactness
See also
area_center, get_region_contour
Module
Foundation

convexity ( const Hobject Regions, double *Convexity )


T_convexity ( const Hobject Regions, Htuple *Convexity )

Shape factor for the convexity of a region.


The operator convexity calculates the convexity of each input region of Regions.

Calculation: If Fc is the area of the convex hull and Fo the original area of the region the shape factor C is defined
as:

C = Fo / Fc
The shape factor C is 1 if the region is convex (e.g., rectangle, circle etc.). If there are indentations or holes C is
smaller than 1.
In case of an empty region the operator convexity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the convexity are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Convexity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Convexity of the input region(s).
Assertion : Convexity ≤ 1
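Example

A short sketch comparing a convex rectangle with a rectangle from which a notch has been cut out; the
region operator difference is assumed to be available for creating the notched region.

gen_rectangle1(&R1,10.0,10.0,110.0,110.0);
gen_rectangle1(&R2,40.0,40.0,80.0,140.0);
difference(R1,R2,&Notched);   /* assumed region difference operator */
convexity(R1,&Conv1);
convexity(Notched,&Conv2);
printf("rectangle: %g\nnotched rectangle: %g\n",Conv1,Conv2);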
Result
The operator convexity returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
convexity is reentrant and automatically parallelized (on tuple level).


Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape, area_center, shape_trans
Module
Foundation

diameter_region ( const Hobject Regions, Hlong *Row1, Hlong *Column1,
    Hlong *Row2, Hlong *Column2, double *Diameter )

T_diameter_region ( const Hobject Regions, Htuple *Row1,
    Htuple *Column1, Htuple *Row2, Htuple *Column2, Htuple *Diameter )

Maximal distance between two boundary points of a region.


The operator diameter_region calculates the maximal distance between two boundary points of a region.
The coordinates of these two extremes and the distance between them will be returned.
Attention
If the region is empty, the results of Row1, Column1, Row2 and Column2 (all of them = 0) may lead to confu-
sion.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; (Htuple .) Hlong *
Row index of the first extreme point.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; (Htuple .) Hlong *
Column index of the first extreme point.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; (Htuple .) Hlong *
Row index of the second extreme point.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .line.end.x(-array) ; (Htuple .) Hlong *
Column index of the second extreme point.
. Diameter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Distance of the two extreme points.
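Example

A minimal sketch using a synthetic rectangle; its diameter corresponds to the diagonal of the rectangle.

gen_rectangle1(&Rect,10.0,20.0,50.0,80.0);
diameter_region(Rect,&Row1,&Column1,&Row2,&Column2,&Diameter);
printf("diameter: %g\n",Diameter);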
Complexity
If F is the area of a region, the runtime complexity amounts to O(√F) on average.
Result
The operator diameter_region returns the value H_MSG_TRUE, if the input is not empty. The reaction
to empty input (no input regions are available) may be determined with the help of the operator set_system
(’no_object_result’,<Result>). The reaction concerning an empty region (region is the empty set)
will be determined by the operator set_system(’empty_region_result’,<Result>). If necessary
an exception handling is raised.
Parallelization Information
diameter_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_line
Alternatives
smallest_rectangle2
Module
Foundation


eccentricity ( const Hobject Regions, double *Anisometry,
    double *Bulkiness, double *StructureFactor )

T_eccentricity ( const Hobject Regions, Htuple *Anisometry,
    Htuple *Bulkiness, Htuple *StructureFactor )

Shape features derived from the ellipse parameters.


The operator eccentricity calculates three shape features derived from the geometric moments.
Definition: If the parameters Ra, Rb and the area A of the region are given (see elliptic_axis), the following
applies:
Anisometry = Ra / Rb

Bulkiness = (π · Ra · Rb) / A

StructureFactor = Anisometry · Bulkiness − 1

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as math-
ematical, infinitely small points that are represented by the center of the pixels (see the documentation of
elliptic_axis). This can lead to non-empty regions that have Rb = 0. In these cases, the output features
that require a division by Rb are set to 0. In particular, regions that contain a single point or regions whose points
lie exactly on a straight line (e.g., one pixel high horizontal regions or one pixel wide vertical regions) have an
anisometry of 0.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Anisometry (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Shape feature (in case of a circle = 1.0).
Assertion : Anisometry ≥ 1.0
. Bulkiness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Calculated shape feature.
. StructureFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Calculated shape feature.
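Example

A minimal sketch using a synthetic ellipse with radii 100 and 20, for which an anisometry of roughly 5 is to
be expected.

gen_ellipse(&E,200.0,200.0,0.0,100.0,20.0);
eccentricity(E,&Anisometry,&Bulkiness,&StructureFactor);
printf("anisometry: %g\n",Anisometry);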
Complexity
If F is the area of the region the mean runtime complexity is O(√F).
Result
The operator eccentricity returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
eccentricity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
See also
elliptic_axis, moments_region_2nd, select_shape, area_center
Module
Foundation


elliptic_axis ( const Hobject Regions, double *Ra, double *Rb,
    double *Phi )

T_elliptic_axis ( const Hobject Regions, Htuple *Ra, Htuple *Rb,
    Htuple *Phi )

Parameters of the equivalent ellipse.


The operator elliptic_axis calculates the radii and the orientation of the ellipse having the “same orienta-
tion” and the “same side relation” as the input region. Several input regions can be passed in Regions as tuples.
The length of the main radius Ra and the secondary radius Rb as well as the orientation of the main axis with
regard to the horizontal (Phi) are determined. The angle is indicated in arc measure.
Calculation:
If the moments M20, M02, and M11 are normalized to the area (see moments_region_2nd), the radii Ra and
Rb are calculated as:

Ra = √(8 · (M20 + M02 + √((M20 − M02)² + 4 · M11²))) / 2

Rb = √(8 · (M20 + M02 − √((M20 − M02)² + 4 · M11²))) / 2

The orientation Phi is defined by:

Phi = −0.5 · atan2(2 · M11, M02 − M20)

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system
(’no_object_result’,<Result>)).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as mathemat-
ical, infinitely small points that are represented by the center of the pixels. This means that Ra and Rb can assume
the value 0. In particular, for an empty region and a region containing a single point Ra = Rb = 0 is returned.
Furthermore, for regions whose points lie exactly on a straight line (e.g., one pixel high horizontal regions or one
pixel wide vertical regions), Rb = 0 is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Main radius (normalized to the area).
Assertion : Ra ≥ 0.0
. Rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Secondary radius (normalized to the area).
Assertion : (Rb ≥ 0.0) ∧ (Rb ≤ Ra)
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Angle between main radius and x axis (arc measure).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
Example

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
T_elliptic_axis(Seg,&Ra,&Rb,&Phi);
T_area_center(Seg,_t,&Row,&Column);
T_gen_ellipse(&Ellipses,Row,Column,Phi,Ra,Rb);
set_draw(WindowHandle,"margin");
disp_region(Ellipses,WindowHandle);


Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator elliptic_axis returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
elliptic_axis is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
smallest_rectangle2, orientation_region
See also
moments_region_2nd, select_shape, set_shape
References
R. Haralick, L. Shapiro “Computer and Robot Vision” Addison-Wesley, 1992, pp. 73-75
Module
Foundation

euler_number ( const Hobject Regions, Hlong *EulerNumber )


T_euler_number ( const Hobject Regions, Htuple *EulerNumber )

Calculate the Euler number.


The procedure euler_number calculates the Euler number, i.e., the difference between the number of connec-
tion components and the number of holes.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. EulerNumber (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Calculated Euler number.
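Example

A short sketch: a rectangle with a rectangular hole has one connected component and one hole, so its Euler
number is 0; the region operator difference is assumed to be available for creating the hole.

gen_rectangle1(&R1,10.0,10.0,110.0,110.0);
gen_rectangle1(&R2,40.0,40.0,80.0,80.0);
difference(R1,R2,&Ring);   /* assumed region difference operator */
euler_number(Ring,&EulerNumber);
printf("Euler number: %ld\n",(long)EulerNumber);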
Result
The operator euler_number returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
euler_number is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
connect_and_holes
Module
Foundation


T_find_neighbors ( const Hobject Regions1, const Hobject Regions2,
    const Htuple MaxDistance, Htuple *RegionIndex1, Htuple *RegionIndex2 )

Search direct neighbors.


The operator find_neighbors determines neighboring regions with Regions1 and Regions2 containing
the regions to be examined. Regions1 can have three different states:

• Regions1 is empty:
In this case all regions in Regions2 are permutatively checked for neighborhood.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
Here all regions at the n-th position in Regions1 and Regions2 are checked for the neighboring relation.

The operator find_neighbors uses the chessboard distance between neighboring regions. It can be specified
by the parameter MaxDistance. Neighboring regions are located at the n-th position in RegionIndex1 and
RegionIndex2, i.e., the region with index RegionIndex1[n] from Regions1 is the neighbor of the region
with index RegionIndex2[n] from Regions2.
Attention
Covered regions are not found!
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximal distance of regions.
Default Value : 1
Suggested values : MaxDistance ∈ {1, 2, 3, 4, 5, 6, 7, 8, 10, 15, 20, 50}
Typical range of values : 1 ≤ MaxDistance ≤ 255
Minimum Increment : 1
Recommended Increment : 1
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the found regions from Regions1.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the found regions from Regions2.
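Example

A sketch in which all connected components of a segmentation are checked against each other (empty
Regions1); the tuple routines create_tuple and set_i as well as the operator gen_empty_obj are assumed to
be available as described elsewhere in this manual.

read_image(&Image,"fabrik");
threshold(Image,&Seg,120.0,255.0);
connection(Seg,&Connected);
gen_empty_obj(&Empty);                /* assumed: creates an empty object */
create_tuple(&MaxDistance,1);         /* assumed tuple helper */
set_i(MaxDistance,5,0);               /* assumed tuple helper */
T_find_neighbors(Empty,Connected,MaxDistance,&RegionIndex1,&RegionIndex2);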
Result
The operator find_neighbors returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
find_neighbors is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
See also
spatial_relation, select_region_spatial, expand_region, distance_transform,
interjacent, boundary
Module
Foundation


get_region_index ( const Hobject Regions, Hlong Row, Hlong Column,
    Hlong *Index )

T_get_region_index ( const Hobject Regions, const Htuple Row,
    const Htuple Column, Htuple *Index )

Index of all regions containing a given pixel.


The operator get_region_index returns the index of all regions in Regions (range of values: 1 to n)
containing the test pixel (Row,Column), i.e.:

Regions[n] ∩ {(Row, Column)} ≠ ∅

The returned indices can be used, e.g., in select_obj to select the regions containing the test pixel.
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; (Htuple .) Hlong
Line index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; (Htuple .) Hlong
Column index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Index of the regions containing the test pixel.
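Example

A minimal sketch: the circle contains the test pixel (100,100), so the index 1 is returned.

gen_circle(&C,100.0,100.0,20.0);
get_region_index(C,100,100,&Index);
printf("index: %ld\n",(long)Index);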
Complexity
If F is the area of the region and N is the number of regions the mean runtime complexity is O(ln(√F) ∗ N).
Result
The operator get_region_index returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_region_index is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
select_region_point
See also
get_mbutton, get_mposition, test_region_point
Module
Foundation

T_get_region_thickness ( const Hobject Region, Htuple *Thickness,
    Htuple *Histogramm )

Access the thickness of a region along the main axis.


The operator get_region_thickness calculates the thickness of the regions along the main axis (see
elliptic_axis) for each pixel of the section. The thickness at one point on the main axis is defined as the
distance between those intersections of the contour with the perpendicular to the main axis at the respective point which are


the furthest apart. Additionally the operator get_region_thickness returns the Histogramm of the thick-
nesses of the region. The length of the histogram corresponds to the largest occurring thickness in the observed
region.
Attention
Only one region may be passed. If the region has several connection components, only the first one is investigated.
All other components are ignored.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Region to be analysed.
. Thickness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Thickness of the region along its main axis.
. Histogramm (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Histogram of the thickness of the region along its main axis.
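Example

A minimal sketch using a synthetic ellipse; the resulting tuples can be evaluated with the tuple access routines
of the HALCON/C interface.

gen_ellipse(&E,100.0,200.0,0.0,100.0,20.0);
T_get_region_thickness(E,&Thickness,&Histogramm);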
Result
The operator get_region_thickness returns the value H_MSG_TRUE if exactly one region is passed.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>).
Parallelization Information
get_region_thickness is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, connection, select_shape, select_obj
See also
copy_obj, elliptic_axis
Module
Foundation

hamming_distance ( const Hobject Regions1, const Hobject Regions2,
    Hlong *Distance, double *Similarity )

T_hamming_distance ( const Hobject Regions1, const Hobject Regions2,
    Htuple *Distance, Htuple *Similarity )

Hamming distance between two regions.


The operator hamming_distance returns the hamming distance between two regions, i.e., the number of pixels
of the regions which are different (Distance), i.e., the number of pixels contained in one region but not in the
other:

Distance = |Regions1 \ Regions2| + |Regions2 \ Regions1|

The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:

Similarity = 1 − Distance / (|Regions1| + |Regions2|)

If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.


Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Hamming distance of two regions.
Assertion : Distance ≥ 0
. Similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
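Example

A minimal sketch comparing two circles of equal size whose centers are shifted by two pixels.

gen_circle(&C1,100.0,100.0,20.0);
gen_circle(&C2,100.0,102.0,20.0);
hamming_distance(C1,C2,&Distance,&Similarity);
printf("distance: %ld similarity: %g\n",(long)Distance,Similarity);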
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
hamming_distance returns the value H_MSG_TRUE if the number of objects in both parameters is the same and
is not 0. The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is
set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hamming_distance is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
intersection, complement, area_center
See also
hamming_change_region
Module
Foundation

hamming_distance_norm ( const Hobject Regions1, const Hobject Regions2,
    const char *Norm, Hlong *Distance, double *Similarity )

T_hamming_distance_norm ( const Hobject Regions1, const Hobject Regions2,
    const Htuple Norm, Htuple *Distance, Htuple *Similarity )

Hamming distance between two regions using normalization.


The operator hamming_distance_norm returns the hamming distance between two regions, i.e., the num-
ber of pixels of the regions which are different (Distance). Before calculating the difference the region in
Regions1 is normalized onto the regions in Regions2. The result is the number of pixels contained in one
region but not in the other:

Distance = |Norm(Regions1) \ Regions2| + |Regions2 \ Norm(Regions1)|

The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:

Similarity = 1 − Distance / (|Norm(Regions1)| + |Regions2|)

The following types of normalization are available:


’center’: The region is moved so that both regions have the same center of gravity.


If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Type of normalization.
Default Value : "center"
List of values : Norm ∈ {"center"}
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Hamming distance of two regions.
Assertion : Distance ≥ 0
. Similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
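Example

A sketch analogous to the example for hamming_distance; due to the ’center’ normalization the shift
between the two circles no longer contributes to the distance.

gen_circle(&C1,100.0,100.0,20.0);
gen_circle(&C2,100.0,102.0,20.0);
hamming_distance_norm(C1,C2,"center",&Distance,&Similarity);
printf("distance: %ld similarity: %g\n",(long)Distance,Similarity);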
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
hamming_distance_norm returns the value H_MSG_TRUE if the number of objects in both parameters is the same
and is not 0. The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
hamming_distance_norm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
intersection, complement, area_center
See also
hamming_change_region
Module
Foundation

inner_circle ( const Hobject Regions, double *Row, double *Column,
    double *Radius )

T_inner_circle ( const Hobject Regions, Htuple *Row, Htuple *Column,
    Htuple *Radius )

Largest inner circle of a region.


The operator inner_circle determines the largest inner circle of a region. This is the biggest discrete circle
region that completely fits into the region. For this circle the center (Row, Column) and the radius (Radius) are
calculated. If the position of the circle is ambiguous, the "first possible" position (as far upper left as possible) is
returned.
The output of the procedure is chosen in such a way that it can be used as an input for the HALCON procedures
disp_circle, gen_circle, and gen_ellipse_contour_xld.
If several regions are passed in Regions corresponding tuples are returned as output parameters. In case of an
empty input region all parameters have the value 0.0 if no other behavior was set with set_system.


Attention
If several inner circles are present at a region only the most upper left solution is returned.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double *
Line index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double *
Column index of the center.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double *
Radius of the inner circle.
Assertion : Radius ≥ 0
Example

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_inner_circle(H,&Row,&Column,&Radius);
T_gen_circle(&Circles,Row,Column,Radius);
set_draw(WindowHandle,"margin");
disp_region(Circles,WindowHandle);

Complexity
If F is the area of the region and R is the radius of the inner circle the runtime complexity is O(√F ∗ R).
Result
The operator inner_circle returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
inner_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
erosion_circle, inner_rectangle1
See also
set_shape, select_shape, smallest_circle
Module
Foundation

inner_rectangle1 ( const Hobject Regions, Hlong *Row1, Hlong *Column1,
    Hlong *Row2, Hlong *Column2 )

T_inner_rectangle1 ( const Hobject Regions, Htuple *Row1,
    Htuple *Column1, Htuple *Row2, Htuple *Column2 )

Largest inner rectangle of a region.


The operator inner_rectangle1 determines the largest axis-parallel rectangle that fits into a region. The
rectangle is described by the coordinates of the corner pixels (Row1, Column1, Row2, Column2).


If more than one region is passed in Regions the results are stored in tuples, the index of a value in the tuple
corresponding to the index of the input region. For empty regions all parameters have the value 0 if no other
behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be examined.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong *
Row coordinate of the upper left corner point.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong *
Column coordinate of the upper left corner point.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong *
Row coordinate of the lower right corner point.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong *
Column coordinate of the lower right corner point.
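Example

A sketch analogous to the example for inner_circle; image name and segmentation parameters are taken
from the other examples of this chapter.

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_inner_rectangle1(H,&Row1,&Column1,&Row2,&Column2);
T_gen_rectangle1(&Rectangles,Row1,Column1,Row2,Column2);
set_draw(WindowHandle,"margin");
disp_region(Rectangles,WindowHandle);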
Result
The operator inner_rectangle1 returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
inner_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
disp_rectangle1, gen_rectangle1
Alternatives
inner_circle
See also
smallest_rectangle1, select_shape
Module
Foundation

moments_region_2nd ( const Hobject Regions, double *M11, double *M20,
    double *M02, double *Ia, double *Ib )

T_moments_region_2nd ( const Hobject Regions, Htuple *M11,
    Htuple *M20, Htuple *M02, Htuple *Ia, Htuple *Ib )

Geometric moments of regions.


The operator moments_region_2nd calculates the moments (M20, M02) and the product of inertia of the
axes through the center parallel to the coordinate axes (M11). Furthermore the main axes of inertia (Ia, Ib) are
calculated.

Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F . Then the moments Mij
are defined by:

Mij = Σ_{(Z,S)∈R} (Z0 − Z)^i (S0 − S)^j

wherein Z and S run through all pixels of the region R.


Furthermore, let

h = (M20 + M02) / 2

Then Ia and Ib are defined by:

Ia = h + √(h² − M20 ∗ M02 + M11²)


Ib = h − √(h² − M20 ∗ M02 + M11²)

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M11 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Product of inertia of the axes through the center parallel to the coordinate axes.
. M20 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (line-dependent).
. M02 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (column-dependent).
. Ia (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
The one main axis of inertia.
. Ib (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
The other main axis of inertia.
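Example

A minimal sketch using a synthetic ellipse; elliptic_axis can be used to convert the returned moments
into radii and an orientation.

gen_ellipse(&E,200.0,200.0,0.0,100.0,20.0);
moments_region_2nd(E,&M11,&M20,&M02,&Ia,&Ib);
printf("M11=%g M20=%g M02=%g\n",M11,M20,M02);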
Complexity
If F is the area of the region the mean runtime complexity is O(√F).
Result
The operator moments_region_2nd returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (region is the empty set) is set
via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
moments_region_2nd is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd_invar
See also
elliptic_axis
Module
Foundation

moments_region_2nd_invar ( const Hobject Regions, double *M11,
    double *M20, double *M02 )

T_moments_region_2nd_invar ( const Hobject Regions, Htuple *M11,
    Htuple *M20, Htuple *M02 )

Geometric moments of regions.


The operator moments_region_2nd_invar calculates the scaled moments (M20, M02) and the product of
inertia of the axes through the center parallel to the coordinate axes (M11).

Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F . Then the moments Mij
are defined by:
Mij = (1 / F²) · Σ_{(Z,S)∈R} (Z0 − Z)^i (S0 − S)^j

wherein Z and S run through all pixels of the region R.


If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. M11 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Product of inertia of the axes through the center parallel to the coordinate axes.
. M20 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (line-dependent).
. M02 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (column-dependent).
Complexity
If F is the area of the region the mean runtime complexity is O(√F).
Result
The operator moments_region_2nd_invar returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_2nd_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

moments_region_2nd_rel_invar ( const Hobject Regions, double *PHI1,
    double *PHI2 )

T_moments_region_2nd_rel_invar ( const Hobject Regions,
    Htuple *PHI1, Htuple *PHI2 )

Geometric moments of regions.


The operator moments_region_2nd_rel_invar calculates the scaled relative moments (PHI1, PHI2).

Calculation: The moments PHI1 and PHI2 are defined by:

PHI1 = M20 + M02

PHI2 = (M20 + M02)² + M11²

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. PHI1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. PHI2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
Result
The operator moments_region_2nd_rel_invar returns the value H_MSG_TRUE if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_2nd_rel_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

moments_region_3rd ( const Hobject Regions, double *M21, double *M12,
    double *M03, double *M30 )

T_moments_region_3rd ( const Hobject Regions, Htuple *M21,
    Htuple *M12, Htuple *M03, Htuple *M30 )

Geometric moments of regions.


The operator moments_region_3rd calculates the translation-invariant central moments (M21, M12, M03,
M30) of order (p + q).

Calculation: x and y are the coordinates of the center of a region R with the area Z. Then the moments Mpq are
defined by:

Mpq = Σ_{i=1..Z} (xi − x)^p (yi − y)^q

where x = m10 / m00 and y = m01 / m00.

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M21 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (line-dependent).
. M12 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).
. M03 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).


. M30 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *


Moment of 3rd order (line-dependent).
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).
Result
The operator moments_region_3rd returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_3rd is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

moments_region_3rd_invar ( const Hobject Regions, double *M21,
    double *M12, double *M03, double *M30 )

T_moments_region_3rd_invar ( const Hobject Regions, Htuple *M21,
    Htuple *M12, Htuple *M03, Htuple *M30 )

Geometric moments of regions.


The operator moments_region_3rd_invar calculates the scale-invariant moments (M21, M12, M03, M30).

Calculation: The moments Mpq are defined by:

Mpq = µpq / µ³

where p + q ≥ 2 and µ = µ00 = m00.

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. M21 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (line-dependent).
. M12 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).
. M03 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).
. M30 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (line-dependent).


Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).
Result
The operator moments_region_3rd_invar returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_3rd_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

moments_region_central ( const Hobject Regions, double *I1,
                         double *I2, double *I3, double *I4 )

T_moments_region_central ( const Hobject Regions, Htuple *I1,
                           Htuple *I2, Htuple *I3, Htuple *I4 )

Geometric moments of regions.


The operator moments_region_central calculates the central moments (I1, I2, I3, I4).
Calculation: The moments I_i are defined by:

    I_1 = \mu_{20}\mu_{02} - \mu_{11}^2

    I_2 = (\mu_{30}\mu_{03} - \mu_{21}\mu_{12})^2 - 4(\mu_{30}\mu_{12} - \mu_{21}^2)(\mu_{21}\mu_{03} - \mu_{12}^2)

    I_3 = \mu_{20}(\mu_{21}\mu_{03} - \mu_{12}^2) - \mu_{11}(\mu_{30}\mu_{03} - \mu_{21}\mu_{12}) + \mu_{02}(\mu_{30}\mu_{12} - \mu_{21}^2)

    I_4 = \mu_{30}^2\mu_{02}^3 - 6\mu_{30}\mu_{21}\mu_{11}\mu_{02}^2 + 6\mu_{30}\mu_{12}\mu_{02}(2\mu_{11}^2 - \mu_{20}\mu_{02})
        + \mu_{30}\mu_{03}(6\mu_{20}\mu_{11}\mu_{02} - 8\mu_{11}^3) + 9\mu_{21}^2\mu_{20}\mu_{02}^2 - 18\mu_{21}\mu_{12}\mu_{20}\mu_{11}\mu_{02}
        + 6\mu_{21}\mu_{03}\mu_{20}(2\mu_{11}^2 - \mu_{20}\mu_{02}) + 9\mu_{12}^2\mu_{20}^2\mu_{02} - 6\mu_{12}\mu_{03}\mu_{11}\mu_{20}^2 + \mu_{03}^2\mu_{20}^3

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. I1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order.
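Example

A minimal sketch (image name and threshold values are placeholders):

/* central moment invariants I1..I4 of a segmented region */
read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
moments_region_central(Region,&I1,&I2,&I3,&I4);
printf("I1=%g I2=%g I3=%g I4=%g\n",I1,I2,I3,I4);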
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).


Result
The operator moments_region_central returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_central is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

moments_region_central_invar ( const Hobject Regions, double *PSI1,
                               double *PSI2, double *PSI3, double *PSI4 )

T_moments_region_central_invar ( const Hobject Regions, Htuple *PSI1,
                                 Htuple *PSI2, Htuple *PSI3, Htuple *PSI4 )

Geometric moments of regions.


The operator moments_region_central_invar calculates the moments (PSI1, PSI2, PSI3, PSI4) that
are invariant under translation and general linear transformations.
Calculation: The moments \psi_i are defined by:

    \psi_1 = \frac{I_1}{\mu^4}

    \psi_2 = \frac{I_2}{\mu^{10}}

    \psi_3 = \frac{I_3}{\mu^7}

    \psi_4 = \frac{I_4}{\mu^{11}}

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. PSI1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. PSI2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. PSI3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. PSI4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
Complexity
If Z is the area of the region the mean runtime complexity is O(√Z).


Result
The operator moments_region_central_invar returns the value H_MSG_TRUE if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_central_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation

orientation_region ( const Hobject Regions, double *Phi )

T_orientation_region ( const Hobject Regions, Htuple *Phi )

Orientation of a region.
The operator orientation_region calculates the orientation of the region. The operator is based on
elliptic_axis. In addition the point on the contour with maximal distance to the center of gravity is cal-
culated. If the column coordinate of this point is less than the column coordinate of the center of gravity the value
of π is added to the angle.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system
(’no_object_result’,<Result>)).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Orientation of region (arc measure).
Assertion : (−pi ≤ Phi) ∧ (Phi < pi)
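Example

A minimal sketch (image name and threshold values are placeholders); Phi is returned in radians:

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
orientation_region(Region,&Phi);
printf("orientation = %f rad\n",Phi);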
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator orientation_region returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
orientation_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
disp_arrow
Alternatives
elliptic_axis, smallest_rectangle2


See also
moments_region_2nd, line_orientation
Module
Foundation

rectangularity ( const Hobject Regions, double *Rectangularity )

T_rectangularity ( const Hobject Regions, Htuple *Rectangularity )

Shape factor for the rectangularity of a region.


The operator rectangularity calculates the rectangularity of the input regions.
To determine the rectangularity, first a rectangle is computed that has the same first and second order moments
as the input region. The computation of the rectangularity measure is finally based on the area of the difference
between the computed rectangle and the input region normalized with respect to the area of the rectangle.
For rectangles, rectangularity returns the value 1. The more the input region deviates from a perfect
rectangle, the smaller the returned value of Rectangularity.
In case of an empty region the operator rectangularity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the rectangularity are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Attention
For input regions whose orientation cannot be computed using second order moments (as is the case for
square regions, for example), the returned Rectangularity is underestimated by up to 10%, depending on the
orientation of the input region.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Rectangularity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Rectangularity of the input region(s).
Assertion : (0 ≤ Rectangularity) ∧ (Rectangularity ≤ 1.0)
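Example

A minimal sketch (image name and threshold values are placeholders):

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
rectangularity(Region,&Rectangularity);
printf("rectangularity = %f\n",Rectangularity);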
Result
The operator rectangularity returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
rectangularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
circularity, compactness, convexity, eccentricity
See also
contlength, area_center, select_shape
References
P. L. Rosin: “Measuring rectangularity”; Machine Vision and Applications; vol. 11; pp. 191-196; Springer-Verlag,
1999.
Module
Foundation


roundness ( const Hobject Regions, double *Distance, double *Sigma,
            double *Roundness, double *Sides )

T_roundness ( const Hobject Regions, Htuple *Distance, Htuple *Sigma,
              Htuple *Roundness, Htuple *Sides )

Shape factors from contour.


The operator roundness examines the distance between the contour and the center of the area. In particular
the mean distance (Distance), the deviation from the mean distance (Sigma) and two shape features derived
therefrom are determined. Roundness is the relation between mean value and standard deviation, and Sides
indicates the number of polygon pieces if a regular polygon is concerned.
The contour for calculating the features is determined depending on the global neighborhood (see set_system).
Calculation:
Let p be the center of the area, p_i the contour pixels, and F the area (i.e., the number of pixels) of the contour. Then:

    Distance = \frac{1}{F} \sum_i \| p - p_i \|

    Sigma^2  = \frac{1}{F} \sum_i \left( \| p - p_i \| - Distance \right)^2

    Roundness = 1 - \frac{Sigma}{Distance}

    Sides = 1.4111 \cdot \left( \frac{Distance}{Sigma} \right)^{0.4724}

If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be examined.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mean distance from the center.
Assertion : Distance ≥ 0.0
. Sigma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Standard deviation of Distance.
Assertion : Sigma ≥ 0.0
. Roundness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Shape factor for roundness.
Assertion : Roundness ≤ 1.0
. Sides (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Number of polygon sides.
Assertion : Sides ≥ 0
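Example

A minimal sketch (image name, threshold values, and the selected component index are placeholders):

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
connection(Region,&Components);
select_obj(Components,&Single,1);
roundness(Single,&Distance,&Sigma,&Roundness,&Sides);
printf("roundness=%f, approx. %f polygon sides\n",Roundness,Sides);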
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator roundness returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
roundness is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection


Alternatives
compactness
See also
contlength
References
R. Haralick, L. Shapiro “Computer and Robot Vision” Addison-Wesley, 1992, pp. 61
Module
Foundation

T_runlength_distribution ( const Hobject Region, Htuple *Foreground,
                           Htuple *Background )

Distribution of runs needed for runlength encoding of a region.


The operator runlength_distribution calculates the distribution of the run lengths of a region, both for the
foreground and for the background. For every length, the frequency of its occurrence is determined. Runs of
infinite length are not counted; therefore the background consists of the holes of the region. As many values are
passed as given by the maximum run length of the foreground or background, respectively, so the lengths of the
two tuples usually differ. The first entry of each tuple is always 0 (there are no runs of length 0). If there are no
background runs (holes) the empty tuple is passed at Background. Analogously, the empty tuple is passed at
Foreground in case of an empty region.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Region to be examined.
. Foreground (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Length distribution of the region (foreground).
. Background (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Length distribution of the background.
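Example

A minimal sketch (image name and threshold values are placeholders):

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
T_runlength_distribution(Region,&Foreground,&Background);
/* the tuples now hold, for every run length l, the frequency of runs of that length */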
Complexity
If n is the number of runs of the region the runtime complexity is O(n).
Result
The operator runlength_distribution returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If more than one region is passed an exception handling is raised.
Parallelization Information
runlength_distribution is reentrant and processed without parallelization.
Possible Predecessors
threshold, select_obj
Alternatives
runlength_features
See also
runlength_features
Module
Foundation

runlength_features ( const Hobject Regions, Hlong *NumRuns,
                     double *KFactor, double *LFactor, double *MeanLength,
                     Hlong *Bytes )

T_runlength_features ( const Hobject Regions, Htuple *NumRuns,
                       Htuple *KFactor, Htuple *LFactor, Htuple *MeanLength,
                       Htuple *Bytes )

Characteristic values for runlength coding of regions.


The operator runlength_features calculates for every input region from Regions the number of runs
necessary for storing this region with the aid of runlength coding. Furthermore, the so-called "K-factor" is
determined, which indicates by how much the number of runs differs from that of the ideal shape, a square,
for which this value is 1.0.
The K-factor (KFactor) is calculated according to the formula:

    KFactor = \frac{NumRuns}{\sqrt{Area}}

wherein Area indicates the area of the region. It should be noted that the K-factor can be smaller than 1.0 (in case
of long horizontal regions).
The L-factor (LFactor) indicates the mean number of runs for each line index occurring in the region.
MeanLength indicates the mean length of the runs. The parameter Bytes indicates how many bytes are neces-
sary for coding the region with runlengths.
Attention
None of the features calculated by the operator runlength_features is rotation invariant, because the runlength
coding depends on the direction. The operator runlength_features does not serve for calculating shape
features but for controlling and analysing the efficiency of the runlength coding.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. NumRuns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of runs.
Assertion : 0 ≤ NumRuns
. KFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Storing factor in relation to a square.
Assertion : 0 ≤ KFactor
. LFactor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mean number of runs per line.
Assertion : 0 ≤ LFactor
. MeanLength (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mean length of runs.
Assertion : 0 ≤ MeanLength
. Bytes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Number of bytes necessary for coding the region.
Assertion : 0 ≤ Bytes
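Example

A minimal sketch (image name and threshold values are placeholders):

read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
runlength_features(Region,&NumRuns,&KFactor,&LFactor,&MeanLength,&Bytes);
printf("%ld runs, K=%f, L=%f, mean length=%f, %ld bytes\n",
       (long)NumRuns,KFactor,LFactor,MeanLength,(long)Bytes);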
Complexity
The mean runtime complexity is O(1).
Result
The operator runlength_features returns the value H_MSG_TRUE if the input is not empty. If necessary
an exception handling is raised.
Parallelization Information
runlength_features is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
See also
runlength_features, runlength_distribution
Module
Foundation


select_region_point ( const Hobject Regions, Hobject *DestRegions,
                      Hlong Row, Hlong Column )

T_select_region_point ( const Hobject Regions, Hobject *DestRegions,
                        const Htuple Row, const Htuple Column )

Choose all regions containing a given pixel.


The operator select_region_point selects all regions from Regions containing the test pixel
(Row,Column), i.e.:

|Regions[n] ∩ {(Row, Column)}| = 1

Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject


Regions to be examined.
. DestRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
All regions containing the test pixel.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Line index of the test pixel.
Default Value : 100
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of the test pixel.
Default Value : 100
Example

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image);
regiongrowing(Image,&Seg,3,3,5.0,0);
set_color(WindowHandle,"red");
set_draw(WindowHandle,"margin");
do {
printf("Select the region with the mouse (end: right button)\n");
get_mbutton(WindowHandle,&Row,&Column,&Button);
select_region_point(Seg,&Single,Row,Column);
disp_region(Single,WindowHandle);
clear(Single);
} while(Button != 4);

Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(√F) ∗ N).
Result
The operator select_region_point returns the value H_MSG_TRUE if the parameters are correct.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
select_region_point is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
test_region_point


See also
get_mbutton, get_mposition
Module
Foundation

T_select_region_spatial ( const Hobject Regions1, const Hobject Regions2,
                          const Htuple Direction, Htuple *RegionIndex1,
                          Htuple *RegionIndex2 )

Pose relation of regions.


The operator select_region_spatial chooses those regions from Regions2 that fulfill the neighboring
relation Direction. The regions to be examined have to be passed in Regions1 and Regions2, respectively.
Regions1 can have three different states:

• Regions1 is empty:
In this case all regions in Regions2 are permutatively checked for neighborhood.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
The regions at the n-th position in Regions1 and Regions2 are each checked for a neighboring relation.

Possible values for Direction are:

’left’: Regions2 is left of Regions1


’right’: Regions2 is right of Regions1
’above’: Regions2 is above Regions1
’below’: Regions2 is below Regions1

The operator select_region_spatial calculates the centers of the regions to be compared and decides
according to the angle between the center straight lines and the x axis whether the direction relation is fulfilled.
The relation is fulfilled within the area of -45 degree to +45 degree around the coordinate axes. Thus, the direction
relation can be understood in such a way that the center of the second region must be located left (or right, above,
below) of the center of the first region. The indices of the regions fulfilling the direction relation are located at the
n-th position in RegionIndex1 and RegionIndex2, i.e., the region with the index RegionIndex2[n] has
the indicated relation with the region with the index RegionIndex1[n]. Access to regions via the index can be
obtained via the operator copy_obj.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Desired neighboring relation.
Default Value : "left"
List of values : Direction ∈ {"left", "right", "above", "below"}
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices in the input tuples (Regions1 or Regions2), respectively.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices in the input tuples (Regions1 or Regions2), respectively.
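Example

A minimal sketch (the segmentation parameters are placeholders; the tuple helpers create_tuple() and
set_s() of the HALCON/C interface are assumed to be available):

read_image(&Image,"fabrik");
regiongrowing(Image,&Seg,3,3,5.0,100);
gen_empty_obj(&Empty);                /* empty Regions1: check Seg permutatively */
create_tuple(&Direction,1);
set_s(Direction,"left",0);
T_select_region_spatial(Empty,Seg,Direction,&Index1,&Index2);
/* region Index2[n] lies to the left of region Index1[n]; access the regions via copy_obj */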
Result
The operator select_region_spatial returns the value H_MSG_TRUE if Regions2 is not empty. The
behavior in case of empty parameter Regions2 (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.


Parallelization Information
select_region_spatial is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
area_center, intersection
See also
spatial_relation, find_neighbors, copy_obj, obj_to_integer
Module
Foundation

select_shape ( const Hobject Regions, Hobject *SelectedRegions,
               const char *Features, const char *Operation, double Min,
               double Max )

T_select_shape ( const Hobject Regions, Hobject *SelectedRegions,
                 const Htuple Features, const Htuple Operation,
                 const Htuple Min, const Htuple Max )

Choose regions with the aid of shape features.


The operator select_shape chooses regions according to shape. For each input region from Regions the
indicated features (Features) are calculated. If each (Operation = ’and’) or at least one (Operation = ’or’)
of the calculated features is within the specified limits (Min,Max) the region is adopted into the output (duplicated).
Condition: Min_i ≤ Feature_i(Object) ≤ Max_i
Possible values for Features:
’area’: Area of the object
’row’: Row index of the center
’column’: Column index of the center
’width’: Width of the region
’height’: Height of the region
’row1’: Row index of upper left corner
’column1’: Column index of upper left corner
’row2’: Row index of lower right corner
’column2’: Column index of lower right corner
’circularity’: Circularity (see circularity)
’compactness’: Compactness (see compactness)
’contlength’: Total length of contour (see operator contlength)
’convexity’: Convexity (see convexity)
’rectangularity’: Rectangularity (see rectangularity)
’ra’: Main radius of the equivalent ellipse (see elliptic_axis)
’rb’: Secondary radius of the equivalent ellipse (see elliptic_axis)
’phi’: Orientation of the equivalent ellipse (see elliptic_axis)
’anisometry’: Anisometry (see eccentricity)
’bulkiness’: Bulkiness (see operator eccentricity)
’struct_factor’: Structure factor (see operator eccentricity)
’outer_radius’: Radius of smallest surrounding circle (see smallest_circle)
’inner_radius’: Radius of largest inner circle (see inner_circle)
’inner_width’: Width of the largest axis-parallel rectangle that fits into the region (see inner_rectangle1)
’inner_height’: Height of the largest axis-parallel rectangle that fits into the region (see inner_rectangle1)


’dist_mean’: Mean distance from the region border to the center (see operator roundness)
’dist_deviation’: Deviation of the distance from the region border to the center (see operator roundness)
’roundness’: Roundness (see operator roundness)
’num_sides’: Number of polygon sides (see operator roundness)
’connect_num’: Number of connection components (see operator connect_and_holes)
’holes_num’: Number of holes (see operator connect_and_holes)
’max_diameter’: Maximum diameter of the region (see operator diameter_region)
’orientation’: Orientation of the region (see operator orientation_region)
’euler_number’: Euler number (see operator euler_number)
’rect2_phi’: Orientation of the smallest surrounding rectangle (see operator smallest_rectangle2)
’rect2_len1’: Half the length of the smallest surrounding rectangle (see operator smallest_rectangle2)
’rect2_len2’: Half the width of the smallest surrounding rectangle (see operator smallest_rectangle2)
’moments_m11’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m20’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m02’: Geometric moments of the region (see operator moments_region_2nd)
’moments_ia’: Geometric moments of the region (see operator moments_region_2nd)
’moments_ib’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m11_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_m20_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_m02_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_phi1’: Geometric moments of the region (see operator moments_region_2nd_rel_invar)
’moments_phi2’: Geometric moments of the region (see operator moments_region_2nd_rel_invar)
’moments_m21’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m12’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m03’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m30’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m21_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m12_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m03_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m30_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_i1’: Geometric moments of the region (see operator moments_region_central)
’moments_i2’: Geometric moments of the region (see operator moments_region_central)
’moments_i3’: Geometric moments of the region (see operator moments_region_central)
’moments_i4’: Geometric moments of the region (see operator moments_region_central)
’moments_psi1’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi2’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi3’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi4’: Geometric moments of the region (see operator moments_region_central_invar)

If only one feature (Features) is used the value of Operation is meaningless. Several features are processed
in the sequence in which they are entered.


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions fulfilling the condition.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Shape features to be checked.
Default Value : "area"
List of values : Features ∈ {"area", "row", "column", "width", "height", "row1", "column1", "row2",
"column2", "circularity", "compactness", "contlength", "convexity", "rectangularity", "ra", "rb", "phi",
"anisometry", "bulkiness", "struct_factor", "outer_radius", "inner_radius", "inner_width", "inner_height",
"max_diameter", "dist_mean", "dist_deviation", "roundness", "num_sides", "orientation", "connect_num",
"holes_num", "euler_number", "rect2_phi", "rect2_len1", "rect2_len2", "moments_m11", "moments_m20",
"moments_m02", "moments_ia", "moments_ib", "moments_m11_invar", "moments_m20_invar",
"moments_m02_invar", "moments_phi1", "moments_phi2", "moments_m21", "moments_m12",
"moments_m03", "moments_m30", "moments_m21_invar", "moments_m12_invar", "moments_m03_invar",
"moments_m30_invar", "moments_i1", "moments_i2", "moments_i3", "moments_i4", "moments_psi1",
"moments_psi2", "moments_psi3", "moments_psi4"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Linkage type of the individual features.
Default Value : "and"
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double / Hlong / const char *
Lower limits of the features or ’min’.
Default Value : 150.0
Typical range of values : 0.0 ≤ Min ≤ 99999.0
Minimum Increment : 0.001
Recommended Increment : 1.0
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double / Hlong / const char *
Upper limits of the features or ’max’.
Default Value : 99999.0
Typical range of values : 0.0 ≤ Max ≤ 99999.0
Minimum Increment : 0.001
Recommended Increment : 1.0
Restriction : Max ≥ Min
Example

/* where are the eyes of the ape ? */


read_image(&Image,"affe");
threshold(Image,&S1,128.0,255.0);
connection(S1,&S2);
select_shape(S2,&S3,"area","and",500.0,50000.0);
select_shape(S3,&Eyes,"anisometry","and",1.0,1.7);
disp_region(Eyes,WindowHandle);

Result
The operator select_shape returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
select_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
select_shape, select_gray, shape_trans, reduce_domain, count_obj


Alternatives
select_shape_std
See also
area_center, circularity, compactness, contlength, convexity, rectangularity,
elliptic_axis, eccentricity, inner_circle, smallest_circle,
smallest_rectangle1, smallest_rectangle2, inner_rectangle1, roundness,
connect_and_holes, diameter_region, orientation_region, moments_region_2nd,
moments_region_2nd_invar, moments_region_2nd_rel_invar, moments_region_3rd,
moments_region_3rd_invar, moments_region_central,
moments_region_central_invar, select_obj
Module
Foundation

select_shape_proto ( const Hobject Regions, const Hobject Pattern,
                     Hobject *SelectedRegions, const char *Feature,
                     double Min, double Max )

T_select_shape_proto ( const Hobject Regions, const Hobject Pattern,
                       Hobject *SelectedRegions, const Htuple Feature,
                       const Htuple Min, const Htuple Max )

Choose regions having a certain relation to each other.


The operator select_shape_proto selects regions based on certain relations between the regions. Every
region from Regions is compared to the union of regions from Pattern. The limits (Min and Max) are
specified absolutely or in percent (0..100), depending on the feature. Possible values for Feature are:

’distance_dilate’ The minimum distance in the maximum norm from the edge of Pattern to the edge of every
region from Regions is determined (see distance_rr_min_dil).
’distance_contour’ The minimum Euclidean distance from the edge of Pattern to the edge of every region
from Regions is determined. (see distance_rr_min).
’distance_center’ The Euclidean distance from the center of Pattern to the center of every region from
Regions is determined.
’covers’ It is examined how well the region Pattern fits into the regions from Regions. If there is no shift
so that Pattern is a subset of Regions the overlap is 0. If Pattern corresponds to the region after a
corresponding shift the overlap is 100. Otherwise the area of the opening of Regions with Pattern is put
into relation with the area of Regions (in percent).
’fits’ It is examined whether Pattern can be shifted in such a way that it fits in Regions. If this is possible the
corresponding region is copied from Regions. The parameters Min and Max are ignored.
’overlaps_abs’ The area of the intersection of Pattern and every region in Regions is computed.
’overlaps_rel’ The area of the intersection of Pattern and every region in Regions is computed. The relative
overlap is the ratio of the area of the intersection and the area of the respective region in Regions (in percent).

Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region compared to Regions.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions fulfilling the condition.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Shape features to be checked.
Default Value : "covers"
List of values : Feature ∈ {"distance_center", "distance_dilate", "distance_contour", "covers", "fits",
"overlaps_abs", "overlaps_rel"}


. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong


Lower border of feature.
Default Value : 50.0
Suggested values : Min ∈ {0.0, 1.0, 5.0, 10.0, 20.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 99.0, 100.0,
200.0, 400.0}
Typical range of values : 0.0 ≤ Min
Minimum Increment : 0.001
Recommended Increment : 5.0
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong
Upper border of the feature.
Default Value : 100.0
Suggested values : Max ∈ {0.0, 10.0, 20.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 99.0, 100.0, 200.0, 300.0,
400.0}
Typical range of values : 0.0 ≤ Max
Minimum Increment : 0.001
Recommended Increment : 5.0
Example

regiongrowing(Image,&Seg,3,3,5.0,0);
gen_circle(&C,100.0,100.0,MinRadius);
select_shape_proto(Seg,C,&Fitted,"fits",0.0,0.0);

Result
The operator select_shape_proto returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
select_shape_proto is reentrant and processed without parallelization.
Possible Predecessors
connection, draw_region, gen_circle, gen_rectangle1, gen_rectangle2,
gen_ellipse
Possible Successors
select_gray, shape_trans, reduce_domain, count_obj
Alternatives
select_shape
See also
opening, erosion1, distance_rr_min_dil, distance_rr_min
Module
Foundation

select_shape_std ( const Hobject Regions, Hobject *SelectedRegions,
                   const char *Shape, double Percent )

T_select_shape_std ( const Hobject Regions, Hobject *SelectedRegions,
                     const Htuple Shape, const Htuple Percent )

Select regions of a given shape.


The operator select_shape_std compares the shape of the given regions with default shapes. If the region
has a similar shape it is adopted into the output. Possible values for Shape are:

’max_area’ The largest region is selected.


’rectangle1’ The surrounding rectangle parallel to the coordinate axes is determined via the operator
smallest_rectangle1. If the area difference in percent is larger than Percent the region is adopted.

HALCON/C Reference Manual, 2008-5-13


12.3. FEATURES 871

’rectangle2’ The smallest surrounding rectangle with any orientation is determined via the operator
smallest_rectangle2. If the area difference in percent is larger than Percent the region is adopted.
Note that as a more robust alternative the operator select_shape with Feature set to ’rectangularity’
can be used instead.

Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions to be selected.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions with desired shape.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape features to be checked.
Default Value : "max_area"
List of values : Shape ∈ {"max_area", "rectangle1", "rectangle2"}
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Similarity measure.
Default Value : 70.0
Suggested values : Percent ∈ {10.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 100.0}
Typical range of values : 0.0 ≤ Percent ≤ 100.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 10.0
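Example

A minimal sketch (image name and threshold values are placeholders):

/* keep only the largest connected component */
read_image(&Image,"fabrik");
threshold(Image,&Region,128.0,255.0);
connection(Region,&Components);
select_shape_std(Components,&Largest,"max_area",70.0);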
Parallelization Information
select_shape_std is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection, smallest_rectangle1, smallest_rectangle2
Alternatives
intersection, complement, area_center, select_shape
See also
smallest_rectangle1, smallest_rectangle2, rectangularity
Module
Foundation

smallest_circle ( const Hobject Regions, double *Row, double *Column,
                  double *Radius )

T_smallest_circle ( const Hobject Regions, Htuple *Row, Htuple *Column,
                    Htuple *Radius )

Smallest surrounding circle of a region.


The operator smallest_circle determines the smallest surrounding circle of a region, i.e., the circle with the
smallest area of all circles containing the region. For this circle the center (Row,Column) and the radius (Radius)
are calculated. The procedure is applied when, for example, the location and size of circular objects (e.g., coins)
which, however, are not homogeneous inside or have broken edges due to bad segmentation, has to be determined.
The output of the procedure is selected in such a way that it can be used as input for the HALCONprocedures
disp_circle and gen_circle.
If several regions are passed in Regions corresponding tuples are returned as output parameter. In case of empty
region all parameters have the value 0.0 if no other behavior was set (see set_system).
Attention
Internally, the calculation is based on the center coordinates of the region pixels. To take into account that pixels
are not just infinitely small points but have a certain area, the calculated radius is enlarged by 0.5 before it is
returned in Radius. This, in most cases, gives acceptable results. However, in the worst case (pixel diagonal) this
enlargement is not sufficient. If one wants to ensure that the border of the input region completely lies within the
circle, one has to enlarge the radius by 1/√2 instead of 0.5. Consequently, the value returned in Radius must
be corrected by 1/√2 − 0.5. However, this would also be only an upper bound, i.e., the circle with the corrected
radius would be slightly too big in most cases.


Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double *
Line index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double *
Column index of the center.
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double *
Radius of the surrounding circle.
Assertion : Radius ≥ 0
Example

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_smallest_circle(H,&Row,&Column,&Radius);
T_gen_circle(&Circles,Row,Column,Radius);
set_draw(WindowHandle,"margin");
disp_region(Circles,WindowHandle);

Complexity
If F is the area of the region, then the mean runtime complexity is O(√F).
Result
The operator smallest_circle returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
elliptic_axis, smallest_rectangle1, smallest_rectangle2
See also
set_shape, select_shape, inner_circle
Module
Foundation

smallest_rectangle1 ( const Hobject Regions, Hlong *Row1,
                      Hlong *Column1, Hlong *Row2, Hlong *Column2 )

T_smallest_rectangle1 ( const Hobject Regions, Htuple *Row1,
                        Htuple *Column1, Htuple *Row2, Htuple *Column2 )

Surrounding rectangle parallel to the coordinate axes.


The operator smallest_rectangle1 calculates the surrounding rectangle of all input regions (paral-
lel to the coordinate axes). The surrounding rectangle is described by the coordinates of the corner pixels
(Row1,Column1,Row2,Column2).


If more than one region is passed in Regions, the results are stored in tuples, the index of a value in the tuple
corresponding to the index of a region in the input. In case of empty region all parameters have the value 0 if no
other behavior was set (see set_system).
Attention
In case of empty region the result of Row1,Column1, Row2 and Column2 (all are 0) can lead to confusion.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong *
Line index of upper left corner point.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong *
Column index of upper left corner point.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong *
Line index of lower right corner point.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong *
Column index of lower right corner point.
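Example

A minimal sketch, analogous to the example of smallest_circle (segmentation parameters are placeholders):

read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_smallest_rectangle1(H,&Row1,&Column1,&Row2,&Column2);
T_gen_rectangle1(&Rectangles,Row1,Column1,Row2,Column2);
set_draw(WindowHandle,"margin");
disp_region(Rectangles,WindowHandle);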
Complexity
If F is the area of the region the mean runtime complexity is O(sqrt(F )).
Result
The operator smallest_rectangle1 returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle1, gen_rectangle1
Alternatives
smallest_rectangle2, area_center
See also
select_shape
Module
Foundation

smallest_rectangle2 ( const Hobject Regions, double *Row, double *Column,
                      double *Phi, double *Length1, double *Length2 )

T_smallest_rectangle2 ( const Hobject Regions, Htuple *Row, Htuple *Column,
                        Htuple *Phi, Htuple *Length1, Htuple *Length2 )

Smallest surrounding rectangle with any orientation.


The operator smallest_rectangle2 determines the smallest surrounding rectangle of a region, i.e., the
rectangle with the smallest area of all rectangles containing the region. For this rectangle the center, the inclination
and the two radii are calculated.
The procedure is applied when, for example, the location of a scenery of several regions (e.g., printed
text on a rectangular paper or in rectangular print (justified lines)) must be found. The parameters of
smallest_rectangle2 are chosen in such a way that they can be used directly as input for the HALCON-
procedures disp_rectangle2 and gen_rectangle2.
If more than one region is passed in Regions the results are stored in tuples, the index of a value in the tuple
corresponding to the index of a region in the input. In case of empty region all parameters have the value 0.0 if no
other behavior was set (see set_system).


Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double *
Line index of the center.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double *
Column index of the center.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double *
Orientation of the surrounding rectangle (arc measure)
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; (Htuple .) double *
First radius (half length) of the surrounding rectangle.
Assertion : Length1 ≥ 0.0
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; (Htuple .) double *
Second radius (half width) of the surrounding rectangle.
Assertion : (Length2 ≥ 0.0) ∧ (Length2 ≤ Length1)
Example (Syntax: HDevelop)

read_image(Image,’fabrik’)
open_window(0,0,-1,-1,’root’,’visible’,’’,WindowHandle)
regiongrowing(Image,Seg,5,5,6,100)
smallest_rectangle2(Seg,Row,Column,Phi,Length1,Length2)
gen_rectangle2(Rectangle,Row,Column,Phi,Length1,Length2)
set_draw(WindowHandle,’margin’)
disp_region(Rectangle,WindowHandle)

Complexity
If F is the area of the region and N is the number of supporting points of the convex hull, the runtime complexity
is O(√F + N²).
Result
The operator smallest_rectangle2 returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_rectangle2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle2, gen_rectangle2
Alternatives
elliptic_axis, smallest_rectangle1
See also
smallest_circle, set_shape
Module
Foundation

T_spatial_relation ( const Hobject Regions1, const Hobject Regions2,
                     const Htuple Percent, Htuple *RegionIndex1,
                     Htuple *RegionIndex2, Htuple *Relation1,
                     Htuple *Relation2 )

Pose relation of regions with regard to the coordinate axes.

HALCON/C Reference Manual, 2008-5-13


12.3. FEATURES 875

The operator spatial_relation selects regions located by Percent percent “left”, “right”, “above” or
“below” other regions. Regions1 and Regions2 contain the regions to be compared. Regions1 can have
three states:

• Regions1 is empty:
In this case all regions in Regions2 are permutatively checked for neighborhood.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
Regions1 and Regions2 are checked for a neighboring relation.

The percentage Percent is interpreted in such a way that at least Percent percent of the area of the second
region must be located strictly left/right of or above/below the region margins of the first region. The indices of
the regions that fulfill at least one of these conditions are then located at the n-th position in the output parame-
ters RegionIndex1 and RegionIndex2. Additionally the output parameters Relation1 and Relation2
contain at the n-th position the type of relation of the region pair (RegionIndex1[n], RegionIndex2[n]),
i.e., region with index RegionIndex2[n] has the relation Relation1[n] and Relation2[n] with region with
index RegionIndex1[n].
Possible values for Relation1 and Relation2 are:

Relation1: ’left’, ’right’, or ’’ (the empty string)

Relation2: ’above’, ’below’, or ’’ (the empty string)

In RegionIndex1 and RegionIndex2 the indices of the regions in the tuples of the input regions (Regions1
or Regions2), respectively, are entered as image identifiers. Access to chosen regions via the index can be
obtained by the operator copy_obj.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Percentage of the area of the comparative region which must be located left/right or above/below the region
margins of the starting region.
Default Value : 50
Suggested values : Percent ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Typical range of values : 0 ≤ Percent ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (0 ≤ Percent) ∧ (Percent ≤ 100)
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. Relation1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Horizontal pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
. Relation2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Vertical pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
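Example

A minimal sketch (the segmentation parameters are placeholders; the tuple helpers create_tuple() and
set_i() of the HALCON/C interface are assumed to be available):

read_image(&Image,"fabrik");
regiongrowing(Image,&Seg,3,3,5.0,100);
gen_empty_obj(&Empty);                /* empty Regions1: compare the regions of Seg with each other */
create_tuple(&Percent,1);
set_i(Percent,50,0);
T_spatial_relation(Empty,Seg,Percent,&Index1,&Index2,&Relation1,&Relation2);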
Result
The operator spatial_relation returns the value H_MSG_TRUE if Regions2 is not empty and Percent
is chosen correctly. The behavior in case of empty parameter Regions2 (no input regions available) is set via
the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the
region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an
exception handling is raised.
Parallelization Information
spatial_relation is reentrant and processed without parallelization.


Possible Predecessors
threshold, regiongrowing, connection
Alternatives
area_center, intersection
See also
select_region_spatial, find_neighbors, copy_obj, obj_to_integer
Module
Foundation

12.4 Geometric-Transformations
T_affine_trans_region ( const Hobject Region,
Hobject *RegionAffineTrans, const Htuple HomMat2D,
const Htuple Interpolate )

Apply an arbitrary affine 2D transformation to regions.


affine_trans_region applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation, and
slant (skewing), to the regions given in Region and returns the transformed regions in RegionAffineTrans.
The affine transformation is described by the homogeneous transformation matrix given in HomMat2D, which
can be created using the operators hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_translate, etc., or be the result of operators like vector_angle_to_rigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the col coordinate corresponds to y of the coordinate system in which the transforma-
tion matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.
The parameter Interpolate determines whether the transformation is to be done by using interpolation in-
ternally. This can lead to smoother region boundaries, especially if regions are enlarged. However, the runtime
increases drastically.
Attention
affine_trans_region in general is not reversible (clipping and discretization during rotation and scaling).
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_region corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the region (input and output pixels as
homogeneous vectors):
       
\[
\begin{pmatrix} \mathit{RowTrans}_i \\ \mathit{ColTrans}_i \\ 1 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & -0.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \mathit{HomMat2D} \cdot
\begin{pmatrix} 1 & 0 & +0.5 \\ 0 & 1 & +0.5 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} \mathit{Row}_i \\ \mathit{Col}_i \\ 1 \end{pmatrix}
\]

As an effect, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the region, e.g., by operators like area_center. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric region and then rotate the region around this point using
hom_mat2d_rotate, the resulting region will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_region:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_region(Region, RegionAffineTrans, HomMat2DAdapted, ’false’)

Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be rotated and scaled.


. RegionAffineTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *


Transformed output region(s).
Number of elements : RegionAffineTrans = Region
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Interpolate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the transformation be done using interpolation?
Default Value : "false"
List of values : Interpolate ∈ {"true", "false"}
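Example
A minimal sketch (assuming the HALCON/C tuple helpers create_tuple, set_d, set_s, and destroy_tuple, the image
"monkey", and that HomMat2D is passed as the six elements of the 2 × 3 matrix in row-major order; here a pure
translation by 10 rows and 20 columns is applied):

Hobject Image, Region, RegionAffineTrans;
Htuple  HomMat2D, Interpolate;
double  mat[6] = {1.0, 0.0, 10.0,    /* assumed layout: first row of the 2x3 matrix  */
                  0.0, 1.0, 20.0};   /* second row; last column = translation vector */
int     i;

read_image(&Image, "monkey");
threshold(Image, &Region, 128.0, 255.0);
create_tuple(&HomMat2D, 6);
for (i = 0; i < 6; i++)
  set_d(HomMat2D, mat[i], i);
create_tuple(&Interpolate, 1);
set_s(Interpolate, "false", 0);
T_affine_trans_region(Region, &RegionAffineTrans, HomMat2D, Interpolate);
destroy_tuple(HomMat2D);
destroy_tuple(Interpolate);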
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_region returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input re-
gion via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception
handling is raised.
Parallelization Information
affine_trans_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_translate, hom_mat2d_invert,
hom_mat2d_rotate
Possible Successors
select_shape
Alternatives
move_region, mirror_region, zoom_region
See also
affine_trans_image
Module
Foundation

mirror_region ( const Hobject Region, Hobject *RegionMirror,
                const char *RowColumn, Hlong WidthHeight )

T_mirror_region ( const Hobject Region, Hobject *RegionMirror,
                  const Htuple RowColumn, const Htuple WidthHeight )

Reflect a region about an axis parallel to the x- or y-axis.


mirror_region reflects a region about an axis parallel to the x- or y-axis (parameter RowColumn).
The parameter WidthHeight specifies two times the coordinate of the axis of symmetry. Hence, if Region
has been extracted from an image and should be mirrored as if it had been extracted from a mirrored version
of this image, WidthHeight corresponds to one of the dimensions of this image (according to RowColumn).
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be reflected.
. RegionMirror (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Reflected region(s).
Number of elements : RegionMirror = Region
. RowColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Axis of symmetry.
Default Value : "row"
List of values : RowColumn ∈ {"column", "row"}


. WidthHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Twice the coordinate of the axis of symmetry.
Default Value : 512
Suggested values : WidthHeight ∈ {128, 256, 512, 525, 768, 1024}
Typical range of values : 1 ≤ WidthHeight ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : WidthHeight > 0
Example

read_image(&Image,"monkey");
threshold(Image,&Seg,128.0,255.0);
mirror_region(Seg,&Mirror,"row",512);
disp_region(Mirror,WindowHandle);

Parallelization Information
mirror_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
affine_trans_region
See also
zoom_region
Module
Foundation

move_region ( const Hobject Region, Hobject *RegionMoved, Hlong Row,
              Hlong Column )

T_move_region ( const Hobject Region, Hobject *RegionMoved,
                const Htuple Row, const Htuple Column )

Translate a region.
move_region translates the input regions by the vector given by (Row, Column). If necessary, the resulting
regions are clipped to the current image format.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be moved.
. RegionMoved (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Translated region(s).
Number of elements : RegionMoved = Region
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the vector by which the region is to be moved.
Default Value : 30
Suggested values : Row ∈ {-128, -64, -32, -16, -10, -8, -4, -2, -1, 0, 1, 2, 4, 5, 8, 10, 16, 32, 64, 128}
Typical range of values : -512 ≤ Row ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10


. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong


Column coordinate of the vector by which the region is to be moved.
Default Value : 30
Suggested values : Column ∈ {-128, -64, -32, -16, -10, -8, -4, -2, -1, 0, 1, 2, 4, 5, 8, 10, 16, 32, 64, 128}
Typical range of values : -512 ≤ Column ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
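Example
A minimal sketch (assuming the image "monkey" and an open window WindowHandle, as in the other examples of
this chapter):

Hobject Image, Seg, Moved;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
move_region(Seg, &Moved, 30, 50);     /* shift by 30 rows and 50 columns */
disp_region(Moved, WindowHandle);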
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
move_region always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception
handling is raised.
Parallelization Information
move_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
affine_trans_image, mirror_region, zoom_region
Module
Foundation

polar_trans_region ( const Hobject Region, Hobject *PolarTransRegion,
                     double Row, double Column, double AngleStart, double AngleEnd,
                     double RadiusStart, double RadiusEnd, Hlong Width, Hlong Height,
                     const char *Interpolation )

T_polar_trans_region ( const Hobject Region,
                       Hobject *PolarTransRegion, const Htuple Row, const Htuple Column,
                       const Htuple AngleStart, const Htuple AngleEnd,
                       const Htuple RadiusStart, const Htuple RadiusEnd, const Htuple Width,
                       const Htuple Height, const Htuple Interpolation )

Transform a region within an annular arc to polar coordinates.


polar_trans_region transforms a Region within the annular arc specified by the center point (Row,
Column), the radii RadiusStart and RadiusEnd and the angles AngleStart and AngleEnd to its polar
coordinate version in a virtual image of the dimensions Width × Height.
The polar transformation is a change of the coordinate system. Instead of a row and a column coordinate, each
point’s position is expressed by its radius r (i.e. the distance to the center point Row, Column) and the angle φ
between the column axis (through the center point) and the line from the center point towards the point. Note that
this transformation is not affine.
The coordinate (0, 0) in the output region always corresponds to the point in the input region that is specified by
RadiusStart and AngleStart. Analogously, the coordinate (Height − 1, Width − 1) corresponds to the
point in the input region that is specified by RadiusEnd and AngleEnd. In the usual mode (AngleStart
< AngleEnd and RadiusStart < RadiusEnd), the polar transformation is performed in the mathemati-
cally positive orientation (counterclockwise). Furthermore, points with smaller radii lie in the upper part of the
output region. By suitably exchanging the values of these parameters (e.g., AngleStart > AngleEnd or
RadiusStart > RadiusEnd), any desired orientation of the output region can be achieved.
The angles can be chosen from all real numbers. Center point and radii can be real as well. However, if they are
both integers and the difference of RadiusEnd and RadiusStart equals Height−1, calculation will be sped
up through an optimized routine.


The radii and angles are inclusive, which means that the first row of the virtual target image contains the circle
with radius RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles,
where the difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the
first column of the target image will be the same as the last.
To avoid this, do not make this difference 2π, but 2π(1 − 1/Width) instead.
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting Interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
If more than one region is passed in Region, their polar transformations are computed individually and stored
as a tuple in PolarTransRegion. Please note that the indices of an input region and its transformation only
correspond if the system variable ’store_empty_regions’ is set to ’true’ (see set_system). Otherwise empty
output regions are discarded and the length of the input tuple Region is most likely not equal to the length of the
output tuple PolarTransRegion.
Attention
If Width or Height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see set_system). Otherwise, an output region that does not lie within the
dimensions of the current image can produce an error message.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Input region.
. PolarTransRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Output region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Row coordinate of the center of the arc.
Default Value : 256
Suggested values : Row ∈ {0, 16, 32, 64, 128, 240, 256, 480, 512}
Typical range of values : 0 ≤ Row ≤ 32767
Restriction : (Row ≥ -131068) ∧ (Row ≤ 131068)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Column coordinate of the center of the arc.
Default Value : 256
Suggested values : Column ∈ {0, 16, 32, 64, 128, 256, 320, 512, 640}
Typical range of values : 0 ≤ Column ≤ 32767
Restriction : (Column ≥ -131068) ∧ (Column ≤ 131068)
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to column coordinate 0 of PolarTransRegion.
Default Value : 0.0
Suggested values : AngleStart ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853,
12.566370616}
Typical range of values : -6.2831853 ≤ AngleStart ≤ 6.2831853
. AngleEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to column coordinate Width − 1 of PolarTransRegion.
Default Value : 6.2831853
Suggested values : AngleEnd ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853, 12.566370616}
Typical range of values : -6.2831853 ≤ AngleEnd ≤ 6.2831853
. RadiusStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to row coordinate 0 of PolarTransRegion.
Default Value : 0
Suggested values : RadiusStart ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusStart ≤ 32767
. RadiusEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to row coordinate Height − 1 of PolarTransRegion.
Default Value : 100
Suggested values : RadiusEnd ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusEnd ≤ 32767


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong


Width of the virtual output image.
Default Value : 512
Suggested values : Width ∈ {256, 320, 512, 640, 800, 1024}
Typical range of values : 0 ≤ Width ≤ 32767
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Height of the virtual output image.
Default Value : 512
Suggested values : Height ∈ {240, 256, 480, 512, 600, 1024}
Typical range of values : 0 ≤ Height ≤ 32767
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation method for the transformation.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
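Example
A minimal sketch (assuming the image "monkey"; the annular arc between the radii 20 and 100 around the point
(256, 256) is unwrapped into a 512 × 256 virtual image):

Hobject Image, Seg, Polar;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
polar_trans_region(Seg, &Polar, 256.0, 256.0, 0.0, 6.2831853,
                   20.0, 100.0, 512, 256, "nearest_neighbor");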
Parallelization Information
polar_trans_region is reentrant and automatically parallelized (on tuple level).
See also
polar_trans_image, polar_trans_image_ext, polar_trans_image_inv,
polar_trans_region_inv, polar_trans_contour_xld, polar_trans_contour_xld_inv
Module
Foundation

polar_trans_region_inv ( const Hobject PolarRegion,
                         Hobject *XYTransRegion, double Row, double Column, double AngleStart,
                         double AngleEnd, double RadiusStart, double RadiusEnd, Hlong WidthIn,
                         Hlong HeightIn, Hlong Width, Hlong Height, const char *Interpolation )

T_polar_trans_region_inv ( const Hobject PolarRegion,
                           Hobject *XYTransRegion, const Htuple Row, const Htuple Column,
                           const Htuple AngleStart, const Htuple AngleEnd,
                           const Htuple RadiusStart, const Htuple RadiusEnd,
                           const Htuple WidthIn, const Htuple HeightIn, const Htuple Width,
                           const Htuple Height, const Htuple Interpolation )

Transform a region in polar coordinates back to cartesian coordinates.


polar_trans_region_inv transforms the polar coordinate representation of a region, stored in
PolarRegion, back onto an annular arc in cartesian coordinates, described by the radii RadiusStart and
RadiusEnd and the angles AngleStart and AngleEnd with the center point located at (Row, Column). All
of these values can be chosen as real numbers. In addition, the dimensions of the virtual image containing the region
PolarRegion must be given in WidthIn and HeightIn. WidthIn−1 is the column coordinate correspond-
ing to AngleEnd and HeightIn − 1 is the row coordinate corresponding to RadiusEnd. AngleStart and
RadiusStart correspond to column and row coordinate 0. Furthermore, the dimensions Width and Height
of the virtual output image containing the transformed region XYTransRegion are required.
The angles and radii are inclusive, which means that the row coordinate 0 in PolarRegion will be mapped
onto a circle at a distance of RadiusStart pixels from the specified center, and the row with the coordinate
HeightIn − 1 will be mapped onto a circle of radius RadiusEnd. This applies to AngleStart, AngleEnd,
and WidthIn in an analogous way. If the width of the input region PolarRegion corresponds to an angle
interval greater than 2π, the region is cropped such that the length of this interval is 2π.
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting Interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
polar_trans_region_inv is the inverse function of polar_trans_region.
The call sequence:
polar_trans_region(Region, PolarRegion, Row, Column, rad(360), 0, 0,
Radius, Width, Height, ’nearest_neighbor’)


polar_trans_region_inv(PolarRegion, XYTransRegion, Row, Column, rad(360),
                       0, 0, Radius, Width, Height, Width, Height, ’nearest_neighbor’)
returns the region Region, restricted to the circle around (Row, Column) with radius Radius, as its output
region XYTransRegion.
If more than one region is passed in PolarRegion, their cartesian transformations are computed individually
and stored as a tuple in XYTransRegion. Please note that the indices of an input region and its transformation
only correspond if the system variable ’store_empty_regions’ is set to ’true’ (see set_system). Otherwise
empty output regions are discarded and the length of the input tuple PolarRegion is most likely not equal to
the length of the output tuple XYTransRegion.
Attention
If Width or Height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see set_system). Otherwise, an output region that does not lie within the
dimensions of the current image can produce an error message.
Parameter

. PolarRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Input region.
. XYTransRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Output region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Row coordinate of the center of the arc.
Default Value : 256
Suggested values : Row ∈ {0, 16, 32, 64, 128, 240, 256, 480, 512}
Typical range of values : 0 ≤ Row ≤ 32767
Restriction : (Row ≥ -131068) ∧ (Row ≤ 131068)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Column coordinate of the center of the arc.
Default Value : 256
Suggested values : Column ∈ {0, 16, 32, 64, 128, 256, 320, 512, 640}
Typical range of values : 0 ≤ Column ≤ 32767
Restriction : (Column ≥ -131068) ∧ (Column ≤ 131068)
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to map the column coordinate 0 of PolarRegion to.
Default Value : 0.0
Suggested values : AngleStart ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853}
Typical range of values : -6.2831853 ≤ AngleStart ≤ 6.2831853
. AngleEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to map the column coordinate WidthIn − 1 of PolarRegion to.
Default Value : 6.2831853
Suggested values : AngleEnd ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853}
Typical range of values : -6.2831853 ≤ AngleEnd ≤ 6.2831853
. RadiusStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to map the row coordinate 0 of PolarRegion to.
Default Value : 0
Suggested values : RadiusStart ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusStart ≤ 32767
. RadiusEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to map the row coordinate HeightIn − 1 of PolarRegion to.
Default Value : 100
Suggested values : RadiusEnd ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusEnd ≤ 32767
. WidthIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Width of the virtual input image.
Default Value : 512
Suggested values : WidthIn ∈ {256, 320, 512, 640, 800, 1024}
Typical range of values : 0 ≤ WidthIn ≤ 32767


. HeightIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong


Height of the virtual input image.
Default Value : 512
Suggested values : HeightIn ∈ {240, 256, 480, 512, 600, 1024}
Typical range of values : 0 ≤ HeightIn ≤ 32767
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Width of the virtual output image.
Default Value : 512
Suggested values : Width ∈ {256, 320, 512, 640, 800, 1024}
Typical range of values : 0 ≤ Width ≤ 32767
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Height of the virtual output image.
Default Value : 512
Suggested values : Height ∈ {240, 256, 480, 512, 600, 1024}
Typical range of values : 0 ≤ Height ≤ 32767
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation method for the transformation.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
Parallelization Information
polar_trans_region_inv is reentrant and automatically parallelized (on tuple level).
See also
polar_trans_image, polar_trans_image_ext, polar_trans_image_inv,
polar_trans_region, polar_trans_contour_xld, polar_trans_contour_xld_inv
Module
Foundation

T_projective_trans_region ( const Hobject Regions,
                            Hobject *TransRegions, const Htuple HomMat2D,
                            const Htuple Interpolation )

Apply a projective transformation to a region.


projective_trans_region applies the projective transformation specified by the homogeneous matrix
HomMat2D on the regions in Regions and returns the transformed regions in TransRegions.
For creation and interpretation details of this matrix see also projective_trans_image.
If ’clip_region’ is set to its default value ’true’ by set_system(’clip_region’, ’true’) or if the
transformation is degenerated and thus produces infinite regions, the output region is clipped by the rectangle with
upper left corner (0, 0) and lower right corner (’width’, ’height’), where ’width’ and ’height’ are system variables
(see also get_system). If ’clip_region’ is ’false’, the output region is not clipped except by the maximum
supported coordinate size MAX_FORMAT. This may result in extremely memory and time intensive computations,
so use with care.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions.
. TransRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Output regions.
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation method for the transformation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear"}
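Example
A minimal sketch (assuming the HALCON/C tuple helpers create_tuple, set_d, set_s, and destroy_tuple, the image
"monkey", and that the matrix is passed as the nine elements of the 3 × 3 projective matrix in row-major order; in
practice HomMat2D is usually obtained from an operator such as vector_to_proj_hom_mat2d):

Hobject Image, Regions, TransRegions;
Htuple  HomMat2D, Interpolation;
double  mat[9] = {1.0, 0.0, 10.0,    /* an affine translation written as a */
                  0.0, 1.0, 20.0,    /* projective 3x3 matrix (assumed     */
                  0.0, 0.0,  1.0};   /* row-major layout)                  */
int     i;

read_image(&Image, "monkey");
threshold(Image, &Regions, 128.0, 255.0);
create_tuple(&HomMat2D, 9);
for (i = 0; i < 9; i++)
  set_d(HomMat2D, mat[i], i);
create_tuple(&Interpolation, 1);
set_s(Interpolation, "nearest_neighbor", 0);
T_projective_trans_region(Regions, &TransRegions, HomMat2D, Interpolation);
destroy_tuple(HomMat2D);
destroy_tuple(Interpolation);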
Parallelization Information
projective_trans_region is reentrant and automatically parallelized (on tuple level).


Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Module
Foundation

transpose_region ( const Hobject Region, Hobject *Transposed,
                   Hlong Row, Hlong Column )

T_transpose_region ( const Hobject Region, Hobject *Transposed,
                     const Htuple Row, const Htuple Column )

Reflect a region about a point.


transpose_region reflects a region about a point. The fixed point S is given by Row and Column. The image
P′ of a point P is determined by the following requirement: If P = S, then P′ = S, i.e., the point S is the fixed
point of the mapping. If P ≠ S, then S is the midpoint of the line segment connecting P and P′. Therefore, the
following equations result:

\[
  \mathrm{Column} = \frac{x + x'}{2}\,, \qquad \mathrm{Row} = \frac{y + y'}{2}\,.
\]

If Row and Column are set to the origin, the result is the transposition commonly used in morphology. Hence,
transpose_region is often used to reflect (transpose) a structuring element.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be reflected.
. Transposed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Transposed region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Suggested values : Row ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Suggested values : Column ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
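Example
A minimal sketch (assuming the image "monkey"; the segmented region is reflected about the point (256, 256)):

Hobject Image, Seg, Transposed;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
transpose_region(Seg, &Transposed, 256, 256);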
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is O(√F).

Result
transpose_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:


• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)

Otherwise, an exception is raised.


Parallelization Information
transpose_region is reentrant and automatically parallelized (on tuple level).
Possible Successors
reduce_domain, select_shape, area_center, connection
See also
dilation1, opening, closing
Module
Foundation

zoom_region ( const Hobject Region, Hobject *RegionZoom,
              double ScaleWidth, double ScaleHeight )

T_zoom_region ( const Hobject Region, Hobject *RegionZoom,
                const Htuple ScaleWidth, const Htuple ScaleHeight )

Zoom a region.
zoom_region enlarges or reduces the regions given in Region in the x- and y-direction by the given scale
factors ScaleWidth and ScaleHeight.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be zoomed.
. RegionZoom (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Zoomed region(s).
Number of elements : RegionZoom = Region
. ScaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; double
Scale factor in x-direction.
Default Value : 2.0
Suggested values : ScaleWidth ∈ {0.25, 0.5, 1.0, 2.0, 3.0}
Typical range of values : 0.0 ≤ ScaleWidth ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.5
. ScaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; double
Scale factor in y-direction.
Default Value : 2.0
Suggested values : ScaleHeight ∈ {0.25, 0.5, 1.0, 2.0, 3.0}
Typical range of values : 0.0 ≤ ScaleHeight ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.5
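Example
A minimal sketch (assuming the image "monkey"; the segmented regions are enlarged by a factor of 2 in both
directions):

Hobject Image, Seg, Zoomed;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
zoom_region(Seg, &Zoomed, 2.0, 2.0);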
Parallelization Information
zoom_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
zoom_image_size, zoom_image_factor
Module
Foundation


12.5 Sets
complement ( const Hobject Region, Hobject *RegionComplement )
T_complement ( const Hobject Region, Hobject *RegionComplement )

Return the complement of a region.


complement determines the complement of the input region(s).
If the system flag ’clip_region’ is ’true’, which is the default, the difference of the largest image processed so far
(see reset_obj_db) and the input region is returned.
If the system flag ’clip_region’ is ’false’ (see set_system), the resulting region would be infinitely large. To
avoid this, the complement is computed only virtually by setting the complement flag of Region to TRUE. For
succeeding operations, the de Morgan laws are applied while calculating results. Using complement with
’clip_region’ set to ’false’ makes sense only to avoid fringe effects, e.g., if the area of interest is bigger or smaller
than the image. For the latter case, the clipping would be set explicitly. If there is no reason to use the operator
with ’clip_region’=’false’ but you need the flag for other operations of your program, it is recommended to
temporarily set the system flag to ’true’ and change it back to ’false’ after applying complement. Otherwise,
negative regions may result from succeeding operations.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Input region(s).
. RegionComplement (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Complemented regions.
Number of elements : RegionComplement = Region
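Example
A minimal sketch (assuming the image "monkey" and the default setting ’clip_region’ = ’true’):

Hobject Image, Seg, NotSeg;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
complement(Seg, &NotSeg);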
Result
complement always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty in-
put region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling
is raised.
Parallelization Information
complement is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
difference, union1, union2, intersection, reset_obj_db, set_system
Module
Foundation

difference ( const Hobject Region, const Hobject Sub,
             Hobject *RegionDifference )

T_difference ( const Hobject Region, const Hobject Sub,
               Hobject *RegionDifference )

Calculate the difference of two regions.


difference calculates the set-theoretic difference of two regions:

(Regions in Region) − (Regions in Sub)

The resulting region is defined as the input region (Region) with all points from Sub removed.


Attention
Empty regions are valid for both parameters. On output, empty regions may result. The value of the system flag
’store_empty_region’ determines the behavior in this case.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. Sub (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
The union of these regions is subtracted from Region.
. RegionDifference (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting region.
Example

/* provides the region X without the points in Y */


difference(X,Y,&RegionDifference);

Complexity
Let N be the number of regions, F_1 be their average area, and F_2 be the total area of all regions in Sub. Then
the runtime complexity is O(F_1 ∗ log(F_1) + N ∗ (√F_1 + √F_2)).
Result
difference always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty in-
put region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling
is raised.
Parallelization Information
difference is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, symm_difference
Module
Foundation

intersection ( const Hobject Region1, const Hobject Region2,
               Hobject *RegionIntersection )

T_intersection ( const Hobject Region1, const Hobject Region2,
                 Hobject *RegionIntersection )

Calculate the intersection of two regions.


intersection calculates the intersection of the regions in Region1 with the regions in Region2. Each
region in Region1 is intersected with all regions in Region2. The order of regions in RegionIntersection
is identical to the order of regions in Region1.
Attention
Empty input regions are permitted. Because empty result regions are possible, the system flag ’store_empty_region’
should be set appropriately.
Parameter
. Region1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be intersected with all regions in Region2.
. Region2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions with which Region1 is intersected.


. RegionIntersection (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *


Result of the intersection.
Number of elements : RegionIntersection ≤ Region1
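Example
A minimal sketch (assuming the image "monkey"; the result contains the pixels with gray values in [150, 200]):

Hobject Image, Range1, Range2, Common;

read_image(&Image, "monkey");
threshold(Image, &Range1, 100.0, 200.0);
threshold(Image, &Range2, 150.0, 255.0);
intersection(Range1, Range2, &Common);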
Complexity

Let N be the number of regions in Region1, F1 be their average area, and F2 be the total area of all regions in
Region2. Then the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
intersection always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
intersection is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
union1, union2, complement
Module
Foundation

symm_difference ( const Hobject Region1, const Hobject Region2,
                  Hobject *RegionDifference )

T_symm_difference ( const Hobject Region1, const Hobject Region2,
                    Hobject *RegionDifference )

Calculate the symmetric difference of two regions.


symm_difference calculates the symmetric difference of two regions. Two equivalent ways of computing the
symmetric difference are shown in the example below. A third way to think of it is as the exclusive or of the two
regions.
Attention
Empty regions are valid for both parameters. On output, empty regions may result. The value of the system flag
’store_empty_region’ determines the behavior in this case.
Parameter
. Region1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input region 1.
. Region2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input region 2.
. RegionDifference (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting region.
Example (Syntax: HDevelop)

/* Simulate the symmetric difference of Region1 and Region2 with */
/* difference and union2: */
difference(Region1, Region2, Diff1)
difference(Region2, Region1, Diff2)
union2(Diff1, Diff2, Difference)

/* Simulate the symmetric difference of Region1 and Region2 with */
/* union2, intersection, and difference: */
union2(Region1, Region2, Union)
intersection(Region1, Region2, Intersection)
difference(Union, Intersection, Difference)

Result
symm_difference always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
symm_difference is reentrant and processed without parallelization.
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, difference
Module
Foundation

union1 ( const Hobject Region, Hobject *RegionUnion )


T_union1 ( const Hobject Region, Hobject *RegionUnion )

Return the union of all input regions.


union1 computes the union of all input regions and returns the result in RegionUnion.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of which the union is to be computed.
. RegionUnion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Union of all input regions.
Number of elements : RegionUnion ≤ Region
Example

/* Union of segmentation results: */


threshold(Image,&Region1,128.0,255.0);
dyn_threshold(Image,Mean,&Region2,5.0,"light");
concat_obj(Region1,Region2,&Regions);
union1(Regions,&RegionUnion);

Complexity
Let F be the sum of all areas of the input regions. Then the runtime complexity is O(log(√F) ∗ √F).
Result
union1 always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can be set via
set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via
set_system(’empty_region_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
union1 is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
union2


See also
intersection, complement
Module
Foundation

union2 ( const Hobject Region1, const Hobject Region2,
         Hobject *RegionUnion )

T_union2 ( const Hobject Region1, const Hobject Region2,
           Hobject *RegionUnion )

Return the union of two regions.


union2 computes the union of the region in Region1 with all regions in Region2. This means that union2
is not commutative!
Parameter
. Region1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region for which the union with all regions in Region2 is to be computed.
. Region2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions which should be added to Region1.
. RegionUnion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting regions.
Number of elements : RegionUnion = Region1
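Example
A minimal sketch (assuming the image "monkey"; dark and bright pixels are combined into a single region):

Hobject Image, Dark, Bright, Combined;

read_image(&Image, "monkey");
threshold(Image, &Dark, 0.0, 80.0);
threshold(Image, &Bright, 180.0, 255.0);
union2(Dark, Bright, &Combined);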
Complexity
Let F be the sum of all areas of the input regions. Then the runtime complexity is O(log(√F) ∗ √F).
Result
union2 always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can be set via
set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via
set_system(’empty_region_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
union2 is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
union1
See also
intersection, complement
Module
Foundation

12.6 Tests
test_equal_region ( const Hobject Regions1, const Hobject Regions2,
                    Hlong *IsEqual )

T_test_equal_region ( const Hobject Regions1, const Hobject Regions2,
                      Htuple *IsEqual )

Test whether the regions of two objects are identical.


The operator test_equal_region compares the regions of the two input parameters. The n-th region in
Regions1 is compared to the n-th region in Regions2 (for all n). If all regions are equal and the number of
regions is identical, IsEqual is set to TRUE, otherwise to FALSE.


Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Test regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
Number of elements : Regions1 = Regions2
. IsEqual (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
boolean result value.
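Example
A minimal sketch (assuming the image "monkey"; two identical segmentations compare as equal):

Hobject Image, Seg1, Seg2;
Hlong   IsEqual;

read_image(&Image, "monkey");
threshold(Image, &Seg1, 128.0, 255.0);
threshold(Image, &Seg2, 128.0, 255.0);
test_equal_region(Seg1, Seg2, &IsEqual);
printf("regions equal: %s\n", IsEqual ? "yes" : "no");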
Complexity
If F is the area of a region, the runtime complexity is O(1) or O(√F) if the result is TRUE, and O(√F) if the
result is FALSE.
Result
The operator test_equal_region returns the value H_MSG_TRUE if the parameters are correct.
The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). If the number of objects differs, an exception is raised.
Parallelization Information
test_equal_region is reentrant and processed without parallelization.
Alternatives
intersection, complement, area_center
See also
test_equal_obj
Module
Foundation

test_region_point ( const Hobject Regions, Hlong Row, Hlong Column,
                    Hlong *IsInside )

T_test_region_point ( const Hobject Regions, const Htuple Row,
                      const Htuple Column, Htuple *IsInside )

Test if the region contains the given point.


test_region_point tests if at least one input region of Regions contains the test point (Row,Column).
If this is the case, IsInside is set to TRUE, otherwise to FALSE.
Attention
In case of empty input (= no region) and set_system(’no_object_result’,’true’) IsInside is
set to FALSE (no region contains the pixel).
The test pixel is not contained in an empty region (no pixel of the region corresponds to the pixel). If all regions
are empty, IsInside is set to FALSE, i.e., an empty region behaves as if it did not exist.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Line index of the test pixel.
Default Value : 100
Typical range of values : 0 ≤ Row ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column index of the test pixel.
Default Value : 100
Typical range of values : 0 ≤ Column ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 1


. IsInside (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *


boolean result value.
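Example
A minimal sketch (assuming the image "monkey"; it tests whether the pixel (100, 100) belongs to the segmented
region):

Hobject Image, Seg;
Hlong   IsInside;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
test_region_point(Seg, 100, 100, &IsInside);
printf("point inside: %s\n", IsInside ? "yes" : "no");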
Complexity
If F is the area of one region and N is the number of regions, the runtime complexity is O(ln(√F) ∗ N).
Result
The operator test_region_point returns the value H_MSG_TRUE if the parameters are correct.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
test_region_point is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
union1, intersection, area_center
See also
select_region_point
Module
Foundation

test_subset_region ( const Hobject Region1, const Hobject Region2,
                     Hlong *IsSubset )

T_test_subset_region ( const Hobject Region1, const Hobject Region2,
                       Htuple *IsSubset )

Test whether a region is contained in another region.


test_subset_region tests whether Region1 is a subset of Region2 and returns the result in IsSubset.
If more than one region should be tested, Region1 and Region2 must have the same number of elements. In
this case, a tuple that contains as many elements as Region1 and Region2 is returned in IsSubset.
Parameter

. Region1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Test region.
. Region2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region for comparison.
Number of elements : Region1 = Region2
. IsSubset (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Is Region1 contained in Region2?
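Example
A minimal sketch (assuming the image "monkey"; the pixels with gray values in [200, 255] form a subset of those
in [128, 255]):

Hobject Image, Small, Large;
Hlong   IsSubset;

read_image(&Image, "monkey");
threshold(Image, &Small, 200.0, 255.0);
threshold(Image, &Large, 128.0, 255.0);
test_subset_region(Small, Large, &IsSubset);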
Result
test_subset_region returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). If the number of objects differs an exception is raised.
Parallelization Information
test_subset_region is reentrant and automatically parallelized (on tuple level).
Alternatives
difference, area_center
See also
test_equal_region
Module
Foundation


12.7 Transformation
background_seg ( const Hobject Foreground, Hobject *BackgroundRegions )
T_background_seg ( const Hobject Foreground,
Hobject *BackgroundRegions )

Determine the connected components of the background of given regions.


background_seg determines connected components of the background of the foreground regions given in
Foreground. This operator is normally used after an edge operator in order to determine the regions enclosed
by the extracted edges. The connected components are determined using 4-neighborhood.
Parameter
. Foreground (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions.
. BackgroundRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Connected components of the background.
Example

/* Segmentation with edge filter: */


read_image(&Image,"fabrik") ;
sobel_dir(Image,&Sobel,&Dir,"sum_sqrt",3) ;
threshold(Sobel,&Edges,20,255) ;
skeleton(Edges,&Margins) ;
background_seg(Margins,&Regions) ;

Complexity
Let F be the area of the background, H and W be the height and width of the image, and N be the number of
resulting regions. Then the runtime complexity is O(H + √F ∗ √N).
Result
background_seg always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
background_seg is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
Alternatives
complement, connection
See also
threshold, hysteresis_threshold, skeleton, expand_region, set_system, sobel_amp,
edges_image, roberts, bandpass_image
Module
Foundation

clip_region ( const Hobject Region, Hobject *RegionClipped, Hlong Row1,
              Hlong Column1, Hlong Row2, Hlong Column2 )

T_clip_region ( const Hobject Region, Hobject *RegionClipped,
                const Htuple Row1, const Htuple Column1, const Htuple Row2,
                const Htuple Column2 )

Clip a region to a rectangle.


clip_region clips the input regions to the rectangle given by the four control parameters. clip_region is
more efficient than calling intersection with a rectangle generated by gen_rectangle1.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be clipped.
. RegionClipped (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Clipped regions.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Row coordinate of the upper left corner of the rectangle.
Default Value : 0
Suggested values : Row1 ∈ {0, 128, 200, 256}
Typical range of values : −∞ ≤ Row1 ≤ ∞ (lin)
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column coordinate of the upper left corner of the rectangle.
Default Value : 0
Suggested values : Column1 ∈ {0, 128, 200, 256}
Typical range of values : −∞ ≤ Column1 ≤ ∞ (lin)
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong
Row coordinate of the lower right corner of the rectangle.
Default Value : 256
Suggested values : Row2 ∈ {128, 200, 256, 512}
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x ; Hlong
Column coordinate of the lower right corner of the rectangle.
Default Value : 256
Suggested values : Column2 ∈ {128, 200, 256, 512}
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
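Example
A minimal sketch (assuming the image "monkey"; the segmentation result is clipped to the upper left 256 × 256
pixels):

Hobject Image, Seg, Clipped;

read_image(&Image, "monkey");
threshold(Image, &Seg, 128.0, 255.0);
clip_region(Seg, &Clipped, 0, 0, 255, 255);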
Result
clip_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case
of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
clip_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
intersection, gen_rectangle1, clip_region_rel
Module
Foundation

clip_region_rel ( const Hobject Region, Hobject *RegionClipped,
                  Hlong Top, Hlong Bottom, Hlong Left, Hlong Right )

T_clip_region_rel ( const Hobject Region, Hobject *RegionClipped,
                    const Htuple Top, const Htuple Bottom, const Htuple Left,
                    const Htuple Right )

Clip a region relative to its size.


clip_region_rel clips a region to a rectangle that lies within the region’s enclosing rectangle. The size of this
rectangle is determined by the enclosing rectangle of the region, reduced by the values given in the four control
parameters. All four parameters must contain numbers greater than or equal to zero; they determine by which
amount the rectangle is reduced at the top (Top), at the bottom (Bottom), at the left (Left), and at the right
(Right). If all parameters are set to zero, the region remains unchanged.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be clipped.
. RegionClipped (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Clipped regions.
. Top (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of rows clipped at the top.
Default Value : 1
Suggested values : Top ∈ {0, 1, 2, 3, 4, 5, 7, 10, 20, 30, 50}
Typical range of values : 0 ≤ Top (lin)
Minimum Increment : 1
Recommended Increment : 1
. Bottom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of rows clipped at the bottom.
Default Value : 1
Suggested values : Bottom ∈ {0, 1, 2, 3, 4, 5, 7, 10, 20, 30, 50}
Typical range of values : 0 ≤ Bottom (lin)
Minimum Increment : 1
Recommended Increment : 1
. Left (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns clipped at the left.
Default Value : 1
Suggested values : Left ∈ {0, 1, 2, 3, 4, 5, 7, 10, 20, 30, 50}
Typical range of values : 0 ≤ Left (lin)
Minimum Increment : 1
Recommended Increment : 1
. Right (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of columns clipped at the right.
Default Value : 1
Suggested values : Right ∈ {0, 1, 2, 3, 4, 5, 7, 10, 20, 30, 50}
Typical range of values : 0 ≤ Right (lin)
Minimum Increment : 1
Recommended Increment : 1
Result
clip_region_rel returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
clip_region_rel is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
smallest_rectangle1, intersection, gen_rectangle1, clip_region
Module
Foundation


connection ( const Hobject Region, Hobject *ConnectedRegions )

T_connection ( const Hobject Region, Hobject *ConnectedRegions )

Compute connected components of a region.

connection determines the connected components of the input regions given in Region. The neighborhood
used for this can be set via set_system(’neighborhood’,<4/8>). The default is 8-neighborhood, which
is useful for determining the connected components of the foreground. The maximum number of connected com-
ponents that is returned by connection can be set via set_system(’max_connection’,<Num>). The
default value of 0 causes all connected components to be returned. The inverse operator of connection is
union1.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Input region.
. ConnectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Connected components.
Example

read_image(&Image,"affe");
set_colored(WindowHandle,12);
threshold(Image,&Light,150.0,255.0);
count_obj(Light,&Number1);
printf("Nummber of regions after threshold = %d\n",Number1);
disp_region(Light,WindowHandle);
connection(Light,&Many);
count_obj(Many,&Number2);
printf("Nummber of regions after threshold = %d\n",Number2);
disp_region(Many,WindowHandle);

Complexity
Let F be the area of the input region and N be the number of generated connected components. Then the runtime complexity is O(√F ∗ √N).
Result
connection always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty in-
put region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling
is raised.
Parallelization Information
connection is reentrant and processed without parallelization.
Possible Predecessors
auto_threshold, threshold, dyn_threshold, erosion1
Possible Successors
select_shape, select_gray, shape_trans, set_colored, dilation1, count_obj,
reduce_domain, add_channels
Alternatives
background_seg
See also
set_system, union1
Module
Foundation


distance_transform ( const Hobject Region, Hobject *DistanceImage,
                     const char *Metric, const char *Foreground, Hlong Width,
                     Hlong Height )

T_distance_transform ( const Hobject Region, Hobject *DistanceImage,
                       const Htuple Metric, const Htuple Foreground, const Htuple Width,
                       const Htuple Height )

Compute the distance transformation of a region.

distance_transform computes for every point of the input region Region (or its complement, respectively)
the distance of the point to the border of the region. The parameter Foreground determines whether the dis-
tances are calculated for all points within the region (Foreground = ’true’) or for all points outside the region
(Foreground = ’false’). The distance is computed for every point of the output image DistanceImage,
which has the specified dimensions Width and Height. The input region is always clipped to the extent of
the output image. If it is important that the distances within the entire region should be computed, the region
should be moved (see move_region) so that it has only positive coordinates and the width and height of the
output image should be large enough to contain the region. The extent of the input region can be obtained with
smallest_rectangle1.
The parameter Metric determines which metric is used for the calculation of the distances. If Metric = ’city-
block’, the distance is calculated from the shortest path from the point to the border of the region, where only
horizontal and vertical “movements” are allowed. They are weighted with a distance of 1. If Metric = ’chess-
board’, the distance is calculated from the shortest path to the border, where horizontal, vertical, and diagonal
“movements” are allowed. They are weighted with a distance of 1. If Metric = ’octagonal’, a combination
of these approaches is used, which leads to diagonal paths getting a higher weight. If Metric = ’chamfer-3-4’,
horizontal and vertical movements are weighted with a weight of 3, while diagonal movements are weighted with a
weight of 4. To normalize the distances, the resulting distance image is divided by 3. Since this normalization step
takes some time, and one usually is interested in the relative distances of the points, the normalization can be sup-
pressed with Metric = ’chamfer-3-4-unnormalized’. Finally, if Metric = ’euclidean’, the computed distance is
approximately Euclidean.
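As a sketch of the workflow described above (image name and threshold are illustrative), the region is first moved so that it has only positive coordinates and the output image is chosen just large enough to contain it:

Hobject  Image, Region, RegionMoved, DistanceImage;
Hlong    Row1, Col1, Row2, Col2;

read_image(&Image,"fabrik");
threshold(Image,&Region,100.0,255.0);
/* Determine the extent of the region and move it to the origin. */
smallest_rectangle1(Region,&Row1,&Col1,&Row2,&Col2);
move_region(Region,&RegionMoved,-Row1,-Col1);
/* Compute the distances for the entire interior of the region. */
distance_transform(RegionMoved,&DistanceImage,"city-block","true",
                   Col2-Col1+1,Row2-Row1+1);
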
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region for which the distance to the border is computed.
. DistanceImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : int4
Image containing the distance information.
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of metric to be used for the distance transformation.
Default Value : "city-block"
List of values : Metric ∈ {"city-block", "chessboard", "octagonal", "chamfer-3-4",
"chamfer-3-4-unnormalized", "euclidean"}
. Foreground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Compute the distance for pixels inside (true) or outside (false) the input region.
Default Value : "true"
List of values : Foreground ∈ {"true", "false"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the output image.
Default Value : 640
Suggested values : Width ∈ {160, 192, 320, 384, 640, 768}
Typical range of values : 1 ≤ Width
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the output image.
Default Value : 480
Suggested values : Height ∈ {120, 144, 240, 288, 480, 576}
Typical range of values : 1 ≤ Height
Example (Syntax: HDevelop)

/* Step towards extracting the medial axis of a shape: */


gen_rectangle1 (Rectangle1, 0, 0, 200, 400)


gen_rectangle1 (Rectangle2, 200, 0, 400, 200)
union2 (Rectangle1, Rectangle2, Shape)
distance_transform (Shape, DistanceImage, ’chessboard’, ’true’, 640, 480)

Complexity
The runtime complexity is O(Width ∗ Height).
Result
distance_transform returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
distance_transform is reentrant and processed without parallelization.
Possible Predecessors
threshold, dyn_threshold, regiongrowing
Possible Successors
threshold
See also
skeleton
References
P. Soille: “Morphological Image Analysis, Principles and Applications”; Springer Verlag Berlin Heidelberg New
York, 1999.
G. Borgefors: “Distance Transformations in Arbitrary Dimensions”; Computer Vision, Graphics, and Image Pro-
cessing, Vol. 27, pages 321–345, 1984.
P.E. Danielsson: “Euclidean Distance Mapping”; Computer Graphics and Image Processing, Vol. 14, pages 227–
248, 1980.
Module
Foundation

eliminate_runs ( const Hobject Region, Hobject *RegionClipped,
                 Hlong ElimShorter, Hlong ElimLonger )

T_eliminate_runs ( const Hobject Region, Hobject *RegionClipped,
                   const Htuple ElimShorter, const Htuple ElimLonger )

Eliminate runs of a given length.

eliminate_runs eliminates all runs of the run length encoding of the input regions which are shorter than ElimShorter or longer than ElimLonger.
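A possible HALCON/C call sequence (image name, threshold, and run length limits are illustrative):

Hobject  Image, Regions, RegionsClipped;

read_image(&Image,"fabrik");
threshold(Image,&Regions,128.0,255.0);
/* Discard all runs shorter than 5 or longer than 100 pixels. */
eliminate_runs(Regions,&RegionsClipped,5,100);
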
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be clipped.
. RegionClipped (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Clipped regions.
. ElimShorter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
All runs which are shorter are eliminated.
Default Value : 3
Suggested values : ElimShorter ∈ {2, 3, 4, 5, 6, 8, 10, 12, 15}
Typical range of values : 1 ≤ ElimShorter ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1


. ElimLonger (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


All runs which are longer are eliminated.
Default Value : 1000
Suggested values : ElimLonger ∈ {50, 100, 200, 500, 1000, 2000}
Typical range of values : 1 ≤ ElimLonger ≤ 10000 (lin)
Minimum Increment : 1
Recommended Increment : 10
Result
eliminate_runs returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
eliminate_runs is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
erosion1, dilation1, disp_region
Alternatives
shape_trans
Module
Foundation

expand_region ( const Hobject Regions, const Hobject ForbiddenArea,
                Hobject *RegionExpanded, Hlong Iterations, const char *Mode )

T_expand_region ( const Hobject Regions, const Hobject ForbiddenArea,
                  Hobject *RegionExpanded, const Htuple Iterations, const Htuple Mode )

Fill gaps between regions or split overlapping regions.

expand_region closes gaps between the input regions, which may have resulted, for example, from the suppression of small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses are based on the expansion of regions. The operator works by adding or removing a one pixel wide “strip” to or from a region.
The expansion takes place only in regions that are designated as not “forbidden” (parameter ForbiddenArea).
The number of iterations is determined by the parameter Iterations. By passing ’maximal’,
expand_region iterates until convergence, i.e., until no more changes occur. By passing 0 for this parame-
ter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) are different in
the following ways:

’image’ The input regions are expanded iteratively until they touch another region or the image border. In this
case, the image border is defined to be the rectangle ranging from (0,0) to (row_max,col_max). Here,
(row_max,col_max) corresponds to the lower right corner of the smallest surrounding rectangle of all input re-
gions (i.e., of all regions that are passed in Regions and ForbiddenArea). Because expand_region
processes all regions simultaneously, gaps between regions are distributed evenly to all regions. Overlapping
regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to the respective regions. Because the intersection with the original region is
computed after the shrinking operation, gaps in the output regions may result, i.e., the segmentation is not
complete. This can be prevented by calling expand_region a second time with the complement of the
original regions as “forbidden area.”


Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the gaps are to be closed, or which are to be separated.
. ForbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Regions in which no expansion takes place.
. RegionExpanded (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Expanded or separated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations.
Default Value : "maximal"
Suggested values : Iterations ∈ {"maximal", 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 50, 70, 100, 200}
Typical range of values : 0 ≤ Iterations ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Expansion mode.
Default Value : "image"
List of values : Mode ∈ {"image", "region"}
Example

read_image(&Image,"fabrik");
threshold(Image,&Light,100.0,255.0);
disp_region(Light,WindowHandle);
connection(Light,&Seg);
expand_region(Seg,EMPTY_REGION,&Exp1,"maximal","image");
set_colored(WindowHandle,12);
set_draw(WindowHandle,"margin");
disp_region(Exp1,WindowHandle);

Result
expand_region always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an ex-
ception handling is raised.
Parallelization Information
expand_region is reentrant and processed without parallelization.
Possible Predecessors
pouring, threshold, dyn_threshold, regiongrowing
Alternatives
dilation1
See also
expand_gray, interjacent, skeleton
Module
Foundation

fill_up ( const Hobject Region, Hobject *RegionFillUp )

T_fill_up ( const Hobject Region, Hobject *RegionFillUp )

Fill up holes in regions.

fill_up fills up holes in regions. The number of regions remains unchanged. The neighborhood type is set via
set_system(’neighborhood’,<4/8>) (default: 8-neighborhood).
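A typical HALCON/C usage (image name and threshold are illustrative):

Hobject  Image, Dark, ConnectedRegions, FilledRegions;

read_image(&Image,"affe");
threshold(Image,&Dark,0.0,120.0);
connection(Dark,&ConnectedRegions);
/* Each connected component keeps its identity; only its holes are closed. */
fill_up(ConnectedRegions,&FilledRegions);
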


Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions containing holes.
. RegionFillUp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions without holes.
Result
fill_up returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
fill_up is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
fill_up_shape
See also
boundary
Module
Foundation

fill_up_shape ( const Hobject Region, Hobject *RegionFillUp,
                const char *Feature, double Min, double Max )

T_fill_up_shape ( const Hobject Region, Hobject *RegionFillUp,
                  const Htuple Feature, const Htuple Min, const Htuple Max )

Fill up holes in regions having given shape features.

fill_up_shape fills up those holes in the input region Region having given shape features. The parameter
Feature determines the shape feature to be used, while Min and Max determine the range the shape feature has
to lie in in order for the hole to be filled up.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input region(s).
. RegionFillUp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Output region(s) with filled holes.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape feature used.
Default Value : "area"
List of values : Feature ∈ {"area", "compactness", "convexity", "anisometry", "phi", "ra", "rb",
"inner_circle", "outer_circle"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum value for Feature.
Default Value : 1.0
Suggested values : Min ∈ {0.0, 1.0, 10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0}
Typical range of values : 0.0 ≤ Min
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Maximum value for Feature.
Default Value : 100.0
Suggested values : Max ∈ {10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0, 100000.0}
Typical range of values : 0.0 ≤ Max


Example

read_image(&Image,"affe");
threshold(Image,&Seg,120.0,255.0);
fill_up_shape(Seg,&Filled,"area",0.0,200.0);

Result
fill_up_shape returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
fill_up_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
fill_up
See also
select_shape, connection, area_center
Module
Foundation

hamming_change_region ( const Hobject InputRegion,
                        Hobject *OutputRegion, Hlong Width, Hlong Height, Hlong Distance )

T_hamming_change_region ( const Hobject InputRegion,
                          Hobject *OutputRegion, const Htuple Width, const Htuple Height,
                          const Htuple Distance )

Generate a region having a given Hamming distance.

hamming_change_region changes the region in the left upper part of the image given by Width and Height
such that the resulting regions have a Hamming distance of Distance to the input regions. This is done by adding
or removing Distance points from the input region.
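For example, the following sketch (all values are illustrative) disturbs a synthetic test region by 500 pixels within the upper left 256 x 256 area, e.g., to test the robustness of a subsequent processing step:

Hobject  Circle, NoisyRegion;

gen_circle(&Circle,128.0,128.0,64.0);
/* Add or remove 500 pixels inside the upper left 256 x 256 rectangle. */
hamming_change_region(Circle,&NoisyRegion,256,256,500);
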
Attention
If Width and Height are chosen too large the resulting region requires a lot of memory.
Parameter

. InputRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be modified.
. OutputRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions having the required Hamming distance.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the region to be changed.
Default Value : 100
Suggested values : Width ∈ {64, 128, 256, 512}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width > 0


. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong


Height of the region to be changed.
Default Value : 100
Suggested values : Height ∈ {64, 128, 256, 512}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height > 0
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Hamming distance between the old and new regions.
Default Value : 1000
Suggested values : Distance ∈ {100, 500, 1000, 5000, 10000}
Typical range of values : 0 ≤ Distance ≤ 10000 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (Distance ≥ 0) ∧ (Distance < (Width · Height))
Complexity
Memory requirement of the generated region (worst case): O(2 ∗ Width ∗ Height).
Result
hamming_change_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
input (no regions given) can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
hamming_change_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
hamming_distance
Module
Foundation

interjacent ( const Hobject Region, Hobject *RegionInterjacent,
              const char *Mode )

T_interjacent ( const Hobject Region, Hobject *RegionInterjacent,
                const Htuple Mode )

Partition the image plane using given regions.

interjacent partitions the image plane using the regions given in Region. The result is a region containing
the extracted separating lines. The following modes of operation can be used:

’medial_axis’ This mode is used for regions that do not touch or overlap. The operator will find separating lines
between the regions which partition the background evenly between the input regions. This corresponds to
the following calls:
    complement(’full’,Region,Tmp)
    skeleton(Tmp,Result)
’border’ If the input regions do not touch or overlap this mode is equivalent to boundary(Region,Result),
i.e., it replaces each region by its boundary. If regions are touching they are aggregated into one region. The
corresponding output region then contains the boundary of the aggregated region, as well as the one pixel
wide separating line between the original regions. This corresponds to the following calls:
    boundary(Region,Tmp1,’inner’)
    union1(Tmp1,Tmp2)
    skeleton(Tmp2,Result)


’mixed’ In this mode the operator behaves like the mode ’medial_axis’ for non-overlapping regions. If regions
touch or overlap, again separating lines between the input regions are generated on output, but this time
including the “touching line” between regions, i.e., touching regions are separated by a line in the output
region. This corresponds to the following calls:
    erosion1(Region,Mask,Tmp1,1)
    union1(Tmp1,Tmp2)
    complement(full,Tmp2,Tmp3)
    skeleton(Tmp3,Result)
where Mask denotes the following “cross mask”:

  ×
× × ×
  ×

Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions for which the separating lines are to be determined.
. RegionInterjacent (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Output region containing the separating lines.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of operation.
Default Value : "mixed"
List of values : Mode ∈ {"medial_axis", "border", "mixed"}
Example

read_image(&Image,"wald1_rot") ;
mean(Image,&Mean,31,31) ;
dyn_threshold(Mean,&Seg,20) ;
interjacent(Seg,&Graph,"medial_axis") ;
disp_region(Graph,WindowHandle) ;

Result
interjacent always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception
handling is raised.
Parallelization Information
interjacent is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
expand_region, junctions_skeleton, boundary
Module
Foundation

junctions_skeleton ( const Hobject Region, Hobject *EndPoints,
                     Hobject *JuncPoints )

T_junctions_skeleton ( const Hobject Region, Hobject *EndPoints,
                       Hobject *JuncPoints )

Find junctions and end points in a skeleton.


junctions_skeleton detects junctions and end points in a skeleton (see skeleton). The junctions in
the input region Region are output as a region in JuncPoints, while the end points are output as a region in
EndPoints.
To obtain reasonable results with junctions_skeleton the input region Region must not contain lines
which are more than one pixel wide. Regions obtained by skeleton meet this condition, while regions obtained
by morph_skeleton do not meet this condition in general.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Input skeletons.
. EndPoints (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Extracted end points.
Number of elements : EndPoints = Region
. JuncPoints (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Extracted junctions.
Number of elements : JuncPoints = Region
Example

/* non-connected branches of a skeleton */


skeleton(Region,&Skeleton) ;
junctions_skeleton(Skeleton,&EPoints,&JPoints) ;
difference(Skeleton,JPoints,&Rows) ;
connection(Rows,&Parts) ;

Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
junctions_skeleton always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of
an empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
junctions_skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
skeleton
Possible Successors
area_center, connection, get_region_points, difference
See also
pruning, split_skeleton_region
Module
Foundation

merge_regions_line_scan ( const Hobject CurrRegions,
                          const Hobject PrevRegions, Hobject *CurrMergedRegions,
                          Hobject *PrevMergedRegions, Hlong ImageHeight,
                          const char *MergeBorder, Hlong MaxImagesRegion )

T_merge_regions_line_scan ( const Hobject CurrRegions,
                            const Hobject PrevRegions, Hobject *CurrMergedRegions,
                            Hobject *PrevMergedRegions, const Htuple ImageHeight,
                            const Htuple MergeBorder, const Htuple MaxImagesRegion )

Merge regions from line scan images.


The operator merge_regions_line_scan connects adjacent regions, which were segmented from adjacent images with the height ImageHeight. This operator was especially designed to process regions that were
extracted from images grabbed by a line scan camera. CurrRegions contains the regions from the current image
and PrevRegions the regions from the previous one.
With the help of the parameter MergeBorder two cases can be distinguished: If the top (first) line of the current
image touches the bottom (last) line of the previous image, MergeBorder must be set to ’top’, otherwise set
MergeBorder to ’bottom’.
If the operator merge_regions_line_scan is used recursively, the parameter MaxImagesRegion determines the maximum number of images which are covered by a merged region. All older region parts are removed.
The operator merge_regions_line_scan returns two region arrays. PrevMergedRegions contains
all those regions from the previous input regions PrevRegions, which could not be merged with a current
region. CurrMergedRegions collects all current regions together with the merged parts from the previ-
ous images. Merged regions will exceed the original image, because the previous regions are moved upward
(MergeBorder=’top’) or downward (MergeBorder=’bottom’) according to the image height. For this, the system parameter ’clip_region’ (see also set_system) will internally be set to ’false’.
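A sketch of the typical recursive use in HALCON/C follows; the image names, the threshold, and the assumption that an empty object created with gen_empty_obj may be passed as PrevRegions for the first call are illustrative and not prescribed by this manual:

Hobject  Image1, Image2, Regions1, Regions2;
Hobject  PrevMerged, CurrMerged, Finished;

gen_empty_obj(&PrevMerged);

/* first line scan image */
read_image(&Image1,"line_scan_1");
threshold(Image1,&Regions1,128.0,255.0);
merge_regions_line_scan(Regions1,PrevMerged,&CurrMerged,&Finished,
                        512,"top",3);
PrevMerged = CurrMerged;   /* carry the merged parts to the next iteration */

/* second line scan image: regions touching the previous image are merged */
read_image(&Image2,"line_scan_2");
threshold(Image2,&Regions2,128.0,255.0);
merge_regions_line_scan(Regions2,PrevMerged,&CurrMerged,&Finished,
                        512,"top",3);
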
Parameter

. CurrRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Current input regions.
. PrevRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Merged regions from the previous iteration.
. CurrMergedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Current regions, merged with old ones where applicable.
. PrevMergedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions from the previous iteration which could not be merged with the current ones.
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the line scan images.
Default Value : 512
List of values : ImageHeight ∈ {240, 480, 512}
. MergeBorder (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Image line of the current image, which touches the previous image.
Default Value : "top"
List of values : MergeBorder ∈ {"top", "bottom"}
. MaxImagesRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum number of images for a single region.
Default Value : 3
Suggested values : MaxImagesRegion ∈ {1, 2, 3, 4, 5}
Result
The operator merge_regions_line_scan returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
merge_regions_line_scan is reentrant and processed without parallelization.
Module
Foundation

partition_dynamic ( const Hobject Region, Hobject *Partitioned,
                    double Distance, double Percent )

T_partition_dynamic ( const Hobject Region, Hobject *Partitioned,
                      const Htuple Distance, const Htuple Percent )

Partition a region horizontally at positions of small vertical extent.

partition_dynamic partitions the input Region horizontally into regions that have an approximate width of Distance. The input region is split at positions where it has a relatively small vertical extent.


The positions where the input region is split are determined by the following approach: First, initial split positions
are determined such that they are equally distributed over the horizontal extent of the input region, i.e., such that all
the resulting parts would have the same width. For this, the number n of resulting parts is determined by dividing
the width of the input region by Distance and rounding the result to the closest integer value. The distance
between the initial split positions is now calculated by dividing the width of the input region by n. Note that the
distance between these initial split positions is typically not identical to Distance. Then, the final split positions
are determined in the neighborhood of the initial split positions such that the input region is split at positions where
it has the least vertical extent within this neighborhood. The maximum deviation of the final split position from
the initial split position is Distance*Percent*0.01.
The resulting regions are returned in Partitioned. Note that the input region is only partitioned if its width is
larger than 1.5 times Distance.
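A small HALCON/C example (image name, threshold, and parameter values are illustrative), splitting a dark text line region into pieces of roughly 25 pixels width:

Hobject  Image, Dark, Partitioned;

read_image(&Image,"letters");
threshold(Image,&Dark,0.0,128.0);
/* Split into parts of approximately 25 pixels width; the split position
   may be shifted by at most 25*0.2 = 5 pixels. */
partition_dynamic(Dark,&Partitioned,25.0,20.0);
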
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be partitioned.
. Partitioned (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Partitioned region.
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Approximate width of the resulting region parts.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Maximum percental shift of the split position.
Default Value : 20
Suggested values : Percent ∈ {0, 10, 20, 30, 40, 50, 70, 90, 100}
Typical range of values : 0 ≤ Percent ≤ 100
Result
partition_dynamic returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
input (no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>), and the behav-
ior in case of an empty result region via set_system(’store_empty_region’,<true/false>). If
necessary, an exception handling is raised.
Parallelization Information
partition_dynamic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection
Alternatives
partition_rectangle
See also
intersection, smallest_rectangle1, shape_trans, clip_region
Module
Foundation

partition_rectangle ( const Hobject Region, Hobject *Partitioned,
                      double Width, double Height )

T_partition_rectangle ( const Hobject Region, Hobject *Partitioned,
                        const Htuple Width, const Htuple Height )

Partition a region into rectangles of equal size.

partition_rectangle partitions the input region into rectangles having an extent of Width times Height. The region is always split into rectangles of equal size. Therefore, Width and Height are adapted to the actual size of the region. If the region is smaller than the given size, it is returned unchanged. A partition is only done if the size of the region is at least 1.5 times the size of the rectangle given by the parameters.
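A minimal HALCON/C fragment (image name and values are illustrative) that tiles a segmented region into rectangles of roughly 100 x 100 pixels:

Hobject  Image, Bright, Partitioned;

read_image(&Image,"fabrik");
threshold(Image,&Bright,128.0,255.0);
/* Tile the region into rectangles of approximately 100 x 100 pixels. */
partition_rectangle(Bright,&Partitioned,100.0,100.0);
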


Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be partitioned.
. Partitioned (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Partitioned region.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Width of the individual rectangles.
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Height of the individual rectangles.
Result
partition_rectangle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
input (no regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>), and the behav-
ior in case of an empty result region via set_system(’store_empty_region’,<true/false>). If
necessary, an exception handling is raised.
Parallelization Information
partition_rectangle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection
Alternatives
partition_dynamic
See also
intersection, smallest_rectangle1, shape_trans, clip_region
Module
Foundation

rank_region ( const Hobject Region, Hobject *RegionCount, Hlong Width,
              Hlong Height, Hlong Number )

T_rank_region ( const Hobject Region, Hobject *RegionCount,
                const Htuple Width, const Htuple Height, const Htuple Number )

Rank operator for regions.

rank_region calculates the binary rank operator. A filter mask of size Height x Width is used. In the process, for each point in the region the number of points of Region lying within the filter mask is counted. If this number is greater than or equal to Number, the current point is added to the output region. If

    Number = (Height ∗ Width) / 2

is chosen, the median operator is obtained.


Attention
For Height and Width only odd values ≥ 3 are valid. If invalid parameters are chosen, they are converted automatically (without raising an exception) to the next larger odd values.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region(s) to be transformed.
. RegionCount (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting region(s).


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong


Width of the filter mask.
Default Value : 15
Suggested values : Width ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 3 ≤ Width ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Width ≥ 3) ∧ odd(Width)
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 15
Suggested values : Height ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 3 ≤ Height ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Height ≥ 3) ∧ odd(Height)
. Number (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum number of points lying within the filter mask.
Default Value : 70
Suggested values : Number ∈ {5, 10, 20, 40, 60, 80, 90, 120, 150, 200}
Typical range of values : 1 ≤ Number ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Number > 0
Example

read_image(&Image,"affe") ;
mean_image(Image,&Mean,5,5) ;
dyn_threshold(Mean,&Points,25) ;
rank_region(Points,&Textur,15,15,30) ;
gen_circle(&Mask,10,10,3) ;
opening1(Textur,Mask,&Seg) ;

Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ∗ 8).
Result
rank_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case
of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
rank_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
Alternatives
closing_rectangle1, expand_region
See also
rank_image, mean_image
Module
Foundation


remove_noise_region ( const Hobject InputRegion,
                      Hobject *OutputRegion, const char *Type )

T_remove_noise_region ( const Hobject InputRegion,
                        Hobject *OutputRegion, const Htuple Type )

Remove noise from a region.

remove_noise_region removes noise from a region. In mode ’n_4’, a structuring element consisting of the
four neighbors of a point is generated. A dilation with this structuring element is performed, and the intersection
of the result and the input region is calculated. Thus all pixels having no 4-connected neighbor are removed.
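A small HALCON/C example (image name and threshold are illustrative) that removes isolated pixels from a noisy segmentation result:

Hobject  Image, Noisy, Cleaned;

read_image(&Image,"affe");
threshold(Image,&Noisy,120.0,255.0);
/* Keep only pixels that have at least one 4-connected neighbor. */
remove_noise_region(Noisy,&Cleaned,"n_4");
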
Parameter
. InputRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be modified.
. OutputRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Less noisy regions.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of noise removal.
Default Value : "n_4"
List of values : Type ∈ {"n_4", "n_8", "n_48"}
Complexity
Let F be the area of the input region. Then the runtime complexity is O(√F ∗ 4).
Result
remove_noise_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
input (no regions given) can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception handling is raised.
Parallelization Information
remove_noise_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
dilation1, intersection, gen_region_points
Module
Foundation

shape_trans ( const Hobject Region, Hobject *RegionTrans,
              const char *Type )

T_shape_trans ( const Hobject Region, Hobject *RegionTrans,
                const Htuple Type )

Transform the shape of a region.

shape_trans transforms the shape of the input regions depending on the parameter Type:

’convex’ Convex hull.


’ellipse’ Ellipse with the same moments and area as the input region.
’outer_circle’ Smallest enclosing circle.
’inner_circle’ Largest circle fitting into the region.
’rectangle1’ Smallest enclosing rectangle parallel to the coordinate axes.
’rectangle2’ Smallest enclosing rectangle.
’inner_rectangle1’ Largest axis-parallel rectangle fitting into the region.

HALCON/C Reference Manual, 2008-5-13


12.7. TRANSFORMATION 911

’inner_center’ The point on the skeleton of the input region having the smallest distance to the center of gravity
of the input region.

Attention
If Type = ’outer_circle’ is selected it might happen that the resulting circular region does not completely cover the input region. This is because internally the operators smallest_circle and gen_circle are used to compute the outer circle. As described in the documentation of smallest_circle, the calculated radius can be too small by up to 1/√2 − 0.5 pixels. Additionally, the circle that is generated by gen_circle is translated by up to 0.5 pixels in both directions, i.e., by up to 1/√2 pixels. Consequently, when adding up both effects, the original region might protrude beyond the returned circular region by at most 1 pixel.
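A typical HALCON/C use (image name and threshold are illustrative) replacing every connected component by its convex hull:

Hobject  Image, Dark, ConnectedRegions, ConvexRegions;

read_image(&Image,"affe");
threshold(Image,&Dark,0.0,120.0);
connection(Dark,&ConnectedRegions);
/* Replace every connected component by its convex hull. */
shape_trans(ConnectedRegions,&ConvexRegions,"convex");
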
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be transformed.
. RegionTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Transformed regions.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of transformation.
Default Value : "convex"
List of values : Type ∈ {"convex", "ellipse", "outer_circle", "inner_circle", "rectangle1", "rectangle2",
"inner_rectangle1", "inner_center"}
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
shape_trans returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
shape_trans is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, regiongrowing
Possible Successors
disp_region, regiongrowing_mean, area_center
See also
convexity, elliptic_axis, area_center, smallest_rectangle1,
smallest_rectangle2, inner_rectangle1, set_shape, select_shape, inner_circle
Module
Foundation

skeleton ( const Hobject Region, Hobject *Skeleton )

T_skeleton ( const Hobject Region, Hobject *Skeleton )

Compute the skeleton of a region.

skeleton computes the skeleton, i.e., the medial axis of the input regions. The skeleton is constructed in a way
that each point on it can be seen as the center point of a circle with the largest radius possible while still being
completely contained in the region.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Region to be thinned.
. Skeleton (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting skeleton.
Number of elements : Skeleton = Region


Complexity
Let F be the area of the enclosing rectangle of the input region. Then the runtime complexity is O(F ) (per region).
Result
skeleton returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
sobel_amp, edges_image, bandpass_image, threshold, hysteresis_threshold
Possible Successors
junctions_skeleton, pruning
Alternatives
morph_skeleton, thinning
See also
gray_skeleton, sobel_amp, edges_image, roberts, bandpass_image, threshold
References
Eckardt, U. “Verdünnung mit Perfekten Punkten”, Proceedings 10. DAGM-Symposium, IFB 180, Zurich, 1988
Module
Foundation

sort_region ( const Hobject Regions, Hobject *SortedRegions,
              const char *SortMode, const char *Order, const char *RowOrCol )

T_sort_region ( const Hobject Regions, Hobject *SortedRegions,
                const Htuple SortMode, const Htuple Order, const Htuple RowOrCol )

Sorting of regions with respect to their relative position.

The operator sort_region sorts the regions with respect to their relative position. All sorting methods with
the exception of ’character’ use one point of the region. With the help of the parameter RowOrCol = ’row’ these
points will be sorted according to their row and then according to their column. By using ’column’, the column
value will be used first. The following values are available for the parameter SortMode:

’character’ The regions will be treated like characters in a row and will be sorted according to their order in the
line: If two regions overlap horizontally, they will be sorted with respect to their column values, otherwise
they will be sorted with regard to their row values. To be able to sort a line correctly, all regions in the line
must overlap each other vertically. Furthermore, the regions in adjacent rows must not overlap.
’first_point’ The point with the lowest column value in the first row of the region.
’last_point’ The point with the highest column value in the last row of the region.
’upper_left’ Upper left corner of the surrounding rectangle.
’upper_right’ Upper right corner of the surrounding rectangle.
’lower_left’ Lower left corner of the surrounding rectangle.
’lower_right’ Lower right corner of the surrounding rectangle.

The parameter Order determines whether the sorting order is increasing or decreasing: using ’true’ the order will
be increasing, using ’false’ the order will be decreasing.
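A possible HALCON/C call sequence before OCR (image name and threshold are illustrative), bringing the connected components into reading order:

Hobject  Image, Dark, Characters, SortedCharacters;

read_image(&Image,"letters");
threshold(Image,&Dark,0.0,128.0);
connection(Dark,&Characters);
/* Sort like characters in a line: by column within a line, by row between lines. */
sort_region(Characters,&SortedCharacters,"character","true","row");
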
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject


Regions to be sorted.
. SortedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Sorted regions.


. SortMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Kind of sorting.
Default Value : "first_point"
List of values : SortMode ∈ {"character", "first_point", "last_point", "upper_left", "lower_left",
"upper_right", "lower_right"}
. Order (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Increasing or decreasing sorting order.
Default Value : "true"
List of values : Order ∈ {"true", "false"}
. RowOrCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Sorting first with respect to row, then to column.
Default Value : "row"
List of values : RowOrCol ∈ {"row", "column"}
Result
If the parameters are correct, the operator sort_region returns the value H_MSG_TRUE. Otherwise an ex-
ception will be raised.
Parallelization Information
sort_region is reentrant and processed without parallelization.
Possible Successors
do_ocr_multi, do_ocr_single
Module
Foundation

T_split_skeleton_lines ( const Hobject SkeletonRegion,
                         const Htuple MaxDistance, Htuple *BeginRow, Htuple *BeginCol,
                         Htuple *EndRow, Htuple *EndCol )

Split lines represented by one pixel wide, non-branching lines.

split_skeleton_lines splits lines represented by one pixel wide, non-branching regions into shorter lines
based on their curvature. A line is split if the maximum distance of a point on the line to the line segment
connecting its end points is larger than MaxDistance (split & merge algorithm). The start and end points of
the approximating line segments are returned in BeginRow, BeginCol, EndRow, and EndCol.
Attention
The input regions must represent non-branching lines, that is single branches of the skeleton.
Parameter

. SkeletonRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject


Input lines (represented by 1 pixel wide, non-branching regions).
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum distance of the line points to the line segment connecting both end points.
Default Value : 3
Suggested values : MaxDistance ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 1 ≤ MaxDistance ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1
. BeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the start points of the output lines.
. BeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the start points of the output lines.
. EndRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the end points of the output lines.
. EndCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the end points of the output lines.


Example

read_image(&Image,"fabrik");
edges_image (Image, &ImaAmp, &ImaDir, "lanser2", 0.5, "nms", 8, 16);
threshold (ImaAmp, &RawEdges, 8, 255);
skeleton (RawEdges, &Skeleton);
junctions_skeleton (Skeleton, &EndPoints, &JuncPoints);
difference (Skeleton, JuncPoints, &SkelWithoutJunc);
connection (SkelWithoutJunc, &SingleBranches);
select_shape (SingleBranches, &SelectedBranches, "area", "and", 16, 99999);
split_skeleton_lines (SelectedBranches, 3, &BeginRow, &BeginCol, &EndRow,
&EndCol);

Result
split_skeleton_lines always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
split_skeleton_lines is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
select_lines, partition_lines, disp_line
See also
split_skeleton_region, detect_edge_segments
Module
Foundation

split_skeleton_region ( const Hobject SkeletonRegion,
                        Hobject *RegionLines, Hlong MaxDistance )

T_split_skeleton_region ( const Hobject SkeletonRegion,
                          Hobject *RegionLines, const Htuple MaxDistance )

Split lines represented by one pixel wide, non-branching regions.

split_skeleton_region splits lines represented by one pixel wide, non-branching regions into shorter lines
based on their curvature. A line is split if the maximum distance of a point on the line to the line segment connecting
its end points is larger than MaxDistance (split & merge algorithm). However, not the approximating lines are
returned, but rather the original lines split into several output regions.
Attention
The input regions must represent non-branching lines, that is single branches of the skeleton.
Parameter
. SkeletonRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input lines (represented by 1 pixel wide, non-branching regions).
. RegionLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Split lines.
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum distance of the line points to the line segment connecting both end points.
Default Value : 3
Suggested values : MaxDistance ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Typical range of values : 1 ≤ MaxDistance ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1


Example

read_image(&Image,"fabrik");
edges_image (Image, &ImaAmp, &ImaDir, "lanser2", 0.5, "nms", 8, 16);
threshold (ImaAmp, &RawEdges, 8, 255);
skeleton (RawEdges, &Skeleton);
junctions_skeleton (Skeleton, &EndPoints, &JuncPoints);
difference (Skeleton, JuncPoints, &SkelWithoutJunc);
connection (SkelWithoutJunc, &SingleBranches);
select_shape (SingleBranches, &SelectedBranches, "area", "and", 16, 99999);
split_skeleton_region (SelectedBranches, &Lines, 3);

Result
split_skeleton_region always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
split_skeleton_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
count_obj, select_shape, select_obj, area_center, elliptic_axis,
smallest_rectangle2, get_region_polygon, get_region_contour
See also
split_skeleton_lines, get_region_polygon, gen_polygons_xld
Module
Foundation


Chapter 13

Segmentation

13.1 Classification
add_samples_image_class_gmm ( const Hobject Image,
const Hobject ClassRegions, Hlong GMMHandle, double Randomize )

T_add_samples_image_class_gmm ( const Hobject Image,
const Hobject ClassRegions, const Htuple GMMHandle, const Htuple Randomize )

Add training samples from an image to the training data of a Gaussian Mixture Model.
add_samples_image_class_gmm adds training samples from the Image to the Gaussian Mixture
Model (GMM) given by GMMHandle. add_samples_image_class_gmm is used to store the
training samples before a classifier to be used for the pixel classification of multichannel images with
classify_image_class_gmm is trained. add_samples_image_class_gmm works analogously
to add_sample_class_gmm. The Image must have a number of channels equal to NumDim, as spec-
ified with create_class_gmm. The training regions for the NumClasses pixel classes are passed in
ClassRegions. Hence, ClassRegions must be a tuple containing NumClasses regions. The order of
the regions in ClassRegions determines the class of the pixels. If there are no samples for a particular class
in Image an empty region must be passed at the position of the class in ClassRegions. With this mecha-
nism it is possible to use multiple images to add training samples for all relevant classes to the GMM by calling
add_samples_image_class_gmm multiple times with the different images and suitably chosen regions. The
regions in ClassRegions should contain representative training samples for the respective classes. Hence, they
need not cover the entire image. The regions in ClassRegions should not overlap each other, because this
would lead to the fact that in the training data the samples from the overlapping areas would be assigned to multi-
ple classes, which may lead to a lower classification performance. Image data of integer type can be particularly
badly suited for modelling with a GMM. Randomize can be used to overcome this problem, as explained in
add_sample_class_gmm.
Parameter

. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
. Randomize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Standard deviation of the Gaussian noise added to the training data.
Default Value : 0.0
Suggested values : Randomize ∈ {0.0, 1.5, 2.0}
Restriction : Randomize ≥ 0.0

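Example

The following sketch is not part of the original manual; it merely illustrates how the tuple of training regions can be assembled in C. The region and file names are purely illustrative, and the simple C binding of create_class_gmm is assumed here to take a single NumCenters value.

Hobject Image, Board, Cap, Classes;
Hlong   GMMHandle;

read_image(&Image,"ic");
/* one training region per class, in the order of the classes */
gen_rectangle1(&Board,80,320,110,350);
gen_rectangle1(&Cap,359,263,371,302);
concat_obj(Board,Cap,&Classes);
/* 3 channels, 2 classes, 1 center per class */
create_class_gmm(3,2,1,"full","none",0,42,&GMMHandle);
/* Randomize = 1.5: add Gaussian noise to the integer image data */
add_samples_image_class_gmm(Image,Classes,GMMHandle,1.5);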

Result
If the parameters are valid, the operator add_samples_image_class_gmm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm
See also
classify_image_class_gmm, add_sample_class_gmm, clear_samples_class_gmm,
get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation

add_samples_image_class_mlp ( const Hobject Image,
const Hobject ClassRegions, Hlong MLPHandle )

T_add_samples_image_class_mlp ( const Hobject Image,
const Hobject ClassRegions, const Htuple MLPHandle )

Add training samples from an image to the training data of a multilayer perceptron.
add_samples_image_class_mlp adds training samples from the image Image to the multilayer per-
ceptron (MLP) given by MLPHandle. add_samples_image_class_mlp is used to store the
training samples before a classifier to be used for the pixel classification of multichannel images with
classify_image_class_mlp is trained. add_samples_image_class_mlp works analogously to
add_sample_class_mlp. Because here the MLP is always used for classification, OutputFunction =
’softmax’ must be specified when the MLP is created with create_class_mlp. The image Image must have
a number of channels equal to NumInput, as specified with create_class_mlp. The training regions for
the NumOutput pixel classes are passed in ClassRegions. Hence, ClassRegions must be a tuple con-
taining NumOutput regions. The order of the regions in ClassRegions determines the class of the pixels. If
there are no samples for a particular class in Image an empty region must be passed at the position of the class
in ClassRegions. With this mechanism it is possible to use multiple images to add training samples for all
relevant classes to the MLP by calling add_samples_image_class_mlp multiple times with the different
images and suitably chosen regions. The regions in ClassRegions should contain representative training sam-
ples for the respective classes. Hence, they need not cover the entire image. The regions in ClassRegions
should not overlap each other, because this would lead to the fact that in the training data the samples from the
overlapping areas would be assigned to multiple classes, which may lead to slower convergence of the training and
a lower classification performance.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
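Example

The following sketch is not part of the original manual; it shows how an empty region keeps the position of a class for which a particular image contains no samples. All names are illustrative, and MLPHandle is assumed to stem from a previous call to create_class_mlp with NumOutput = 3.

Hobject Image2, Reg0, Reg1, Empty, Tmp, Classes2;

read_image(&Image2,"ic_second_view");
gen_rectangle1(&Reg0,10,10,60,60);
gen_rectangle1(&Reg1,100,10,150,60);
gen_empty_region(&Empty);          /* no samples for the third class here */
concat_obj(Reg0,Reg1,&Tmp);
concat_obj(Tmp,Empty,&Classes2);   /* tuple of NumOutput = 3 regions      */
add_samples_image_class_mlp(Image2,Classes2,MLPHandle);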
Result
If the parameters are valid, the operator add_samples_image_class_mlp returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_mlp is processed completely exclusively without parallelization.

Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
classify_image_class_mlp, add_sample_class_mlp, clear_samples_class_mlp,
get_sample_num_class_mlp, get_sample_class_mlp, add_samples_image_class_svm
Module
Foundation

add_samples_image_class_svm ( const Hobject Image,
const Hobject ClassRegions, Hlong SVMHandle )

T_add_samples_image_class_svm ( const Hobject Image,
const Hobject ClassRegions, const Htuple SVMHandle )

Add training samples from an image to the training data of a support vector machine.
add_samples_image_class_svm adds training samples from the image Image to the support vec-
tor machine (SVM) given by SVMHandle. add_samples_image_class_svm is used to store
the training samples before training a classifier for the pixel classification of multichannel images
with classify_image_class_svm. add_samples_image_class_svm works analogously to
add_sample_class_svm.
The image Image must have a number of channels equal to NumFeatures, as specified with
create_class_svm. The training regions for the NumClasses pixel classes are passed in ClassRegions.
Hence, ClassRegions must be a tuple containing NumClasses regions. The order of the regions in
ClassRegions determines the class of the pixels. If there are no samples for a particular class in Image,
an empty region must be passed at the position of the class in ClassRegions. With this mechanism it
is possible to use multiple images to add training samples for all relevant classes to the SVM by calling
add_samples_image_class_svm multiple times with the different images and suitably chosen regions.
The regions in ClassRegions should contain representative training samples for the respective classes. Hence,
they need not cover the entire image. The regions in ClassRegions should not overlap each other, because
this would lead to the fact that in the training data the samples from the overlapping areas would be assigned to
multiple classes, which may lead to slower convergence of the training and a lower classification performance.
A further application of this operator is the automatic novelty detection, where, e.g., anomalies in color or texture
can be detected. For this mode a training set that defines a sample region (e.g., skin regions for skin detection or
samples of the correct texture) is passed to the SVMHandle, which is created in the Mode ’novelty-detection’.
After training, regions that differ from the trained sample regions are detected (e.g., the rejection class for skin or
errors in texture).
Parameter

. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Result
If the parameters are valid, add_samples_image_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
add_samples_image_class_svm is processed completely exclusively without parallelization.

Possible Predecessors
create_class_svm
Possible Successors
train_class_svm, write_samples_class_svm
Alternatives
read_samples_class_svm
See also
classify_image_class_svm, add_sample_class_svm, clear_samples_class_svm,
get_sample_num_class_svm, get_sample_class_svm, add_samples_image_class_mlp
Module
Foundation

class_2dim_sup ( const Hobject ImageCol, const Hobject ImageRow,
const Hobject FeatureSpace, Hobject *RegionClass2Dim )

T_class_2dim_sup ( const Hobject ImageCol, const Hobject ImageRow,
const Hobject FeatureSpace, Hobject *RegionClass2Dim )

Segment an image using two-dimensional pixel classification.


class_2dim_sup classifies the points in two-channel images using a two-dimensional feature space. For each
point, two gray values (one from each image) are used as features. The feature space is represented by the input
region. The classification is done as follows:
A point from the input region of an image is accepted if the point (g_r, g_c), which is determined by the respective gray values, is contained in the region FeatureSpace. Here, g_r is a gray value from the image ImageRow, while g_c is the corresponding gray value from ImageCol.
Let P be a point with the coordinates P = (R, C), g_r be the gray value at position (R, C) in the image ImageRow, and g_c be the gray value at position (R, C) in the image ImageCol. Then the point P is aggregated into the output region if

(g_r, g_c) ∈ FeatureSpace

where g_r is interpreted as the row coordinate and g_c as the column coordinate.


For the generation of FeatureSpace, see histo_2dim. The feature space can be modified by applying region
transformation operators, such as rank_region, dilation1, shape_trans, elliptic_axis, etc.,
before calling class_2dim_sup.
The parameters ImageCol and ImageRow must contain an equal number of images with the same size. The
image points are taken from the intersection of the domains of both images (see reduce_domain).
Parameter

. ImageCol (input_object) . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1
Input image (first channel).
. ImageRow (input_object) . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1
Input image (second channel).
. FeatureSpace (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region defining the feature space.
. RegionClass2Dim (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Classified regions.
Example

read_image(&Image,"combine");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image,WindowHandle);
fwrite_string("draw region of interest with the mouse");

fnew_line();
set_color(WindowHandle,"green");
draw_region(&Testreg,WindowHandle);
/* Texture transformation for 2-dimensional characteristics */
texture_laws(Image,&T1,"el",2,5);
mean_image(T1,&M1,21,21);
clear_obj(T1);
texture_laws(M1,&T2,"es",2,5);
mean_image(T2,&M2,21,21);
clear_obj(T2);
/* 2-dimensional histogram of the test region */
histo_2dim(Testreg,M1,M2,&Histo);
/* All points occurring at least once */
threshold(Histo,&FeatureSpace,1.0,100000.0);
set_draw(WindowHandle,"fill");
set_color(WindowHandle,"red");
disp_region(FeatureSpace,WindowHandle);
fwrite_string("Characteristics area in red");
fnew_line();
/* Segmentation */
class_2dim_sup(M1,M2,FeatureSpace,&RegionClass2Dim);
set_color(WindowHandle,"blue");
disp_region(RegionClass2Dim,WindowHandle);
fwrite_string("Result of classification in blue");
fnew_line();

Complexity
Let A be the area of the input region. Then the runtime complexity is O(256² + A).
Result
class_2dim_sup returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_sup is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
histo_2dim, threshold, draw_region, dilation1, opening, shape_trans
Possible Successors
connection, select_shape, select_gray
Alternatives
class_ndim_norm, class_ndim_box, threshold
See also
histo_2dim
Module
Foundation

class_2dim_unsup ( const Hobject Image1, const Hobject Image2,
Hobject *Classes, Hlong Threshold, Hlong NumClasses )

T_class_2dim_unsup ( const Hobject Image1, const Hobject Image2,
Hobject *Classes, const Htuple Threshold, const Htuple NumClasses )

Segment two images by clustering.


class_2dim_unsup performs a classification with two single-channel images. First, a two-dimensional his-
togram of the two images is computed ( histo_2dim). In this histogram, the first maximum is extracted; it
serves as the first cluster center. The histogram is computed with the intersection of the domains of both images
(see reduce_domain). After this, all pixels in the images that are at most Threshold pixels from the cluster
center in the maximum norm, are determined. These pixels form one output region. Next, the pixels thus classified
are deleted from the histogram so that they are not taken into account for the next class. In this modified histogram,
again the maximum is extracted; it again serves as a cluster center. The above steps are repeated NumClasses
times; thus, NumClasses output regions result. Only pixels defined in both images are returned.
Attention
Both input images must have the same size.
Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
First input image.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Second input image.
. Classes (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Classification result.
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Threshold (maximum distance to the cluster’s center).
Default Value : 15
Suggested values : Threshold ∈ {0, 2, 5, 8, 12, 17, 20, 30, 50, 70}
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of classes (cluster centers).
Default Value : 5
Suggested values : NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 40, 50}
Example

read_image(&ColorImage,"patras");
decompose3(ColorImage,&Red,&Green,&Blue);
class_2dim_unsup(Red,Green,&Seg,15,5);
set_colored(WindowHandle,12);
disp_region(Seg,WindowHandle);

Result
class_2dim_unsup returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_unsup is reentrant and processed without parallelization.
Possible Predecessors
decompose2, decompose3, median_image, anisotropic_diffusion, reduce_domain
Possible Successors
select_shape, select_gray, connection
Alternatives
threshold, histo_2dim, class_2dim_sup, class_ndim_norm, class_ndim_box
Module
Foundation

class_ndim_box ( const Hobject MultiChannelImage, Hobject *Regions,
Hlong ClassifHandle )

T_class_ndim_box ( const Hobject MultiChannelImage, Hobject *Regions,
const Htuple ClassifHandle )

Classify pixels using hyper-cuboids.


class_ndim_box classifies the pixels of the multi-channel image given in MultiChannelImage. To do so,
the classifier ClassifHandle created with create_class_box is used. The classifier can be trained
using learn_ndim_box or as described with create_class_box. More information on the structure of the classifier can also be found under that operator.
MultiChannelImage is a multi-channel image. Its pixel values are used for the classification.
Parameter

. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / int4 / real
Multi-channel input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Classification result.
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classifier handle.
Example

read_image(&Image,"meer");
disp_image(Image,WindowHandle);
set_color(WindowHandle,"green");
fwrite_string("Draw the foreground");
fnew_line();
draw_region(&Reg1,WindowHandle);
reduce_domain(Image,Reg1,&Foreground);
set_color(WindowHandle,"red");
fwrite_string("Draw background");
fnew_line();
draw_region(&Reg2,WindowHandle);
reduce_domain(Image,Reg2,&Background);
fwrite_string("Start to learn");
fnew_line();
create_class_box(&ClassifHandle);
learn_ndim_box(Foreground,Background,Image,ClassifHandle);
fwrite_string("start classification");
fnew_line();
class_ndim_box(Image,&Res,ClassifHandle);
set_draw(WindowHandle,"fill");
disp_region(Res,WindowHandle);
close_class_box(ClassifHandle);

Complexity
Let N be the number of hyper-cuboids and A be the area of the input region. Then the runtime complexity is O(N ∗ A).
Result
class_ndim_box returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, median_image, compose2, compose3, compose4,
compose5, compose6, compose7
Alternatives
class_ndim_norm, class_2dim_sup, class_2dim_unsup
See also
descript_class_box
Module
Foundation


class_ndim_norm ( const Hobject MultiChannelImage, Hobject *Regions,
const char *Metric, const char *SingleMultiple, double Radius,
double Center )

T_class_ndim_norm ( const Hobject MultiChannelImage, Hobject *Regions,
const Htuple Metric, const Htuple SingleMultiple, const Htuple Radius,
const Htuple Center )

Classify pixels using hyper-spheres or hyper-cubes.


class_ndim_norm classifies the pixels of the multi-channel image given in MultiChannelImage. The
result is returned in Regions as one region per classification object. The metric used (’euclid’ or ’maximum’)
is determined by Metric. This parameter must be set to the same value used in learn_ndim_norm. The
parameter SingleMultiple determines whether one region (’single’) or multiple regions (’multiple’) are generated for each cluster. Radius determines the radii or half edge lengths of the clusters, respectively. Center
determines their centers.
Parameter
. MultiChannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte
Multi channel input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Classification result.
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Metric to be used.
Default Value : "euclid"
List of values : Metric ∈ {"euclid", "maximum"}
. SingleMultiple (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Return one region or one region for each cluster.
Default Value : "single"
List of values : SingleMultiple ∈ {"single", "multiple"}
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Cluster radii or half edge lengths (returned by learn_ndim_norm).
. Center (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Coordinates of the cluster centers (returned by learn_ndim_norm).
Example

read_image(&Image,"meer");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image,WindowHandle);
fwrite_string("draw region of interest with the mouse");
fnew_line();
set_color(WindowHandle,"green");
draw_region(&Testreg,WindowHandle);
/* Texture transformation for 3-dimensional characteristics */
texture_laws(Image,&T1,"el",2,5);
mean_image(T1,&M1,21,21);
texture_laws(Image,&T2,"es",2,5);
mean_image(T2,&M2,21,21);
texture_laws(Image,&T3,"le",2,5);
mean_image(T3,&M3,21,21);
compose3(M1,M2,M3,&M);
/* Determine clusters in the 3-dimensional feature space from the training region */
create_tuple(&Metric,1);
set_s(Metric,"euclid",0);
create_tuple(&Radius,1);
set_d(Radius,20.0,0);
create_tuple(&MinNumber,1);
set_i(MinNumber,5,0);
T_learn_ndim_norm(Testreg,EMPTY_REGION,M,Metric,Radius,MinNumber,
                  &Radius,&Center,&Quality);
/* Segmentation */
create_tuple(&RegionMode,1);
set_s(RegionMode,"multiple",0);
T_class_ndim_norm(M,&Regions,Metric,RegionMode,Radius,Center);
set_colored(WindowHandle,12);
disp_region(Regions,WindowHandle);
fwrite_string("Result of classification;");
fwrite_string("Each cluster in another color.");
fnew_line();

Complexity
Let N be the number of clusters and A be the area of the input region. Then the runtime complexity is O(N ∗ A).
Result
class_ndim_norm returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_norm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
learn_ndim_norm, compose2, compose3, compose4, compose5, compose6, compose7
Possible Successors
connection, select_shape, reduce_domain, select_gray
Alternatives
class_ndim_box, class_2dim_sup, class_2dim_unsup
Module
Foundation

classify_image_class_gmm ( const Hobject Image,
Hobject *ClassRegions, Hlong GMMHandle, double RejectionThreshold )

T_classify_image_class_gmm ( const Hobject Image,
Hobject *ClassRegions, const Htuple GMMHandle,
const Htuple RejectionThreshold )

Classify an image with a Gaussian Mixture Model.


classify_image_class_gmm performs a pixel classification with the Gaussian Mixture Model (GMM)
GMMHandle on the multichannel image Image. Before calling classify_image_class_gmm the
GMM must be trained with train_class_gmm. Image must have NumDim channels, as specified with
create_class_gmm. On output, ClassRegions contains NumClasses regions as the result of the classifi-
cation. The parameter RejectionThreshold can be used to reject pixels that have an uncertain classification.
RejectionThreshold represents a threshold on the K-sigma probability measure returned by the classifi-
cation (see classify_class_gmm and evaluate_class_gmm). All pixels having a probability below
RejectionThreshold are not assigned to any class.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Input image.
. ClassRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented classes.
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.

. RejectionThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Threshold for the rejection of the classification.
Default Value : 0.5
Suggested values : RejectionThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (RejectionThreshold ≥ 0.0) ∧ (RejectionThreshold ≤ 1.0)
Example (Syntax: HDevelop)

read_image (Image, ’ic’)


gen_rectangle1 (Board, 80, 320, 110, 350)
gen_rectangle1 (Cap, 359, 263, 371, 302)
gen_rectangle1 (Resistor, 200, 252, 290, 256)
gen_rectangle1 (IC, 180, 135, 216, 165)
Classes := [Board,Cap]
Classes := [Classes,Resistor]
Classes := [Classes,IC]
create_class_gmm (3, 4, [1,30], ’full’, ’none’,0, 42, GMMHandle)
add_samples_image_class_gmm (Image, Classes, GMMHandle, 1.5)
get_sample_num_class_gmm (GMMHandle, NumSamples)
train_class_gmm (GMMHandle, 150, 1e-4, ’training’, 1e-4, Centers, Iter)
classify_image_class_gmm (Image, ClassRegions, GMMHandle, 0.0001)
clear_class_gmm (GMMHandle)

Result
If the parameters are valid, the operator classify_image_class_gmm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
classify_image_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
See also
add_samples_image_class_gmm, create_class_gmm
Module
Foundation

classify_image_class_mlp ( const Hobject Image,
Hobject *ClassRegions, Hlong MLPHandle, double RejectionThreshold )

T_classify_image_class_mlp ( const Hobject Image,
Hobject *ClassRegions, const Htuple MLPHandle,
const Htuple RejectionThreshold )

Classify an image with a multilayer perceptron.


classify_image_class_mlp performs a pixel classification with the multilayer perceptron (MLP)
MLPHandle on the multichannel image Image. Before calling classify_image_class_mlp the MLP
must be trained with train_class_mlp. Image must have NumInput channels, as specified with
create_class_mlp. On output, ClassRegions contains NumOutput regions as the result of the clas-
sification. The parameter RejectionThreshold can be used to reject pixels that have an uncertain classi-
fication. RejectionThreshold represents a threshold on the probability measure returned by the classifi-
cation (see classify_class_mlp and evaluate_class_mlp). All pixels having a probability below
RejectionThreshold are not assigned to any class. Because an MLP typically assigns pixels that lie outside
the convex hull of the training data in the feature space to one of the classes with high probability (confidence), it
is useful in many cases to explicitly train a rejection class, even if RejectionThreshold is used, by adding
samples for the rejection class with add_samples_image_class_mlp and by re-training the MLP with
train_class_mlp.

Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Input image.
. ClassRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented classes.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
. RejectionThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Threshold for the rejection of the classification.
Default Value : 0.5
Suggested values : RejectionThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (RejectionThreshold ≥ 0.0) ∧ (RejectionThreshold ≤ 1.0)
Example (Syntax: HDevelop)

read_image (Image, ’ic’)


gen_rectangle1 (Board, 80, 320, 110, 350)
gen_rectangle1 (Capacitor, 359, 263, 371, 302)
gen_rectangle1 (Resistor, 200, 252, 290, 256)
gen_rectangle1 (IC, 180, 135, 216, 165)
Classes := [Board,Capacitor]
Classes := [Classes,Resistor]
Classes := [Classes,IC]
create_class_mlp (3, 3, 4, ’softmax’, ’principal_components’, 3, 42,
MLPHandle)
add_samples_image_class_mlp (Image, Classes, MLPHandle)
get_sample_num_class_mlp (MLPHandle, NumSamples)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
classify_image_class_mlp (Image, ClassRegions, MLPHandle, 0.5)
dev_display (ClassRegions)
clear_class_mlp (MLPHandle)

Result
If the parameters are valid, the operator classify_image_class_mlp returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
classify_image_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_image_class_svm, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_mlp, create_class_mlp
Module
Foundation

classify_image_class_svm ( const Hobject Image,
Hobject *ClassRegions, Hlong SVMHandle )

T_classify_image_class_svm ( const Hobject Image,
Hobject *ClassRegions, const Htuple SVMHandle )

Classify an image with a support vector machine.


classify_image_class_svm performs a pixel classification with the support vector machine (SVM)
SVMHandle on the multichannel image Image. Before calling classify_image_class_svm the SVM
must be trained with train_class_svm. Image must have NumFeatures channels, as specified with
create_class_svm. On output, ClassRegions contains NumClasses regions as the result of the classi-
fication.
To prevent that the SVM assigns pixels that lie outside the convex hull of the training data in the feature space to
one of the classes, it is useful in many cases to explicitly train a rejection class by adding samples for the rejection
class with add_samples_image_class_svm and by re-training the SVM with train_class_svm.
An alternative for explicitly defining a rejection class is to use an SVM in the mode ’novelty-detection’. Please
refer to the description in create_class_svm and add_samples_image_class_svm.
Parameter

. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Input image.
. ClassRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented classes.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Example (Syntax: HDevelop)

read_image (Image, ’ic’)


gen_rectangle1 (Board, 20, 270, 160, 420)
gen_rectangle1 (Capacitor, 359, 263, 371, 302)
gen_rectangle1 (Resistor, 200, 252, 290, 256)
gen_rectangle1 (IC, 180, 135, 216, 165)
Classes := [Board,Capacitor]
Classes := [Classes,Resistor]
Classes := [Classes,IC]
create_class_svm (3, ’rbf’, 0.01, 0.01, 4, ’one-versus-all’,
’normalization’, 3, SVMHandle)
add_samples_image_class_svm (Image, Classes, SVMHandle)
train_class_svm (SVMHandle, 0.001, ’default’)
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
classify_image_class_svm (Image, ClassRegions, SVMHandleReduced)
dev_display (ClassRegions)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)

Result
If the parameters are valid, the operator classify_image_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
classify_image_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm, read_class_svm, reduce_class_svm
Alternatives
classify_image_class_mlp, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_svm, create_class_svm
Module
Foundation


learn_ndim_box ( const Hobject Foreground, const Hobject Background,
const Hobject MultiChannelImage, Hlong ClassifHandle )

T_learn_ndim_box ( const Hobject Foreground, const Hobject Background,
const Hobject MultiChannelImage, const Htuple ClassifHandle )

Train a classifier using a multi-channel image.


learn_ndim_box trains the classifier ClassifHandle with the gray values of MultiChannelImage, using the points in Foreground as training samples. The points in Background are to be rejected by the classifier. The classifier trained in this way can be used in class_ndim_box to segment multi-channel images.
Foreground are the points that should be found, Background contains the points that should not be found.
Each pixel is trained once during the training process. For points in Foreground the class “0” is used, while
for Background “1” is used. Pixels are trained by alternating points from Foreground with points from
Background. If one region is smaller than the other, pixels are taken cyclically from the smaller region until the
larger region is exhausted. class_ndim_box later accepts only points that can be classified into class “0”.
From a user’s point of view the key difference between learn_ndim_norm and learn_ndim_box is that
in the latter case the rejection class affects the classification process itself. Here, a hyperplane is generated that separates the Foreground and Background classes, so that no points in feature space are classified incorrectly. As for learn_ndim_norm, however, an overlap between the Foreground and Background classes is allowed. This affects the return value Quality: the larger the overlap, the smaller its value.
Attention
All channels must be of the same type.
Parameter

. Foreground (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Foreground pixels to be trained.
. Background (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Background pixels to be trained (rejection class).
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / int4 / real
Multi-channel training image.
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classifier handle.
Complexity
Let N be the number of generated hyper-cuboids and A be the area of the larger input region. Then the runtime
complexity is O(N ∗ A).
Result
learn_ndim_box returns H_MSG_TRUE if all parameters are correct and there is an active classifier. The
behavior with respect to the input images can be determined by setting the values of the flags ’no_object_result’
and ’empty_region_result’ with set_system. If necessary, an exception is raised.
Parallelization Information
learn_ndim_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, draw_region
Possible Successors
class_ndim_box, descript_class_box
Alternatives
learn_class_box, learn_ndim_norm
Module
Foundation


T_learn_ndim_norm ( const Hobject Foreground, const Hobject Background,
const Hobject Image, const Htuple Metric, const Htuple Distance,
const Htuple MinNumberPercent, Htuple *Radius, Htuple *Center,
Htuple *Quality )

Construct classes for class_ndim_norm.


learn_ndim_norm generates classification clusters from the region Foreground and the corresponding gray
values in the multi-channel image Image, which can be used in class_ndim_norm. Background deter-
mines a class of pixels not to be found in class_ndim_norm. This parameter may be empty (empty object).
The parameter Distance determines the maximum distance Radius of the clusters. It describes the minimum
distance between two cluster centers. If the parameter Distance is small, the (small) hyper-cubes or hyper-spheres can approximate the feature space well. At the same time, the runtime during classification increases.
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than the
value of MinNumberPercent, otherwise the cluster is not returned. MinNumberPercent serves to eliminate
outliers in the training set. If it is chosen too large many clusters are suppressed.
Two different clustering procedures can be selected: The minimum distance algorithm (n-dimensional hyper-
spheres) and the maximum algorithm (n-dimensional hyper-cubes) for describing the pixels of the image to classify
in the n-dimensional histogram (parameter Metric). The Euclidean metric usually yields better results, but takes longer to compute. The parameter Quality returns the quality of the clustering. It is a measure of the overlap between the rejection class and the classifier classes. Values larger than 0 denote the corresponding ratio of overlap. If no rejection region is given, its value is set to 1. The regions in Background do not influence the
clustering. They are merely used to check the results that can be expected.
From a user’s point of view the key difference between learn_ndim_norm and learn_ndim_box is that
in the latter case the rejection class affects the classification process itself. In that case, a hyperplane is generated that separates the Foreground and Background classes, so that no points in feature space are classified incorrectly. For learn_ndim_norm, however, an overlap between the Foreground and Background classes is allowed. This affects the return value Quality: the larger the overlap, the smaller its value.
Parameter

. Foreground (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Foreground pixels to be trained.
. Background (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Background pixels to be trained (rejection class).
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Multi-channel training image.
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Metric to be used.
Default Value : "euclid"
List of values : Metric ∈ {"euclid", "maximum"}
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Maximum cluster radius.
Default Value : 10.0
Suggested values : Distance ∈ {1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0, 13.0, 17.0, 24.0, 30.0, 40.0}
Typical range of values : 0.0 ≤ Distance ≤ 511.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
Restriction : Distance > 0.0
. MinNumberPercent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than
MinNumberPercent (otherwise the cluster is not output).
Default Value : 0.01
Suggested values : MinNumberPercent ∈ {0.001, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0}
Typical range of values : 0.0 ≤ MinNumberPercent ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (0 ≤ MinNumberPercent) ∧ (MinNumberPercent ≤ 100)

. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cluster radii or half edge lengths.
. Center (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Coordinates of all cluster centers.
. Quality (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Overlap of the rejection class with the classified objects (1: no overlap).
Assertion : (0 ≤ Quality) ∧ (Quality ≤ 1)
Result
learn_ndim_norm returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the input
images can be determined by setting the values of the flags ’no_object_result’ and ’empty_region_result’ with
set_system. If necessary, an exception is raised.
Parallelization Information
learn_ndim_norm is local and processed completely exclusively without parallelization.
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss
Possible Successors
class_ndim_norm, connection, dilation1, erosion1, opening, closing, rank_region,
shape_trans, skeleton
Alternatives
learn_ndim_box, learn_class_box
See also
class_ndim_norm, class_ndim_box, histo_2dim
References
P. Haberäcker, "Digitale Bildverarbeitung"; Hanser-Studienbücher, München, Wien, 1987
Module
Foundation

13.2 Edges

T_detect_edge_segments ( const Hobject Image, const Htuple SobelSize,
const Htuple MinAmplitude, const Htuple MaxDistance,
const Htuple MinLength, Htuple *BeginRow, Htuple *BeginCol,
Htuple *EndRow, Htuple *EndCol )

Detect straight edge segments.


detect_edge_segments detects straight edge segments in the gray image Image. The extracted edge seg-
ments are returned as line segments with start point (BeginRow,BeginCol) and end point (EndRow,EndCol).
Edge detection is based on the Sobel filter, using ’sum_abs’ as parameter and SobelSize as the filter mask size
(see sobel_amp). Only pixels with a filter response larger than MinAmplitude are used as candidates for
edge points. These thresholded edge points are thinned and split into straight segments. For technical reasons,
edge points in which several edges meet are lost. Therefore, detect_edge_segments usually does not return
closed object contours. The parameter MaxDistance controls the maximum allowed distance of an edge point
to its approximating line. For efficiency reasons, the sum of the absolute values of the coordinate differences is
used instead of the Euclidean distance. MinLength controls the minimum length of the line segments. Lines
shorter than MinLength are not returned.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Input image.
. SobelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Mask size of the Sobel operator.
Default Value : 5
List of values : SobelSize ∈ {3, 5, 7, 9, 11, 13}

. MinAmplitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum edge strength.
Default Value : 32
Suggested values : MinAmplitude ∈ {10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 80, 90, 100, 110}
Typical range of values : 1 ≤ MinAmplitude ≤ 255
Minimum Increment : 1
Recommended Increment : 1
Restriction : MinAmplitude ≥ 0
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum distance of the approximating line to its original edge.
Default Value : 3
Suggested values : MaxDistance ∈ {2, 3, 4, 5, 6, 7, 8}
Typical range of values : 1 ≤ MaxDistance ≤ 30
Minimum Increment : 1
Recommended Increment : 1
Restriction : MaxDistance ≥ 0
. MinLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum length of the resulting line segments.
Default Value : 10
Suggested values : MinLength ∈ {3, 5, 7, 9, 11, 13, 16, 20}
Typical range of values : 1 ≤ MinLength ≤ 500
Minimum Increment : 1
Recommended Increment : 1
Restriction : MinLength ≥ 0
. BeginRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinate of the line segments’ start points.
. BeginCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinate of the line segments’ start points.
. EndRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinate of the line segments’ end points.
. EndCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinate of the line segments’ end points.
Example

Htuple SobelSize,MinAmplitude,MaxDistance,MinLength;
Htuple RowBegin,ColBegin,RowEnd,ColEnd;

create_tuple(&SobelSize,1);
set_i(SobelSize,5,0);
create_tuple(&MinAmplitude,1);
set_i(MinAmplitude,32,0);
create_tuple(&MaxDistance,1);
set_i(MaxDistance,3,0);
create_tuple(&MinLength,1);
set_i(MinLength,10,0);
T_detect_edge_segments(Image,SobelSize,MinAmplitude,MaxDistance,MinLength,
&RowBegin,&ColBegin,&RowEnd,&ColEnd);

Result
detect_edge_segments returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
detect_edge_segments is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
sigma_image, median_image

Possible Successors
select_lines, partition_lines, select_lines_longest, line_position,
line_orientation
Alternatives
sobel_amp, threshold, skeleton
Module
Foundation

hysteresis_threshold ( const Hobject Image, Hobject *RegionHysteresis,
Hlong Low, Hlong High, Hlong MaxLength )

T_hysteresis_threshold ( const Hobject Image,
Hobject *RegionHysteresis, const Htuple Low, const Htuple High,
const Htuple MaxLength )

Perform a hysteresis threshold operation on an image.


hysteresis_threshold performs a hysteresis threshold operation (introduced by Canny) on an image. All
points in the input image Image having a gray value larger than or equal to High are immediately accepted
(“secure” points). Conversely, all points with gray values less than Low are immediately rejected. “Potential”
points with gray values between both thresholds are accepted if they are connected to “secure” points by a path of
“potential” points having a length of at most MaxLength points. This means that “secure” points influence their
surroundings (hysteresis).
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. RegionHysteresis (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented region.
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Lower threshold for the gray values.
Default Value : 30
Suggested values : Low ∈ {5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Typical range of values : 0 ≤ Low ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Restriction : (0 < Low) ∧ (Low < 255)
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Upper threshold for the gray values.
Default Value : 60
Suggested values : High ∈ {5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130}
Typical range of values : 0 ≤ High ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Restriction : ((0 < High) ∧ (High < 255)) ∧ (High > Low)
. MaxLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum length of a path of “potential” points to reach a “secure” point.
Default Value : 10
Suggested values : MaxLength ∈ {1, 2, 3, 5, 7, 10, 12, 14, 17, 20, 25, 30, 35, 40, 50}
Typical range of values : 1 ≤ MaxLength
Minimum Increment : 1
Recommended Increment : 5
Restriction : MaxLength > 1
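Example

A minimal sketch, not part of the original manual; the image name and the thresholds are illustrative.

Hobject Image, EdgeAmp, Edges;

read_image(&Image,"fabrik");
sobel_amp(Image,&EdgeAmp,"sum_abs",3);
/* accept "potential" points (>= 30) only if they are connected to a
   "secure" point (>= 60) by a path of at most 10 "potential" points */
hysteresis_threshold(EdgeAmp,&Edges,30,60,10);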
Result
hysteresis_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect
to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.

Parallelization Information
hysteresis_threshold is reentrant and automatically parallelized (on tuple level).
Alternatives
dyn_threshold, threshold, class_2dim_sup, fast_threshold
See also
edges_image, sobel_dir, background_seg
References
J. Canny, "Finding Edges and Lines in Images"; Report, AI-TR-720, M.I.T. Artificial Intelligence Lab., Cambridge, MA, 1983.
Module
Foundation

nonmax_suppression_amp ( const Hobject ImgAmp, Hobject *ImageResult,
const char *Mode )

T_nonmax_suppression_amp ( const Hobject ImgAmp,
Hobject *ImageResult, const Htuple Mode )

Suppress non-maximum points on an edge.


nonmax_suppression_amp suppresses all points in the regions of the image ImgAmp whose gray values are
not local (directed) maxima. In contrast to nonmax_suppression_dir, a direction image is not needed. Two
modes of operation can be selected:

’hvnms’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values within
a search space of ± 5 pixels, either horizontally or vertically. Non-maximum points are removed from the
region, gray values remain unchanged.
’loc_max’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values of its
eight neighbors.

Parameter

. ImgAmp (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Amplitude (gradient magnitude) image.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Image with thinned edge regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Select horizontal/vertical or undirected NMS.
Default Value : "hvnms"
List of values : Mode ∈ {"hvnms", "loc_max"}
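Example

A minimal sketch, not part of the original manual; the image name and the threshold are illustrative.

Hobject Image, EdgeAmp, ThinAmp, Edges;

read_image(&Image,"fabrik");
sobel_amp(Image,&EdgeAmp,"sum_abs",3);
nonmax_suppression_amp(EdgeAmp,&ThinAmp,"hvnms");  /* thin the edge ridges */
threshold(ThinAmp,&Edges,30,255);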
Result
nonmax_suppression_amp returns H_MSG_TRUE if all parameters are correct. The behavior with respect
to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
nonmax_suppression_amp is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
sobel_amp
Possible Successors
threshold, hysteresis_threshold
Alternatives
local_max, nonmax_suppression_dir
See also
skeleton

References
S. Lanser: "Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J. Canny: "Finding Edges and Lines in Images"; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge, MA; 1983.
Module
Foundation

nonmax_suppression_dir ( const Hobject ImgAmp, const Hobject ImgDir,
Hobject *ImageResult, const char *Mode )

T_nonmax_suppression_dir ( const Hobject ImgAmp,
const Hobject ImgDir, Hobject *ImageResult, const Htuple Mode )

Suppress non-maximum points on an edge using a direction image.


nonmax_suppression_dir suppresses all points in the regions of the image ImgAmp whose gray values
are not local (directed) maxima. ImgDir is a direction image giving the direction perpendicular to the local
maximum (Unit: 2 degrees, i.e., 50 degrees are coded as 25 in the image). Such images are returned, for example,
by edges_image. Two modes of operation can be selected:

’nms’ Each point in the image is tested whether its gray value is a local maximum perpendicular to its direction.
In this mode only the two neighbors closest to the given direction are examined. If one of the two gray values
is greater than the gray value of the point to be tested, it is suppressed (i.e., removed from the input region; the corresponding gray value remains unchanged).
’inms’ Like ’nms’. However, the two gray values for the test are obtained by interpolation from four adjacent
points.

Parameter
. ImgAmp (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Amplitude (gradient magnitude) image.
. ImgDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : direction
Direction image.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Image with thinned edge regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Select non-maximum-suppression or interpolating NMS.
Default Value : "nms"
List of values : Mode ∈ {"nms", "inms"}
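Example

A minimal sketch, not part of the original manual; the image name and the thresholds are illustrative.

Hobject Image, EdgeAmp, EdgeDir, ThinAmp, Edges;

read_image(&Image,"fabrik");
sobel_dir(Image,&EdgeAmp,&EdgeDir,"sum_abs",3);
nonmax_suppression_dir(EdgeAmp,EdgeDir,&ThinAmp,"nms");
hysteresis_threshold(ThinAmp,&Edges,30,60,10);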
Result
nonmax_suppression_dir returns H_MSG_TRUE if all parameters are correct. The behavior with respect
to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
nonmax_suppression_dir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
edges_image, sobel_dir, frei_dir
Possible Successors
threshold, hysteresis_threshold
Alternatives
nonmax_suppression_amp
See also
skeleton
References
S. Lanser: "Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.

J. Canny: "Finding Edges and Lines in Images"; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge; 1983.
Module
Foundation

13.3 Regiongrowing

expand_gray ( const Hobject Regions, const Hobject Image,
const Hobject ForbiddenArea, Hobject *RegionExpand,
const char *Iterations, const char *Mode, Hlong Threshold )

T_expand_gray ( const Hobject Regions, const Hobject Image,
const Hobject ForbiddenArea, Hobject *RegionExpand,
const Htuple Iterations, const Htuple Mode, const Htuple Threshold )

Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray closes gaps between the input regions, which may have resulted, for example, from the suppression of small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which the gray values or colors differ from the gray values or colors of the neighboring pixels on the region’s border by at most Threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), points with a gray value difference of at least 255 − Threshold are also added to the output region.
The expansion takes place only in areas that are not designated as “forbidden” (parameter ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’maximal’, expand_gray iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ in the following ways:

’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray processes all regions
simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value. Over-
lapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.

Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions for which the gaps are to be closed, or which are to be separated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic
Image (possibly multi-channel) for gray value or color comparison.
. ForbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Regions in which no expansion takes place.
. RegionExpand (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Expanded or separated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Number of iterations.
Default Value : "maximal"
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, "maximal"}
Typical range of values : 1 ≤ Iterations ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1


. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *


Expansion mode.
Default Value : "image"
List of values : Mode ∈ {"image", "region"}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Maximum difference between the gray value or color at the region’s border and a candidate for expansion.
Default Value : 32
Suggested values : Threshold ∈ {5, 10, 15, 20, 25, 30, 40, 50}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Example

read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
regiongrowing(Image,&RawSegments,3,3,6.0,100);
set_colored(WindowHandle,12);
disp_region(RawSegments,WindowHandle);
expand_gray(RawSegments,Image,EMPTY_REGION,&Segments,"maximal","image",24);
clear_window(WindowHandle);
disp_region(Segments,WindowHandle)

Result
expand_gray always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
expand_gray is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray_ref, expand_region
Module
Foundation

expand_gray_ref ( const Hobject Regions, const Hobject Image,
                  const Hobject ForbiddenArea, Hobject *RegionExpand,
                  const char *Iterations, const char *Mode, Hlong RefGray,
                  Hlong Threshold )

T_expand_gray_ref ( const Hobject Regions, const Hobject Image,
                    const Hobject ForbiddenArea, Hobject *RegionExpand,
                    const Htuple Iterations, const Htuple Mode, const Htuple RefGray,
                    const Htuple Threshold )

Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray_ref closes gaps between the input regions, which may have resulted, for example, from the suppression of small regions in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which the gray values or colors differ from a reference gray value or color by at most Threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), points with a gray value difference of at least 255 − Threshold are also added to the output region.
The expansion takes place only in areas that are not designated as “forbidden” (parameter ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’maximal’, expand_gray_ref iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ in the following ways:

’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray_ref processes all
regions simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value.
Overlapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.

Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
Parameter

. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions for which the gaps are to be closed, or which are to be separated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic
Image (possibly multi-channel) for gray value or color comparison.
. ForbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Regions in which no expansion takes place.
. RegionExpand (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Expanded or separated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char * / Hlong
Number of iterations.
Default Value : "maximal"
Suggested values : Iterations ∈ {"maximal", 1, 2, 3, 4, 5, 7, 10, 15, 20, 30, 50, 70, 100, 150, 200, 300,
500}
Typical range of values : 1 ≤ Iterations ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Expansion mode.
Default Value : "image"
List of values : Mode ∈ {"image", "region"}
. RefGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Reference gray value or color for comparison.
Default Value : 128
Suggested values : RefGray ∈ {1, 10, 20, 50, 100, 128, 200, 255}
Typical range of values : 1 ≤ RefGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Maximum difference between the reference gray value or color and a candidate for expansion.
Default Value : 32
Suggested values : Threshold ∈ {4, 10, 15, 20, 25, 30, 40}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Example

read_image(&Image,"fabrik");


disp_image(Image,WindowHandle);
regiongrowing(Image,&RawSegments,3,3,6.0,100);
set_colored(WindowHandle,12);
disp_region(RawSegments,WindowHandle);
T_intensity(RawSegments,Image,&Mean,&Deviation);
set_i(Thresh,24,0);
set_s(Iter,"maximal",0);
set_s(Mode,"image",0);
T_expand_gray_ref(RawSegments,Image,EMPTY_REGION,&Segments,Iter,Mode,
Mean,Thresh);
clear_window(WindowHandle);
disp_region(Segments,WindowHandle);

Result
expand_gray_ref always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
expand_gray_ref is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray, expand_region
Module
Foundation

expand_line ( const Hobject Image, Hobject *RegionExpand,
              Hlong Coordinate, const char *ExpandType, const char *RowColumn,
              double Threshold )

T_expand_line ( const Hobject Image, Hobject *RegionExpand,
                const Htuple Coordinate, const Htuple ExpandType,
                const Htuple RowColumn, const Htuple Threshold )

Expand a region starting at a given line.


expand_line generates a region by expansion, starting at a given line (row or column). The expansion is
terminated when the current gray value differs by more than Threshold from the mean gray value along the line
(ExpandType = ’mean’) or from the previously added gray value (ExpandType = ’gradient’).
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte


Input image.
. RegionExpand (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Extracted segments.
. Coordinate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Row or column coordinate.
Default Value : 256
Suggested values : Coordinate ∈ {16, 64, 128, 200, 256, 300, 400, 511}
Restriction : Coordinate ≥ 0


. ExpandType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Stopping criterion.
Default Value : "gradient"
List of values : ExpandType ∈ {"gradient", "mean"}
. RowColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Segmentation mode (row or column).
Default Value : "row"
List of values : RowColumn ∈ {"row", "column"}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Threshold for the expansion.
Default Value : 3.0
Suggested values : Threshold ∈ {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 13.0, 17.0, 20.0, 30.0}
Typical range of values : 1.0 ≤ Threshold ≤ 255.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Restriction : (Threshold ≥ 0.0) ∧ (Threshold ≤ 255.0)
Example

read_image(&Image,"fabrik");
gauss_image(Image,&Gauss,5);
expand_line(Gauss,&Reg,100,"mean","row",5.0);
set_colored(WindowHandle,12);
disp_region(Reg,WindowHandle);

Parallelization Information
expand_line is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion,
median_image, affine_trans_image, rotate_image
Possible Successors
intersection, opening, closing
Alternatives
regiongrowing_mean, expand_gray, expand_gray_ref
Module
Foundation

regiongrowing ( const Hobject Image, Hobject *Regions, Hlong Row,
                Hlong Column, double Tolerance, Hlong MinSize )

T_regiongrowing ( const Hobject Image, Hobject *Regions,
                  const Htuple Row, const Htuple Column, const Htuple Tolerance,
                  const Htuple MinSize )

Segment an image using regiongrowing.


regiongrowing segments images into regions of the same intensity, rastered into rectangles of size Row × Column. In order to decide whether two adjacent rectangles belong to the same region, only the gray values of their center points are used. If the gray value difference is less than or equal to Tolerance, the rectangles are merged into one region.
If g1 and g2 are two gray values to be examined, they are merged into the same region if:

|g1 − g2| < Tolerance

For images of type ’cyclic’, the two gray values are merged if one of the following conditions holds:

(|g1 − g2| < Tolerance) ∧ (|g1 − g2| ≤ 127)

(256 − |g1 − g2| < Tolerance) ∧ (|g1 − g2| > 127)
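
For example, for a cyclic image (e.g., a direction image) and Tolerance = 20, the gray values g1 = 10 and g2 = 250 are merged: |g1 − g2| = 240 > 127, but 256 − 240 = 16 < 20.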

For rectangles larger than one pixel, the images should usually be smoothed with a lowpass filter with a size of at least Row × Column before calling regiongrowing (so that the gray values at the centers of the rectangles are “representative” for the whole rectangle). If the image contains little noise and the rectangles are small, the smoothing can be omitted in many cases.
The resulting regions are collections of rectangles of the chosen size Row × Column . Only regions containing at
least MinSize points are returned.
Regiongrowing is a very fast operation, and thus suited for time-critical applications.
Attention
Column and Row are automatically converted to odd values if necessary.
Parameter
. Image (input_object) . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / int4 / real
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Vertical distance between tested pixels (height of the raster).
Default Value : 3
Suggested values : Row ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Row ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Row ≥ 1) ∧ odd(Row)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Horizontal distance between tested pixels (width of the raster).
Default Value : 3
Suggested values : Column ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Column ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Column ≥ 1) ∧ odd(Column)
. Tolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Points with a gray value difference less than or equal to Tolerance are accumulated into the same object.
Default Value : 6.0
Suggested values : Tolerance ∈ {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 18.0, 25.0}
Typical range of values : 1.0 ≤ Tolerance ≤ 127.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
Restriction : (0 ≤ Tolerance) ∧ (Tolerance < 127)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Minimum size of the output regions.
Default Value : 100
Suggested values : MinSize ∈ {1, 5, 10, 20, 50, 100, 200, 500, 1000}
Typical range of values : 1 ≤ MinSize
Minimum Increment : 1
Recommended Increment : 5
Restriction : MinSize ≥ 1
Example

read_image(&Image,"fabrik");
mean_image(Image,&Mean,Row,Column);
regiongrowing(Mean,&Result,Row,Column,6,100);

Complexity
Let N be the number of found regions and M the number of points in one of these regions. Then the runtime
complexity is O(N ∗ log(M ) ∗ M ).


Result
regiongrowing returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
regiongrowing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, mean_image, gauss_image, smooth_image, median_image,
anisotropic_diffusion
Possible Successors
select_shape, reduce_domain, select_gray
Alternatives
regiongrowing_n, regiongrowing_mean, label_to_region
Module
Foundation

regiongrowing_mean ( const Hobject Image, Hobject *Regions,
                     Hlong StartRows, Hlong StartColumns, double Tolerance,
                     Hlong MinSize )

T_regiongrowing_mean ( const Hobject Image, Hobject *Regions,
                       const Htuple StartRows, const Htuple StartColumns,
                       const Htuple Tolerance, const Htuple MinSize )

Perform a regiongrowing using mean gray values.


regiongrowing_mean performs a regiongrowing using the mean gray values of a region, starting from points
given by StartRows and StartColumns. At any point in the process the mean gray value of the current region
is calculated. Gray values at the boundary of the region are added to the region if they differ from the current mean
by less than Tolerance. Regions smaller than MinSize are suppressed.
If no starting points are given (empty tuples), the expansion process starts at the upper leftmost point, and is
continued with the first unprocessed point after a region has been created.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4


Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. StartRows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row coordinates of the starting points.
Default Value : []
Typical range of values : 0 ≤ StartRows
Minimum Increment : 1
Recommended Increment : 1
. StartColumns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column coordinates of the starting points.
Default Value : []
Typical range of values : 0 ≤ StartColumns
Minimum Increment : 1
Recommended Increment : 1
. Tolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Maximum deviation from the mean.
Default Value : 5.0
Suggested values : Tolerance ∈ {0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 15.0, 17.0,
20.0, 25.0, 30.0, 40.0}
Restriction : Tolerance > 0.0


. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong


Minimum size of a region.
Default Value : 100
Suggested values : MinSize ∈ {0, 10, 30, 50, 100, 500, 1000, 2000}
Typical range of values : 0 ≤ MinSize
Minimum Increment : 1
Recommended Increment : 100
Restriction : MinSize ≥ 0
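Example

A minimal call sequence might look as follows; the seed point and the tolerance are only illustrative and have to be adapted to the image material:

read_image(&Image,"fabrik");
gauss_image(Image,&Smooth,5);
regiongrowing_mean(Smooth,&Regions,100,200,5.0,100);
set_colored(WindowHandle,12);
disp_region(Regions,WindowHandle);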
Result
regiongrowing_mean returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
regiongrowing_mean is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, anisotropic_diffusion, median_image,
mean_image
Possible Successors
select_shape, reduce_domain, opening, expand_region
Alternatives
regiongrowing, regiongrowing_n
Module
Foundation

regiongrowing_n ( const Hobject MultiChannelImage, Hobject *Regions,
                  const char *Metric, double MinTolerance, double MaxTolerance,
                  Hlong MinSize )

T_regiongrowing_n ( const Hobject MultiChannelImage, Hobject *Regions,
                    const Htuple Metric, const Htuple MinTolerance,
                    const Htuple MaxTolerance, const Htuple MinSize )

Segment an image using regiongrowing for multi-channel images.


regiongrowing_n performs a multi-channel regiongrowing. The n channels give rise to an n-dimensional feature vector. Neighboring points are aggregated into the same region if the distance of their feature vectors with respect to the given metric lies in the interval [MinTolerance, MaxTolerance]. Only neighbors of the 4-neighborhood are examined. The following metrics can be used:
Let gA denote the gray values in the feature vector A at a point a of the image, and likewise let gB denote the gray values in the feature vector B at a neighboring point b. Let g(d) be the gray value with channel index d. Furthermore, let MinT denote MinTolerance and MaxT denote MaxTolerance. All sums, maxima, and minima below are taken over the n channels.

’1-norm’: Sum of absolute values

      MinT ≤ (1/n) · Σ |gA − gB| ≤ MaxT

’2-norm’: Euclidean distance

      MinT ≤ sqrt( Σ (gA − gB)^2 / n ) ≤ MaxT

’3-norm’: p-norm with p = 3

      MinT ≤ ( Σ (gA − gB)^3 / n )^(1/3) ≤ MaxT

’4-norm’: p-norm with p = 4

      MinT ≤ ( Σ (gA − gB)^4 / n )^(1/4) ≤ MaxT

’n-norm’: Minkowski distance (p = n)

      MinT ≤ ( Σ (gA − gB)^n / n )^(1/n) ≤ MaxT

’max-diff’: Supremum distance

      MinT ≤ max{ |gA − gB| } ≤ MaxT

’min-diff’: Infimum distance

      MinT ≤ min{ |gA − gB| } ≤ MaxT

’variance’: Variance of gray value differences

      MinT ≤ Var(gA − gB) ≤ MaxT

’dot-product’: Dot product

      MinT ≤ (1/n) · sqrt( Σ gA · gB ) ≤ MaxT

’correlation’: Correlation

      mA = (1/n) · Σ gA ,   VarA = (1/n) · sqrt( Σ (gA − mA)^2 )
      mB = (1/n) · Σ gB ,   VarB = (1/n) · sqrt( Σ (gB − mB)^2 )

      MinT ≤ (1/n^2) · Σ (gA − mA)(gB − mB) / (VarA · VarB) ≤ MaxT

’mean-diff’: Difference of arithmetic means

      a = (1/n) · Σ gA ,   b = (1/n) · Σ gB

      MinT ≤ |a − b| ≤ MaxT

’mean-ratio’: Ratio of arithmetic means

      a = (1/n) · Σ gA ,   b = (1/n) · Σ gB

      MinT ≤ min(a/b, b/a) ≤ MaxT

’length-diff’: Difference of the vector lengths

      a = sqrt( Σ gA^2 / n ) ,   b = sqrt( Σ gB^2 / n )

      MinT ≤ |a − b| ≤ MaxT

’length-ratio’: Ratio of the vector lengths

      a = sqrt( Σ gA^2 / n ) ,   b = sqrt( Σ gB^2 / n )

      MinT ≤ min(a/b, b/a) ≤ MaxT

’n-norm-ratio’: Ratio of the vector lengths w.r.t. the p-norm with p = n

      a = ( Σ gA^n / n )^(1/n) ,   b = ( Σ gB^n / n )^(1/n)

      MinT ≤ min(a/b, b/a) ≤ MaxT

’gray-max-diff’: Difference of the maximum gray values

      a = max{ |gA| } ,   b = max{ |gB| }

      MinT ≤ |a − b| ≤ MaxT

’gray-max-ratio’: Ratio of the maximum gray values

      a = max{ |gA| } ,   b = max{ |gB| }

      MinT ≤ min(a/b, b/a) ≤ MaxT

’gray-min-diff’: Difference of the minimum gray values

      a = min{ |gA| } ,   b = min{ |gB| }

      MinT ≤ |a − b| ≤ MaxT

’gray-min-ratio’: Ratio of the minimum gray values

      a = min{ |gA| } ,   b = min{ |gB| }

      MinT ≤ min(a/b, b/a) ≤ MaxT

’variance-diff’: Difference of the variances over all gray values (channels)

      MinT ≤ |Var(gA) − Var(gB)| ≤ MaxT

’variance-ratio’: Ratio of the variances over all gray values (channels)

      MinT ≤ Var(gB) / Var(gA) ≤ MaxT

’mean-abs-diff’: Difference of the sum of absolute values over all gray values (channels)

      a = Σ_{d,k: k<d} |gA(d) − gA(k)| ,   b = Σ_{d,k: k<d} |gB(d) − gB(k)|

      MinT ≤ |a − b| / (number of summands) ≤ MaxT

’mean-abs-ratio’: Ratio of the sum of absolute values over all gray values (channels)

      a = Σ_{d,k: k<d} |gA(d) − gA(k)| ,   b = Σ_{d,k: k<d} |gB(d) − gB(k)|

      MinT ≤ min(a/b, b/a) ≤ MaxT

’max-abs-diff’: Difference of the maximum distance of the components

      a = max{ gA(d), gA(k) } ,   b = max{ gB(d), gB(k) }

      MinT ≤ |a − b| ≤ MaxT

’max-abs-ratio’: Ratio of the maximum distance of the components

      a = max{ gA(d), gA(k) } ,   b = max{ gB(d), gB(k) }

      MinT ≤ min(a/b, b/a) ≤ MaxT

’min-abs-diff’: Difference of the minimum distance of the components

      a = min{ gA(d), gA(k) }, k < d ,   b = min{ gB(d), gB(k) }, k < d

      MinT ≤ |a − b| ≤ MaxT

’min-abs-ratio’: Ratio of the minimum distance of the components

      a = min{ gA(d), gA(k) }, k < d ,   b = min{ gB(d), gB(k) }, k < d

      MinT ≤ min(a/b, b/a) ≤ MaxT

’plane’: The following has to hold for all d1, d2 ∈ [1, n]:

      gA(d1) > gA(d2) ⇒ gB(d1) > gB(d2)
      gA(d1) < gA(d2) ⇒ gB(d1) < gB(d2)

Regions with an area less than MinSize are suppressed.


Parameter
. MultiChannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Metric for the distance of the feature vectors.
Default Value : "2-norm"
List of values : Metric ∈ {"1-norm", "2-norm", "3-norm", "4-norm", "n-norm", "max-diff", "min-diff",
"variance", "dot-product", "correlation", "mean-diff", "mean-ratio", "length-diff", "length-ratio",
"n-norm-ratio", "gray-max-diff", "gray-max-ratio", "gray-min-diff", "gray-min-ratio", "variance-diff",
"variance-ratio", "mean-abs-diff", "mean-abs-ratio", "max-abs-diff", "max-abs-ratio", "min-abs-diff",
"min-abs-ratio", "plane"}


. MinTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number ; double / Hlong


Lower threshold for the features’ distance.
Default Value : 0.0
Suggested values : MinTolerance ∈ {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 16.0,
18.0, 20.0, 25.0, 30.0}
. MaxTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number ; double / Hlong
Upper threshold for the features’ distance.
Default Value : 20.0
Suggested values : MaxTolerance ∈ {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 16.0,
18.0, 20.0, 25.0, 30.0}
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Minimum size of the output regions.
Default Value : 30
Suggested values : MinSize ∈ {1, 10, 25, 50, 100, 200, 500, 1000}
Typical range of values : 1 ≤ MinSize
Minimum Increment : 1
Recommended Increment : 5
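Example

A minimal call sequence might look as follows; "color_scene" is only a placeholder for a multi-channel (e.g., RGB) image file, and the tolerances are illustrative:

read_image(&Image,"color_scene");
regiongrowing_n(Image,&Regions,"2-norm",0.0,12.0,100);
set_colored(WindowHandle,12);
disp_region(Regions,WindowHandle);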
Result
regiongrowing_n returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
regiongrowing_n is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
compose2, compose3
Alternatives
class_2dim_sup, class_ndim_norm, class_ndim_box
See also
regiongrowing_mean
Module
Foundation

13.4 Threshold
auto_threshold ( const Hobject Image, Hobject *Regions, double Sigma )
T_auto_threshold ( const Hobject Image, Hobject *Regions,
const Htuple Sigma )

Segment an image using thresholds determined from its histogram.


auto_threshold segments a single-channel image using multiple thresholding. First, the absolute histogram of
the gray values is determined. Then, relevant minima are extracted from the histogram, which are used successively
as parameters for a thresholding operation. The thresholds used for byte images are 0, 255, and all minima extracted
from the histogram (after the histogram has been smoothed with a Gaussian filter with standard deviation Sigma).
For each gray value interval one region is generated. Thus, the number of regions is the number of minima +
1. For uint2 images, the above procedure is used analogously. However, here the highest threshold is 65535.
Furthermore, the value of Sigma (virtually) refers to a histogram with 256 values, although internally histograms
with a higher resolution are used. This is done to facilitate switching between image types without having to
change the parameter Sigma. The larger the value of Sigma is chosen, the fewer regions will be extracted. This
operator is useful if the regions to be extracted exhibit similar gray values (homogeneous regions).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions with gray values within the automatically determined intervals.


. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong


Sigma for the Gaussian smoothing of the histogram.
Default Value : 2.0
Suggested values : Sigma ∈ {0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.0 ≤ Sigma ≤ 100.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.3
Restriction : Sigma ≥ 0.0
Example

read_image(&Image,"fabrik");
median_image(Image,&Median,"circle",3,"mirrored");
auto_threshold(Median,&Seg,2.0);
connection(Seg,&Connected);
set_colored(WindowHandle,12);
disp_obj(Connected,WindowHandle);

Parallelization Information
auto_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, char_threshold
See also
gray_histo, gray_histo_abs, histo_to_thresh, smooth_funct_1d_gauss, threshold
Module
Foundation

bin_threshold ( const Hobject Image, Hobject *Region )

T_bin_threshold ( const Hobject Image, Hobject *Region )

Segment an image using an automatically determined threshold.


bin_threshold segments a single-channel gray value image using an automatically determined threshold.
First, the relative histogram of the gray values is determined. Then, relevant minima are extracted from the his-
togram, which are used as parameters for a thresholding operation. In order to reduce the number of minima, the
histogram is smoothed with a Gaussian, as in auto_threshold. The mask size is enlarged until there is only
one minimum in the smoothed histogram. The selected region contains the pixels with gray values from 0 to the
minimum. This operator is useful, for example, for the segmentation of dark characters on light paper.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dark regions of the image.
Example

read_image(&Image,"letters");
bin_threshold(Image,&Seg);
connection(Seg,&Connected);
set_shape(WindowHandle,"rectangle1");
set_colored(WindowHandle,6);
disp_region(Connected,WindowHandle);


Parallelization Information
bin_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
auto_threshold, char_threshold
See also
gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation

char_threshold ( const Hobject Image, const Hobject HistoRegion,
                 Hobject *Characters, double Sigma, double Percent, Hlong *Threshold )

T_char_threshold ( const Hobject Image, const Hobject HistoRegion,
                   Hobject *Characters, const Htuple Sigma, const Htuple Percent,
                   Htuple *Threshold )

Perform a threshold segmentation for extracting characters.


The main application of char_threshold is to segment single-channel images of dark characters on bright
paper. The operator works as follows: First, a histogram of the gray values in the image Image is computed for
the points in the region HistoRegion. To eliminate noise, the histogram is smoothed with the given Sigma
(Gaussian smoothing). In the histogram, the background (white paper) corresponds to a large peak at high gray
values, while the characters form a small peak at low gray values. In contrast to the operator bin_threshold,
which locates the minimum between the two peaks, here the threshold for the segmentation is determined in
relation to the maximum of the histogram, i.e., the background, with the following condition:

histogram[threshold] ∗ 100.0 < histogram[maximum] ∗ (100.0 − Percent)

For example, if you choose Percent = 95 the operator locates the gray value whose frequency is at most 5
percent of the maximum frequency. Because char_threshold assumes that the characters are darker than the
background, the threshold is searched for “to the left” of the maximum.
In comparison to bin_threshold, this operator should be used if there is no clear minimum between the
histogram peaks corresponding to the characters and the background, respectively, or if there is no peak corre-
sponding to the characters at all. This may happen, e.g., if the image contains only few characters or in the case of
a non-uniform illumination.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte


Input image.
. HistoRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region in which the histogram is computed.
. Characters (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Dark regions (characters).
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Sigma for the Gaussian smoothing of the histogram.
Default Value : 2.0
Suggested values : Sigma ∈ {0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.0 ≤ Sigma ≤ 50.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.2


. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double / Hlong


Percentage for the gray value difference.
Default Value : 95
Suggested values : Percent ∈ {90, 92, 95, 96, 97, 98, 99, 99.5, 100}
Typical range of values : 0.0 ≤ Percent ≤ 100.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 0.5
. Threshold (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Calculated threshold.
Example

read_image(&Image,"letters");
char_threshold(Image,Image,&Seg,0.0,5.0,&Threshold);
connection(Seg,&Connected);
set_colored(WindowHandle,12);
set_shape(WindowHandle,"rectangle1");
disp_region(Connected,WindowHandle);

Parallelization Information
char_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, auto_threshold, gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation

check_difference ( const Hobject Image, const Hobject Pattern,
                   Hobject *Selected, const char *Mode, Hlong DiffLowerBound,
                   Hlong DiffUpperBound, Hlong GrayOffset, Hlong AddRow, Hlong AddCol )

T_check_difference ( const Hobject Image, const Hobject Pattern,
                     Hobject *Selected, const Htuple Mode, const Htuple DiffLowerBound,
                     const Htuple DiffUpperBound, const Htuple GrayOffset,
                     const Htuple AddRow, const Htuple AddCol )

Compare two images pixel by pixel.


check_difference selects from the input image Image those pixels (with gray value g_o = g_Image) whose gray value difference to the corresponding pixels in Pattern is inside (or outside) of the interval [DiffLowerBound, DiffUpperBound]. The pixels of Pattern are translated by (AddRow, AddCol) with respect to Image. Let g_p be the gray value from Pattern, translated by (AddRow, AddCol), that corresponds to g_o. If the selected mode Mode is ’diff_inside’, a pixel g_o is selected if

g_o − g_p − GrayOffset > DiffLowerBound and


g_o − g_p − GrayOffset < DiffUpperBound.

If the mode is set to ’diff_outside’, a pixel go is selected if

g_o − g_p − GrayOffset ≤ DiffLowerBound or


g_o − g_p − GrayOffset ≥ DiffUpperBound.


This test is performed for all points of the domain (region) of Image, intersected with the domain of the translated
Pattern. All points fulfilling the above condition are aggregated in the output region. The two images may be
of different size. Typically, Pattern is smaller than Image.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Comparison image.
. Selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Points in which the two images are similar/different.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode: return similar or different pixels.
Default Value : "diff_outside"
Suggested values : Mode ∈ {"diff_inside", "diff_outside"}
. DiffLowerBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Lower bound of the tolerated gray value difference.
Default Value : -5
Suggested values : DiffLowerBound ∈ {0, -1, -2, -3, -5, -7, -10, -12, -15, -17, -20, -25, -30}
Typical range of values : -255 ≤ DiffLowerBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffLowerBound) ∧ (DiffLowerBound ≤ 255)
. DiffUpperBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Upper bound of the tolerated gray value difference.
Default Value : 5
Suggested values : DiffUpperBound ∈ {0, 1, 2, 3, 5, 7, 10, 12, 15, 17, 20, 25, 30}
Typical range of values : -255 ≤ DiffUpperBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffUpperBound) ∧ (DiffUpperBound ≤ 255)
. GrayOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Offset gray value subtracted from the input image.
Default Value : 0
Suggested values : GrayOffset ∈ {-30, -25, -20, -17, -15, -12, -10, -7, -5, -3, -2, -1, 0, 1, 2, 3, 5, 7, 10, 12,
15, 17, 20, 25, 30}
Typical range of values : -255 ≤ GrayOffset ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ GrayOffset) ∧ (GrayOffset ≤ 255)
. AddRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate by which the comparison image is translated.
Default Value : 0
Suggested values : AddRow ∈ {-200, -100, -20, -10, 0, 10, 20, 100, 200}
Typical range of values : -32000 ≤ AddRow ≤ 32000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. AddCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate by which the comparison image is translated.
Default Value : 0
Suggested values : AddCol ∈ {-200, -100, -20, -10, 0, 10, 20, 100, 200}
Typical range of values : -32000 ≤ AddCol ≤ 32000 (lin)
Minimum Increment : 1
Recommended Increment : 1
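Example

A typical use is the comparison of an image with a reference image of a flawless part; "reference" is only a placeholder file name, and the difference bounds are illustrative:

read_image(&Image,"fabrik");
read_image(&Pattern,"reference");
check_difference(Image,Pattern,&Selected,"diff_outside",-5,5,0,0,0);
connection(Selected,&Defects);
disp_region(Defects,WindowHandle);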
Complexity
Let A be the number of valid pixels. Then the runtime complexity is O(A).
Result
check_difference returns H_MSG_TRUE if all parameters are correct. The behavior with respect to


the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
check_difference is reentrant and automatically parallelized (on tuple level).
Possible Successors
connection, select_shape, reduce_domain, select_gray, rank_region, dilation1,
opening
Alternatives
sub_image, dyn_threshold
Module
Foundation

dual_threshold ( const Hobject Image, Hobject *RegionCrossings,
                 Hlong MinSize, double MinGray, double Threshold )

T_dual_threshold ( const Hobject Image, Hobject *RegionCrossings,
                   const Htuple MinSize, const Htuple MinGray, const Htuple Threshold )

Threshold operator for signed images.


dual_threshold segments the input image into a region with gray values ≥ Threshold (“positive” regions)
and a region with gray values ≤ -Threshold (“negative” regions). “Positive” or “negative” regions having a size
of less than MinSize are suppressed, as well as regions whose maximum gray value is less than MinGray in
absolute value.
The segmentation performed is not complete, i.e., the “positive” and “negative” regions together do not necessarily
cover the entire image: Areas with a gray value between −Threshold and Threshold, −MinGray and
MinGray, respectively, are not taken into account.
dual_threshold is usually called after applying a Laplace operator ( laplace, laplace_of_gauss,
derivate_gauss or diff_of_gauss) to an image or with the difference of two images ( sub_image).
The zero crossings of a Laplace image correspond to edges in an image, and are the separating regions of the
“positive” and “negative” regions in the Laplace image. They can be determined by calling dual_threshold
with Threshold = 1 and then creating the complement regions with complement. The parameter MinGray
determines the noise invariance, while MinSize determines the resolution of the edge detection.
For byte images, only the positive part of the operator is applied. Therefore, dual_threshold behaves like a standard threshold operator (threshold) followed by connection and select_gray.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int1 / int2 / int4 / real
Input image.
. RegionCrossings (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Positive and negative regions.
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Regions smaller than MinSize are suppressed.
Default Value : 20
Suggested values : MinSize ∈ {0, 10, 20, 50, 100, 200, 500, 1000}
Typical range of values : 0 ≤ MinSize ≤ 10000 (lin)
Minimum Increment : 1
Recommended Increment : 10
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Regions whose maximum absolute gray value is smaller than MinGray are suppressed.
Default Value : 5.0
Suggested values : MinGray ∈ {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 11.0, 15.0, 20.0}
Typical range of values : 0.001 ≤ MinGray ≤ 10000.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : MinGray > 0


. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Regions that have a gray value smaller than Threshold (or larger than -Threshold) are suppressed.
Default Value : 2.0
Suggested values : Threshold ∈ {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 11.0, 15.0, 20.0}
Typical range of values : 0.001 ≤ Threshold ≤ 10000.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : (Threshold ≥ 1) ∧ (Threshold ≤ MinGray)
Example

/* Edge detection with the Laplace operator (and edge thinning) */

diff_of_gauss(Image,&Laplace,2.0,1.6);
/* find "positive" and "negative" regions: */
dual_threshold(Laplace,&Region,20,2,1);
/* the zero crossings are the complement of these regions: */
complement(Region,&ZeroCrossings);

Result
dual_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dual_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss, sub_image, derivate_gauss, laplace_of_gauss, laplace,
expand_region
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
threshold, dyn_threshold, check_difference
See also
connection, select_shape, select_gray
Module
Foundation

dyn_threshold ( const Hobject OrigImage, const Hobject ThresholdImage,
                Hobject *RegionDynThresh, double Offset, const char *LightDark )

T_dyn_threshold ( const Hobject OrigImage, const Hobject ThresholdImage,
                  Hobject *RegionDynThresh, const Htuple Offset,
                  const Htuple LightDark )

Segment an image using a local threshold.


dyn_threshold selects from the input image those regions in which the pixels fulfill a threshold condition. Let g_o = g_OrigImage and g_t = g_ThresholdImage. Then the condition for LightDark = ’light’ is:

g_o ≥ g_t + Offset

For LightDark = ’dark’ the condition is:

g_o ≤ g_t − Offset

For LightDark = ’equal’ it is:

g_t − Offset ≤ g_o ≤ g_t + Offset

Finally, for LightDark = ’not_equal’ it is:

g_t − Offset > g_o ∨ g_o > g_t + Offset

Typically, the threshold images are smoothed versions of the original image (e.g., by applying mean_image,
binomial_filter, gauss_image, etc.). Then the effect of dyn_threshold is similar to applying
threshold to a highpass-filtered version of the original image (see highpass_image).
With dyn_threshold, contours of an object can be extracted, where the objects’ size (diameter) is determined
by the mask size of the lowpass filter and the amplitude of the objects’ edges:
The larger the mask size is chosen, the larger the found regions become. As a rule of thumb, the mask size should
be about twice the diameter of the objects to be extracted. It is important not to set the parameter Offset to zero
because in this case too many small regions will be found (noise). Values between 5 and 40 are a useful choice.
The larger Offset is chosen, the smaller the extracted regions become.
All points of the input image fulfilling the above condition are stored jointly in one region. If necessary, the
connected components can be obtained by calling connection.
Attention
If Offset is chosen from −1 to 1 usually a very noisy region is generated, requiring large storage. If Offset
is chosen too large (> 60, say) it may happen that no points fulfill the threshold condition (i.e., an empty region is
returned). If Offset is chosen too small (< -60, say) it may happen that all points fulfill the threshold condition
(i.e., a full region is returned).
Parameter

. OrigImage (input_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2 / int4 / real


Input image.
. ThresholdImage (input_object) . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image containing the local thresholds.
. RegionDynThresh (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented regions.
. Offset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Offset applied to ThresholdImage.
Default Value : 5.0
Suggested values : Offset ∈ {1.0, 3.0, 5.0, 7.0, 10.0, 20.0, 30.0}
Typical range of values : -255.0 ≤ Offset ≤ 255.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 5
Restriction : (-255 < Offset) ∧ (Offset < 255)
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Extract light, dark or similar areas?
Default Value : "light"
List of values : LightDark ∈ {"dark", "light", "equal", "not_equal"}
Example

/* Looking for regions with the diameter D */


mean_image(Image,&Mean,D*2+1,D*2+1);
dyn_threshold(Image,Mean,&Seg,5.0,"light");
connection(Seg,&Region);

Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
dyn_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dyn_threshold is reentrant and automatically parallelized (on tuple level, domain level).


Possible Predecessors
mean_image, smooth_image, binomial_filter, gauss_image
Possible Successors
connection, select_shape, reduce_domain, select_gray, rank_region, dilation1,
opening, erosion1
Alternatives
check_difference, threshold
See also
highpass_image, sub_image
Module
Foundation

fast_threshold ( const Hobject Image, Hobject *Region, double MinGray,
                 double MaxGray, Hlong MinSize )

T_fast_threshold ( const Hobject Image, Hobject *Region,
                   const Htuple MinGray, const Htuple MaxGray, const Htuple MinSize )

Fast thresholding of images using global thresholds.


fast_threshold selects the pixels from the input image whose gray values g fulfill the following condition:

MinGray ≤ g ≤ MaxGray .

To reduce processing time, the selection is done in two steps: First, all pixels along rows and columns with a distance of MinSize are processed. In the next step, the neighborhood (of size MinSize × MinSize) of all previously selected points is processed.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / direction / cyclic
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented regions.
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the gray values.
Default Value : 128
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MinGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MaxGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Minimum size of objects to be extracted.
Default Value : 20
Suggested values : MinSize ∈ {5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 100}
Typical range of values : 2 ≤ MinSize ≤ 200 (lin)
Minimum Increment : 1
Recommended Increment : 2
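Example

A minimal call sequence might look as follows; the gray value bounds are only illustrative:

read_image(&Image,"fabrik");
fast_threshold(Image,&Region,180.0,255.0,20);
connection(Region,&Connected);
disp_region(Connected,WindowHandle);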
Complexity
Let A be the area of the output region and height the height of Image. Then the runtime complexity is O(A + height/MinSize).


Result
fast_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
fast_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
histo_to_thresh, min_max_gray, sobel_amp, binomial_filter, gauss_image,
reduce_domain, fill_interlace
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
threshold, gen_grid_region, dilation_rectangle1, dyn_threshold
See also
class_2dim_sup, hysteresis_threshold
Module
Foundation

T_histo_to_thresh ( const Htuple Histogramm, const Htuple Sigma,
                    Htuple *MinThresh, Htuple *MaxThresh )

Determine gray value thresholds from a histogram.


histo_to_thresh determines gray value thresholds from a histogram for a segmentation of an image using
threshold. The thresholds returned are 0, the maximum gray value in the histogram, and all minima extracted
from the histogram. Before the thresholds are determined, the histogram is smoothed with a Gaussian smoothing
function.
histo_to_thresh can process the absolute and relative histograms that are returned by gray_histo. Note,
however, that here only byte images should be used, because otherwise the returned thresholds cannot easily be
transformed to the thresholds for the actual image. For images of type uint2, the histograms should be computed
with gray_histo_abs since this facilitates a simple transformation of the thresholds by simply multiplying the
thresholds with the quantization selected in gray_histo_abs. For uint2 images, it is important to ensure that
the quantization must be chosen in such a manner that the histogram still contains salient information. For example,
a 640 × 480 image with 16 bits per pixel gray value resolution contains on average only 307200/65536 = 4.7
entries per histogram bin, i.e., the histogram is too sparsely populated to derive any useful statistics from it. To
be able to extract useful thresholds from such a histogram, Sigma would have to be set to an extremely large
value, which would lead to very high run times and numerical problems. The quantization in gray_histo_abs
should therefore normally be chosen such that the histogram contains a maximum of 1024 entries. Hence, for
images with more than 10 bits per pixel, the quantization must be chosen greater than 1. The histogram returned
by gray_histo_abs should furthermore be restricted to the parts that contain salient information. For example,
for an image with 12 bits per pixel, the quantization should be set to 4. Only the first 1024 entries of the computed
histogram (which contains 16384 entries in this example) should be passed to histo_to_thresh. Finally,
MinThresh must be multiplied by 4 (i.e., the quantization), while MaxThresh must be multiplied by 4 and
increased by 3 (i.e., the quantization minus 1).
Parameter

. Histogramm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . Hlong / double


Gray value histogram.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma for the Gaussian smoothing of the histogram.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.5 ≤ Sigma ≤ 30.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.2

. MinThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *


Minimum thresholds.
. MaxThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Maximum thresholds.
Example (Syntax: HDevelop)

/* Calculate thresholds from a byte image and threshold the image. */


gray_histo (Image, Image, AbsoluteHisto, RelativeHisto)
histo_to_thresh (AbsoluteHisto, 4, MinThresh, MaxThresh)
threshold (Image, Region, MinThresh, MaxThresh)

/* Calculate thresholds from a 12 bit uint2 image and threshold the image. */
gray_histo_abs (Image, Image, 4, AbsoluteHisto)
AbsoluteHisto := AbsoluteHisto[0:1023]
histo_to_thresh (AbsoluteHisto, 16, MinThresh, MaxThresh)
MinThresh := MinThresh*4
MaxThresh := MaxThresh*4+3
threshold (Image, Region, MinThresh, MaxThresh)

Parallelization Information
histo_to_thresh is reentrant and processed without parallelization.
Possible Predecessors
gray_histo
Possible Successors
threshold
See also
auto_threshold, bin_threshold, char_threshold
Module
Foundation

threshold ( const Hobject Image, Hobject *Region, double MinGray,
            double MaxGray )

T_threshold ( const Hobject Image, Hobject *Region,
              const Htuple MinGray, const Htuple MaxGray )

Segment an image using global threshold.
threshold selects the pixels from the input image whose gray values g fulfill the following condition:

MinGray ≤ g ≤ MaxGray .

All points of an image fulfilling the condition are returned as one region. If more than one gray value interval is
passed (tuples for MinGray and MaxGray), one separate region is returned for each interval.
Parameter

. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / vector_field
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented region.
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Lower threshold for the gray values.
Default Value : 128.0
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}

. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Restriction : MaxGray ≥ MinGray
Example

read_image(&Image,"fabrik");
sobel_amp(Image,&EdgeAmp,"sum_abs",3);
threshold(EdgeAmp,&Seg,50.0,255.0);
skeleton(Seg,&Rand);
connection(Rand,&Lines);
select_shape(Lines,&Edges,"area","and",10.0,1000000.0);
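
As mentioned above, tuples for MinGray and MaxGray select several intervals at once. The following sketch is not
part of the original example; the interval limits are arbitrary and the tuple helper routines create_tuple, set_d, and
destroy_tuple of the HALCON/C interface are assumed:

/* Segment two gray value intervals in one call; one region per interval. */
Htuple MinGray, MaxGray;
create_tuple(&MinGray,2);
create_tuple(&MaxGray,2);
set_d(MinGray,0.0,0);      /* first interval:  0..63    */
set_d(MaxGray,63.0,0);
set_d(MinGray,128.0,1);    /* second interval: 128..255 */
set_d(MaxGray,255.0,1);
T_threshold(Image,&Regions,MinGray,MaxGray);
destroy_tuple(MinGray);
destroy_tuple(MaxGray);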

Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the input images
and output regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’,
and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
histo_to_thresh, min_max_gray, sobel_amp, binomial_filter, gauss_image,
reduce_domain, fill_interlace
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
class_2dim_sup, hysteresis_threshold, dyn_threshold, bin_threshold,
char_threshold, auto_threshold, dual_threshold
See also
zero_crossing, background_seg, regiongrowing
Module
Foundation

threshold_sub_pix ( const Hobject Image, Hobject *Border,
                    double Threshold )

T_threshold_sub_pix ( const Hobject Image, Hobject *Border,
                      const Htuple Threshold )

Extract level crossings from an image with subpixel accuracy.
threshold_sub_pix extracts the level crossings at the level Threshold of the input image Image with
subpixel accuracy. The extracted level crossings are returned as XLD-contours in Border. In contrast to the
operator threshold, threshold_sub_pix does not return regions, but the lines that separate regions with
a gray value less than Threshold from regions with a gray value greater than Threshold.
For the extraction, the input image is regarded as a surface, in which the gray values are interpolated bilinearly
between the centers of the individual pixels. Consistent with the surface thus defined, level crossing lines are
extracted for each pixel and linked into topologically sound contours. This means that the level crossing contours
are correctly split at junction points. If the image contains extended areas of constant gray value Threshold,
only the border of such areas is returned as level crossings.

Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Border (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted level crossings.
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Threshold for the level crossings.
Default Value : 128
Suggested values : Threshold ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Example

/* Detection of level crossings of the Laplacian-of-Gaussian of an image */
read_image(&Image,"fabrik");
derivate_gauss(Image,&Laplace,3,"laplace");
threshold_sub_pix(Laplace,&Border,35);
disp_xld(Border,WindowHandle);

Result
threshold_sub_pix usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
threshold_sub_pix is reentrant and processed without parallelization.
Alternatives
threshold
See also
zero_crossing_sub_pix
Module
2D Metrology

var_threshold ( const Hobject Image, Hobject *Region, Hlong MaskWidth,
                Hlong MaskHeight, double StdDevScale, double AbsThreshold,
                const char *LightDark )

T_var_threshold ( const Hobject Image, Hobject *Region,
                  const Htuple MaskWidth, const Htuple MaskHeight,
                  const Htuple StdDevScale, const Htuple AbsThreshold,
                  const Htuple LightDark )

Threshold an image by local mean and standard deviation analysis.
The operator var_threshold selects from the input image Image those regions Region in which the pixels
fulfill a threshold condition. The threshold is calculated from the mean gray value and the standard deviation in a
local window of size MaskWidth x MaskHeight around each pixel (x, y). If MaskWidth or MaskHeight
is even, the next larger odd value is used. The mask window should be greater than the image features to be
segmented and it should comprise at least three pixels.
Let g(x, y) be the gray value at position (x, y) in the input image Image and m(x, y) and d(x, y) the corresponding
mean and standard deviation of the gray values in the window around that pixel and
v(x, y) = max(StdDevScale ∗ d(x, y), AbsThreshold) for StdDevScale ≥ 0
or
v(x, y) = min(StdDevScale ∗ d(x, y), AbsThreshold) for StdDevScale < 0.
The standard deviation is used as a measure of noise in the image and scaled by StdDevScale to reflect the
desired sensitivity. The threshold condition is determined by the parameter LightDark:
LightDark = ’light’:
g(x, y) ≥ m(x, y) + v(x, y).
LightDark = ’dark’:

g(x, y) ≤ m(x, y) − v(x, y).
LightDark = ’equal’:
m(x, y) − v(x, y) ≤ g(x, y) ≤ m(x, y) + v(x, y).
LightDark = ’not_equal’:
g(x, y) < m(x, y) − v(x, y) ∨ g(x, y) > m(x, y) + v(x, y).
All pixels fulfilling the above condition are aggregated into the resulting region Region.
For the parameter StdDevScale values between −1.0 and 1.0 are sensible choices, with 0.2 as a suggested
value. If the parameter is too high or too low, an empty or full region may be returned. The parameter
AbsThreshold places an additional threshold on StdDevScale ∗ d(x, y). If StdDevScale ∗ d(x, y)
is below AbsThreshold for positive values of StdDevScale, or above it for negative values of StdDevScale,
AbsThreshold is taken instead.
Parameter

. Image (input_object) . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / int2 / int4 / uint2 / real


Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented regions.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Mask width for mean and deviation calculation.
Default Value : 15
Suggested values : MaskWidth ∈ {9, 11, 13, 15}
Restriction : MaskWidth ≥ 1
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Mask height for mean and deviation calculation.
Default Value : 15
Suggested values : MaskHeight ∈ {9, 11, 13, 15}
Restriction : MaskHeight ≥ 1
. StdDevScale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Factor for the standard deviation of the gray values.
Default Value : 0.2
Suggested values : StdDevScale ∈ {-0.2, -0.1, 0.1, 0.2}
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Minimum gray value difference from the mean.
Default Value : 2
Suggested values : AbsThreshold ∈ {-2, -1, 0, 1, 2}
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Threshold type.
Default Value : "dark"
List of values : LightDark ∈ {"dark", "light", "equal", "not_equal"}
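
A minimal usage sketch (not contained in the original manual; the image name and the parameter values are merely
illustrative assumptions) for segmenting dark print on an unevenly illuminated background:

read_image(&Image,"fabrik");
/* Select pixels darker than the local mean minus the scaled local deviation. */
var_threshold(Image,&Region,15,15,0.2,2,"dark");
connection(Region,&ConnectedRegions);
select_shape(ConnectedRegions,&Characters,"area","and",20.0,100000.0);
disp_region(Characters,WindowHandle);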
Complexity
Let A be the area of the input region, then the runtime is O(A).
Result
var_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
var_threshold is reentrant and automatically parallelized (on tuple level, domain level).
Alternatives
dyn_threshold, threshold
References
W.Niblack, ”An Introduction to Digital Image Processing”, Page 115-116, Englewood Cliffs, N.J., Prentice Hall,
1986
Module
Foundation

zero_crossing ( const Hobject Image, Hobject *RegionCrossing )

T_zero_crossing ( const Hobject Image, Hobject *RegionCrossing )

Extract zero crossings from an image.
zero_crossing returns the zero crossings of the input image as a region. A pixel is accepted as a zero crossing
if its gray value (in Image) is zero, or if at least one of its neighbors of the 4-neighborhood has a different sign.
This operator is intended to be used after edge operators returning the second derivative of the image (e.g.,
laplace_of_gauss), which were possibly followed by a smoothing operator. In this case, the zero cross-
ings are (candidates for) edges.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : int1 / int2 / int4 / real


Input image.
. RegionCrossing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Zero crossings.
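
A minimal sketch (not part of the original manual) of the typical use after a Laplace filter; the image name and the
filter parameters are assumptions:

/* Pixel-precise edge candidates from the zero crossings of a LoG image. */
read_image(&Image,"mreut");
derivate_gauss(Image,&Laplace,2.0,"laplace");
zero_crossing(Laplace,&RegionCrossing);
connection(RegionCrossing,&Edges);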
Result
zero_crossing usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
zero_crossing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
laplace, laplace_of_gauss, derivate_gauss
Possible Successors
connection, skeleton, boundary, select_shape, fill_up
Alternatives
threshold, dual_threshold, zero_crossing_sub_pix
Module
Foundation

zero_crossing_sub_pix ( const Hobject Image, Hobject *ZeroCrossings )

T_zero_crossing_sub_pix ( const Hobject Image,
                          Hobject *ZeroCrossings )

Extract zero crossings from an image with subpixel accuracy.
zero_crossing_sub_pix extracts the zero crossings of the input image Image with subpixel ac-
curacy. The extracted zero crossings are returned as XLD-contours in ZeroCrossings. Thus,
zero_crossing_sub_pix can be used as a sub-pixel precise edge extractor if the input image is a Laplace-
filtered image (see laplace, laplace_of_gauss, derivate_gauss).
For the extraction, the input image is regarded as a surface, in which the gray values are interpolated bilinearly
between the centers of the individual pixels. Consistent with the surface thus defined, zero crossing lines are
extracted for each pixel and linked into topologically sound contours. This means that the zero crossing contours
are correctly split at junction points. If the image contains extended areas of constant gray value 0, only the border
of such areas is returned as zero crossings.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : int1 / int2 / int4 / real


Input image.
. ZeroCrossings (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted zero crossings.
Example

/* Detection of zero crossings of the Laplacian-of-Gaussian of an aerial image */

read_image(&Image,"mreut");
derivate_gauss(Image,&Laplace,3,"laplace");
zero_crossing_sub_pix(Laplace,&ZeroCrossings);
disp_xld(ZeroCrossings,WindowHandle);

/* Detection of edges, i.e., zero crossings of the Laplacian-of-Gaussian
   that have a large gradient magnitude, in an aerial image */
read_image(&Image,"mreut");
Sigma = 1.5;
/* Compensate the threshold for the fact that derivate_gauss(...,’gradient’)
calculates a Gaussian-smoothed gradient, in which the edge amplitudes
are too small because of the Gaussian smoothing, to correspond to a true
edge amplitude of 20. */
Threshold = 20/(Sigma*sqrt(2*M_PI));
derivate_gauss(Image,&Gradient,Sigma,"gradient");
threshold(Gradient,&Region,Threshold,255);
reduce_domain(Image,Region,&ImageReduced);
derivate_gauss(ImageReduced,&Laplace,Sigma,"laplace");
zero_crossing_sub_pix(Laplace,&Edges);
disp_xld(Edges,WindowHandle);

Result
zero_crossing_sub_pix usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
zero_crossing_sub_pix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
laplace, laplace_of_gauss, diff_of_gauss, derivate_gauss
Alternatives
zero_crossing
See also
threshold_sub_pix
Module
2D Metrology

13.5 Topography

T_critical_points_sub_pix ( const Hobject Image, const Htuple Filter,
                            const Htuple Sigma, const Htuple Threshold, Htuple *RowMin,
                            Htuple *ColMin, Htuple *RowMax, Htuple *ColMax, Htuple *RowSaddle,
                            Htuple *ColSaddle )

Subpixel precise detection of critical points in an image.
critical_points_sub_pix extracts critical points, i.e., local maxima, local minima, and saddle points, from
the image Image with subpixel precision. To do so, in each point the input image is approximated by a quadratic
polynomial in x and y and subsequently the polynomial is examined for extremal values and saddle points. The
partial derivatives, which are necessary for setting up the polynomial, are calculated either with various Gaussian
derivatives or using the facet model, depending on Filter. In the first case, Sigma determines the size of the
Gaussian kernels, while in the second case, before being processed the input image is smoothed by a Gaussian
whose size is determined by Sigma. Therefore, ’facet’ results in a faster extraction at the expense of slightly less
accurate results. A point is accepted to be a critical point if the absolute values of both eigenvalues of the Hessian
matrix are greater than Threshold. The eigenvalues correspond to the curvature of the gray value surface. If
both eigenvalues are negative, the point is a local maximum, if both are positive, a local minimum, and if they have
different signs, a saddle point.

Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. RowMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected minima.
. ColMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected minima.
. RowMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected maxima.
. ColMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected maxima.
. RowSaddle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected saddle points.
. ColSaddle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected saddle points.
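
A usage sketch (not contained in the original manual): the parameter values are arbitrary, and the tuple helper
routines create_tuple, set_s, set_d, and destroy_tuple of the HALCON/C interface are assumed for filling the
control tuples:

Htuple Filter, Sigma, Threshold;
Htuple RowMin, ColMin, RowMax, ColMax, RowSaddle, ColSaddle;
read_image(&Image,"fabrik");
create_tuple(&Filter,1);     set_s(Filter,"facet",0);
create_tuple(&Sigma,1);      set_d(Sigma,1.5,0);
create_tuple(&Threshold,1);  set_d(Threshold,5.0,0);
T_critical_points_sub_pix(Image,Filter,Sigma,Threshold,
                          &RowMin,&ColMin,&RowMax,&ColMax,&RowSaddle,&ColSaddle);
/* RowMin/ColMin, RowMax/ColMax, and RowSaddle/ColSaddle now hold the */
/* subpixel coordinates of the minima, maxima, and saddle points.     */
destroy_tuple(Filter);
destroy_tuple(Sigma);
destroy_tuple(Threshold);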
Result
critical_points_sub_pix returns H_MSG_TRUE if all parameters are correct and no error oc-
curs during the execution. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
critical_points_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
local_min_sub_pix, local_max_sub_pix, saddle_points_sub_pix
See also
local_min, local_max, plateaus, plateaus_center, lowlands, lowlands_center
Module
Foundation

local_max ( const Hobject Image, Hobject *LocalMaxima )

T_local_max ( const Hobject Image, Hobject *LocalMaxima )

Detect all local maxima in an image.
local_max extracts all points from Image having a gray value larger than the gray value of all its
neighbors and returns them in LocalMaxima. The neighborhood used can be set by set_system
(’neighborhood’,<4/8>).

Parameter

. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. LocalMaxima (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Extracted local maxima as a region.
Number of elements : LocalMaxima = Image
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
local_max(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_get_region_points(Maxima,&Row,&Col);

Parallelization Information
local_max is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection
Alternatives
nonmax_suppression_amp, plateaus, plateaus_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

T_local_max_sub_pix ( const Hobject Image, const Htuple Filter,
                      const Htuple Sigma, const Htuple Threshold, Htuple *Row, Htuple *Col )

Subpixel precise detection of local maxima in an image.
local_max_sub_pix extracts local maxima from the image Image with subpixel precision. To do so, in each
point the input image is approximated by a quadratic polynomial in x and y and subsequently the polynomial
is examined for local maxima. The partial derivatives, which are necessary for setting up the polynomial, are
calculated either with various Gaussian derivatives or using the facet model, depending on Filter. In the first
case, Sigma determines the size of the Gaussian kernels, while in the second case, before being processed the
input image is smoothed by a Gaussian whose size is determined by Sigma. Therefore, ’facet’ results in a faster
extraction at the expense of slightly less accurate results. A point is accepted to be a local maximum if both
eigenvalues of the Hessian matrix are smaller than -Threshold. The eigenvalues correspond to the curvature of
the gray value surface.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0

. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected maxima.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected maxima.
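
By analogy with the local_max example, a brief sketch (not from the original manual; the tuple helper routines and
all parameter values are assumptions) for locating corner response peaks with subpixel accuracy:

Htuple Filter, Sigma, Threshold, Row, Col;
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
create_tuple(&Filter,1);     set_s(Filter,"facet",0);
create_tuple(&Sigma,1);      set_d(Sigma,1.0,0);
create_tuple(&Threshold,1);  set_d(Threshold,5.0,0);
T_local_max_sub_pix(CornerResp,Filter,Sigma,Threshold,&Row,&Col);
/* Row and Col contain the subpixel positions of the corner candidates. */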
Result
local_max_sub_pix returns H_MSG_TRUE if all parameters are correct and no error occurs during the exe-
cution. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
local_max_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
critical_points_sub_pix, local_min_sub_pix, saddle_points_sub_pix
See also
local_max, plateaus, plateaus_center
Module
Foundation

local_min ( const Hobject Image, Hobject *LocalMinima )

T_local_min ( const Hobject Image, Hobject *LocalMinima )

Detect all local minima in an image.
local_min extracts all points from Image having a gray value smaller than the gray value of all its
neighbors and returns them in LocalMinima. The neighborhood used can be set by set_system
(’neighborhood’,<4/8>).
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be processed.
. LocalMinima (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Extracted local minima as regions.
Number of elements : LocalMinima = Image
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
local_min(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_get_region_points(Minima,&Row,&Col);

Parallelization Information
local_min is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection

Alternatives
gray_skeleton, lowlands, lowlands_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

T_local_min_sub_pix ( const Hobject Image, const Htuple Filter,
                      const Htuple Sigma, const Htuple Threshold, Htuple *Row, Htuple *Col )

Subpixel precise detection of local minima in an image.
local_min_sub_pix extracts local minima from the image Image with subpixel precision. To do so, in each
point the input image is approximated by a quadratic polynomial in x and y and subsequently the polynomial
is examined for local minima. The partial derivatives, which are necessary for setting up the polynomial, are
calculated either with various Gaussian derivatives or using the facet model, depending on Filter. In the first
case, Sigma determines the size of the Gaussian kernels, while in the second case, before being processed the
input image is smoothed by a Gaussian whose size is determined by Sigma. Therefore, ’facet’ results in a faster
extraction at the expense of slightly less accurate results. A point is accepted to be a local minimum if both
eigenvalues of the Hessian matrix are greater than Threshold. The eigenvalues correspond to the curvature of
the gray value surface.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected minima.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected minima.
Result
local_min_sub_pix returns H_MSG_TRUE if all parameters are correct and no error occurs during the exe-
cution. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
local_min_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
critical_points_sub_pix, local_max_sub_pix, saddle_points_sub_pix
See also
local_min, lowlands, lowlands_center
Module
Foundation

lowlands ( const Hobject Image, Hobject *Lowlands )

T_lowlands ( const Hobject Image, Hobject *Lowlands )

Detect all gray value lowlands.
lowlands extracts all points from Image with a gray value less or equal to the gray value of its neighbors
(8-neighborhood) and returns them in Lowlands. Each lowland is returned as a separate region.
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be processed.
. Lowlands (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Extracted lowlands as regions (one region for each lowland).
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
lowlands(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_area_center(Minima,_,&Row,&Col);

Parallelization Information
lowlands is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands_center, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

lowlands_center ( const Hobject Image, Hobject *Lowlands )

T_lowlands_center ( const Hobject Image, Hobject *Lowlands )

Detect the centers of all gray value lowlands.
lowlands_center extracts all points from Image with a gray value less or equal to the gray value of its
neighbors (8-neighborhood) and returns them in Lowlands. If more than one of these points are connected
(lowland), their center of gravity is returned. Each lowland is returned as a separate region.
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be processed.
. Lowlands (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Centers of gravity of the extracted lowlands as regions (one region for each lowland).
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);

lowlands_center(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_area_center(Minima,_,&Row,&Col);

Parallelization Information
lowlands_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

plateaus ( const Hobject Image, Hobject *Plateaus )

T_plateaus ( const Hobject Image, Hobject *Plateaus )

Detect all gray value plateaus.
plateaus extracts all points from Image with a gray value greater or equal to the gray value of its neighbors
(8-neighborhood) and returns them in Plateaus. Each maximum is returned as a separate region.
Parameter

. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Extracted plateaus as regions (one region for each plateau).
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
plateaus(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_area_center(Maxima,_,&Row,&Col);

Parallelization Information
plateaus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus_center, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

plateaus_center ( const Hobject Image, Hobject *Plateaus )

T_plateaus_center ( const Hobject Image, Hobject *Plateaus )

Detect the centers of all gray value plateaus.
plateaus_center extracts all points from Image with a gray value greater or equal to the gray value of
its neighbors (8-neighborhood) and returns them in Plateaus. If more than one of these points are connected
(plateau), their center of gravity is returned. Each plateau center is returned as a separate region.
Parameter

. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Centers of gravity of the extracted plateaus as regions (one region for each plateau).
Example

read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
plateaus_center(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_area_center(Maxima,_,&Row,&Col);

Parallelization Information
plateaus_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation

pouring ( const Hobject Image, Hobject *Regions, const char *Mode,
          Hlong MinGray, Hlong MaxGray )

T_pouring ( const Hobject Image, Hobject *Regions, const Htuple Mode,
            const Htuple MinGray, const Htuple MaxGray )

Segment an image by “pouring water” over it.
pouring regards the input image as a “mountain range.” Larger gray values correspond to mountain peaks, while
smaller gray values correspond to valley bottoms. pouring segments the input image in several steps. First,
the local maxima are extracted, i.e., pixels which either alone or in the form of an extended plateau have larger
gray values than their immediate neighbors (in 4-neighborhood). In the next step, the maxima thus found are the
starting points for an expansion until “valley bottoms” are reached. The expansion is done as long as there are
chains of pixels in which the gray value becomes smaller (like water running downhill from the maxima in all
directions). Again, the 4-neighborhood is used, but with a weaker condition (smaller or equal). This means that
points at valley bottoms may belong to more than one maximum. These areas are at first not assigned to a region,
but rather are split among all competing segments in the last step. The split is done by a uniform expansion of
all involved segments, until all ambiguous pixels were assigned. The parameter Mode determines which steps are
executed. The following values are possible:

’all’ This is the normal mode of operation. All steps of the segmentation are performed. The regions are assigned
to maxima, and overlapping regions are split.
’maxima’ The segmentation only extracts the local maxima of the input image. No corresponding regions are
extracted.
’regions’ The segmentation extracts the local maxima of the input image and the corresponding regions, which
are uniquely determined. Areas that were assigned to more than one maximum are not split.

In order to prevent the algorithm from splitting a uniform background that is different from the rest of the image,
the parameters MinGray and MaxGray determine gray value thresholds for regions in the image that should
be regarded as background. All parts of the image having a gray value smaller than MinGray or larger than
MaxGray are disregarded for the extraction of the maxima as well as for the assignment of regions. For a complete
segmentation of the image, MinGray = 0 und MaxGray = 255 should be selected. MinGray < MaxGray must
be observed.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of operation.
Default Value : "all"
List of values : Mode ∈ {"all", "maxima", "regions"}
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
All gray values smaller than this threshold are disregarded.
Default Value : 0
Suggested values : MinGray ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}
Typical range of values : 0 ≤ MinGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : MinGray ≥ 0
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
All gray values larger than this threshold are disregarded.
Default Value : 255
Suggested values : MaxGray ∈ {100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240,
250, 255}
Typical range of values : 0 ≤ MaxGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (MaxGray ≤ 255) ∧ (MaxGray > MinGray)
Example

/* Segmentation of a filtered image */


read_image(&Image,"br2");
mean_image(Image,&Mean,11,11);
pouring(Mean,&Seg,"all",0,255);
disp_image(Mean,WindowHandle);
set_colored(WindowHandle,12);
disp_region(Seg,WindowHandle);

/* Segmentation of an image with masking of a dark background */


read_image(&Image,"hand");
mean_image(Image,&Mean,15,15);
pouring(Mean,&Seg,"all",40,255);
disp_image(Mean,WindowHandle);
set_colored(WindowHandle,12);
disp_region(Seg,WindowHandle);

/* Segmentation of a 2D-histogram */
read_image(&Image,"monkey");
texture_laws(Image,&Texture,"el",2,5);
disp_image(Image,WindowHandle);
draw_region(&Region,WindowHandle);
reduce_domain(Texture,Region,&Testreg);
histo_2dim(Testreg,Texture,Region,&Histo);
pouring(Histo,&Seg,"all",0,255);

Complexity
Let N be the number of pixels in the input image and M be the number of found segments, where the enclosing
rectangle of segment i contains m_i pixels. Furthermore, let K_i be the number of chords in segment i. Then the
runtime complexity is

O(3 ∗ N + sum_{i=1..M}(3 ∗ m_i) + sum_{i=1..M}(K_i)) .

Result
pouring usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
pouring is processed under mutual exclusion against itself and without parallelization.
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image
Alternatives
watersheds, local_max
See also
histo_2dim, expand_region, expand_gray, expand_gray_ref
Module
Foundation

T_saddle_points_sub_pix ( const Hobject Image, const Htuple Filter,
                          const Htuple Sigma, const Htuple Threshold, Htuple *Row, Htuple *Col )

Subpixel precise detection of saddle points in an image.
saddle_points_sub_pix extracts saddle points from the image Image with subpixel precision, i.e., points
where along one direction the image intensity is minimal while at the same time along a different direction the
image intensity is maximal. To do so, in each point the input image is approximated by a quadratic polynomial in x
and y and subsequently the polynomial is examined for saddle points. The partial derivatives, which are necessary
for setting up the polynomial, are calculated either with various Gaussian derivatives or using the facet model,
depending on Filter. In the first case, Sigma determines the size of the Gaussian kernels, while in the second
case, before being processed the input image is smoothed by a Gaussian whose size is determined by Sigma.
Therefore, ’facet’ results in a faster extraction at the expense of slightly less accurate results. A point is accepted
to be a saddle point if the absolute values of both eigenvalues of the Hessian matrix are greater than Threshold
but their signs differ. The eigenvalues correspond to the curvature of the gray value surface.
saddle_points_sub_pix is especially useful for the detection of corners, where fields of different intensity
join together like the black and white fields of a chess board. Their high contrast and shape facilitate the location
and the determination of the precise position of such corners.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}

. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected saddle points.
. Col (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected saddle points.
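
A sketch of the checkerboard-corner application mentioned above (not from the original manual; the image file name
is a placeholder, the parameter values are arbitrary, and the tuple helper routines of the HALCON/C interface are
assumed):

Htuple Filter, Sigma, Threshold, Row, Col;
read_image(&Image,"checkerboard");   /* placeholder file name */
create_tuple(&Filter,1);     set_s(Filter,"gauss",0);
create_tuple(&Sigma,1);      set_d(Sigma,1.5,0);
create_tuple(&Threshold,1);  set_d(Threshold,5.0,0);
T_saddle_points_sub_pix(Image,Filter,Sigma,Threshold,&Row,&Col);
/* Row and Col contain the subpixel positions of the corners where */
/* black and white fields meet.                                     */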
Result
saddle_points_sub_pix returns H_MSG_TRUE if all parameters are correct and no error oc-
curs during the execution. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
saddle_points_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
critical_points_sub_pix, local_min_sub_pix, local_max_sub_pix
Module
Foundation

watersheds ( const Hobject Image, Hobject *Basins, Hobject *Watersheds )

T_watersheds ( const Hobject Image, Hobject *Basins,
               Hobject *Watersheds )

Extract watersheds and basins from an image.
watersheds segments an image based on the topology of the gray values. The image is interpreted as a “moun-
tain range.” Higher gray values correspond to “mountains,” while lower gray values correspond to “valleys.” In the
resulting mountain range watersheds are extracted. These correspond to the bright ridges between dark basins. On
output, the parameter Basins contains these basins, while Watersheds contains the watersheds, which are at
most one pixel wide. Watersheds is always a single region per input image, while Basins contains a separate
region for each basin.
It is advisable to apply a smoothing operator (e.g., binomial_filter or gauss_image) to the in-
put image before calling watersheds in order to reduce the number of output regions. A more sophisti-
cated way to reduce the number of output regions is to merge neighboring basins based on a threshold cri-
terion by using watersheds_threshold instead (for more details please refer to the documentation of
watersheds_threshold).
Attention
If the image contains many fine structures or is noisy, many output regions result, and thus the runtime increases
considerably.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real


Input image.
. Basins (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented basins.
. Watersheds (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Watersheds between the basins.

Example

read_image(&Cells,"meningg5");
gauss_image(Cells,&CellsGauss,9);
invert_image(CellsGauss,&CellsInvert);
watersheds(CellsInvert,&Bassins,&Watersheds);
set_colored(WindowHandle,12);
disp_region(Bassins,WindowHandle);

Result
watersheds always returns H_MSG_TRUE. The behavior with respect to the input images and output re-
gions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and
’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
watersheds is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, invert_image
Possible Successors
expand_region, select_shape, reduce_domain, opening
Alternatives
watersheds_threshold, pouring
References
L. Vincent, P. Soille: “Watersheds in Digital Space: An Efficient Algorithm Based on Immersion Simulations”;
IEEE Transactions on Pattern Analysis and Machine Intelligence; vol. 13, no. 6; pp. 583-598; 1991.
Module
Foundation

watersheds_threshold ( const Hobject Image, Hobject *Basins,
                       Hlong Threshold )

T_watersheds_threshold ( const Hobject Image, Hobject *Basins,
                         const Htuple Threshold )

Extract watershed basins from an image using a threshold.
The operator watersheds_threshold segments regions (basins) that are separated from each other by a
watershed that has a height of at least Threshold.
In the first step, watersheds_threshold computes the watersheds without applying a threshold, resulting in
the same basins that would be obtained when calling watersheds (for more details please refer to the description
of watersheds). In the second step, the basins are successively merged if they are separated by a watershed
that is smaller than Threshold. Let B1 and B2 be the minimum gray values of two neighboring basins and W
the minimum gray value of the watershed that separates the two basins. The watershed is eliminated and the two
basins are merged if
max{W − B1 , W − B2 } < Threshold.

The thus obtained basins are returned in Basins.


If Threshold is set to 0, watersheds_threshold is comparable to watersheds except that no water-
sheds but only expanded basins are returned. If Threshold is set to the maximum gray value range of Image
then no two basins are separated by a watershed exceeding Threshold, and hence, Basins will contain only
one region.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / real
Image to be segmented.
. Basins (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segments found (dark basins).

. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong / double
Threshold for the watersheds.
Default Value : 10
Suggested values : Threshold ∈ {0, 5, 10, 20, 30, 50}
Restriction : Threshold ≥ 0
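
A sketch adapted from the watersheds example above (the threshold value 30 is an arbitrary assumption):

read_image(&Cells,"meningg5");
gauss_image(Cells,&CellsGauss,9);
invert_image(CellsGauss,&CellsInvert);
/* Merge neighboring basins separated by watersheds lower than 30 gray values. */
watersheds_threshold(CellsInvert,&Basins,30);
set_colored(WindowHandle,12);
disp_region(Basins,WindowHandle);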
Result
watersheds_threshold always returns H_MSG_TRUE. The behavior with respect to the input image and output
regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and
’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
watersheds_threshold is reentrant and processed without parallelization.
Possible Predecessors
binomial_filter, gauss_image, smooth_image, invert_image
Possible Successors
expand_region, select_shape, reduce_domain, opening
Alternatives
watersheds, pouring
Module
Foundation

Chapter 14

System
14.1 Database
count_relation ( const char *RelationName, Hlong *NumOfTuples )
T_count_relation ( const Htuple RelationName, Htuple *NumOfTuples )

Number of entries in the HALCON database.


The operator count_relation counts the number of entries in one of the four relations of the HALCON
database. The HALCON database is organized as follows:
There are two basic relations for region-data and image-matrices. The HALCON objects region and image are
constructed from elements from these two relations: a region consists of a pointer to a tuple in the region-data
relation. An image consists also of a pointer to a tuple in the region-data relation (like a region) and additionally of
one or more pointers to tuples in the matrix relation. If there is more than one matrix pointer, the image is called a
multi-channel image.
Both regions and images are called objects. A region can be considered as the special case of an iconic object
having no image matrices. For reasons of efficient memory management, the tuples of the region-data relation
and the image-matrix relation are shared by different objects. Therefore, there may be, for example, more
images than image matrices. Only the two low-level relations are relevant for the memory consumption. Image
objects (regions as well as images) consist only of references to region and matrix data and therefore need only a
couple of bytes of memory.
Possible values for RelationName:

’image’: Image matrices. One matrix may also be the component of more than one image (no redundant storage).
’region’: Regions (the full and the empty region are always available). One region may of course also be the
component of more than one image object (no redundant storage).
’XLD’: eXtended Line Description: Contours, Polygons, parallels, lines, etc. XLD data types don’t have gray
values and are stored with subpixel accuracy.
’object’: Iconic objects. Composed of a region (called region) and optionally image matrices (called image).
’tuple’: In the compact mode, tuples of iconic objects are stored as a surrogate in this relation. Instead of working
with the individual object keys, only this tuple key is used. It depends on the host language, whether the
objects are passed individually (Prolog and C++) or as tuples (C, Smalltalk, Lisp, OPS-5).

Certain database objects will be created already by the operator reset_obj_db and therefore have to be avail-
able all the time (the undefined gray value component, the objects ’full’ (FULL_REGION in HALCON/C) and
’empty’ (EMPTY_REGION in HALCON/C) as well as the herein included empty and full region). By calling
get_channel_info, the operator therefore appears correspondingly also as ’creator’ of the full and empty
region. The procedure can be used for example to check the completeness of the clear_obj operation.

Parameter
. RelationName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Relation of interest of the HALCON database.
Default Value : "object"
List of values : RelationName ∈ {"image", "region", "XLD", "object", "tuple"}
. NumOfTuples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of tuples in the relation.
Example

reset_obj_db(512,512,3) ;
count_relation("image",&I1) ;
count_relation("region",&R1) ;
count_relation("XLD",&X1) ;
count_relation("object",&O1) ;
count_relation("tuple",&T1) ;
read_image(&X,"monkey") ;
count_relation("image",&I2) ;
count_relation("region",&R2) ;
count_relation("XLD",&X2) ;
count_relation("object",&O2) ;
count_relation("tuple",&T2) ;

/*
Result: I1 = 1 (undefined image)
R1 = 2 (full and empty region)
X1 = 0 (no XLD data)
O1 = 2 (full and empty objects)
T1 = 0 (always 0 in the normal mode)

I2 = 2 (additionally the image ’monkey’)


R2 = 2 (read_image uses the full region)
X2 = 0 (no XLD data)
O2 = 3 (additionally the image object X)
T2 = 0.
*/

Result
If the parameter is correct, the operator count_relation returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
count_relation is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
clear_obj
Module
Foundation

T_get_modules ( Htuple *UsedModules, Htuple *ModuleKey )

Query of used modules and the module key.


get_modules returns the module numbers of all operators used up to this point. Each operator belongs to
one module (maximum 32). Each module has a name, which is returned in UsedModules. Based on the used
modules, a key is generated that is needed for the licence manager. get_modules is normally called at the end
of a program to check the used modules.

Parameter

. UsedModules (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *


Names of used modules.
. ModuleKey (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Key for licence manager.
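
A minimal sketch (not part of the original manual; destroying the returned tuples with destroy_tuple is assumed, as
for other tuple results):

Htuple UsedModules, ModuleKey;
/* ... the actual image processing of the application ... */
T_get_modules(&UsedModules,&ModuleKey);
/* UsedModules lists the names of the modules used so far; ModuleKey */
/* holds the key for the licence manager.                            */
destroy_tuple(UsedModules);
destroy_tuple(ModuleKey);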
Parallelization Information
get_modules is reentrant and processed without parallelization.
Module
Foundation

reset_obj_db ( Hlong DefaultImageWidth, Hlong DefaultImageHeight,
               Hlong DefaultChannels )

T_reset_obj_db ( const Htuple DefaultImageWidth,
                 const Htuple DefaultImageHeight, const Htuple DefaultChannels )

Initialization of the HALCON system.
The operator reset_obj_db initializes the HALCON system. With this procedure the four relations (gray value
data, region data, iconic objects, and object tuples) which are necessary for image processing with HALCON will
be installed (see also count_relation). In case the relations already exist, all tuples in the relations will be
deallocated!
The parameters DefaultImageWidth and DefaultImageHeight provide the initial values for the global
maximum image size. If the first created object is an image, (e.g. read_image), the set values will be overruled
in Standard-HALCON by the size of this picture. Instead of this, in Parallel HALCON the set values will only be
changed, if they are smaller than the size of the created object. If on the other hand the first object to be created
is a region, both in Standard- and in Parallel HALCON the values will only be adjusted in case the new image is
larger than the set values. This is not only the case for the first image which is created or read: the global image
size will always be enlarged, if larger images are created.
The global image size is relevant for the opening of windows ( open_window) and the clipping of regions.
Whenever the clip mode is activated ( set_system(’clip_region’,’true’)), regions will be clipped
according to the global image size. This can lead to problems if images of various sizes are used. In this case it
can only be guaranteed that a region is smaller than or of the same size as the largest image.
The parameter DefaultChannels specifies the usual number of channels of an image object. This value
can be set to 0 if mostly regions are used. If more channels than those set at the initialization are necessary
for one image, the number will be enlarged dynamically for this image. If fewer channels than those set at the
initialization are necessary for the image, the superfluous channels will be set as undefined. For the user they will
seem to be non-existent; however, memory is allocated unnecessarily.
The parameter values can be queried using the operator get_system.
Attention
If the operator reset_obj_db is not called at the beginning of a HALCON session, HALCON will be initialized
automatically by the operator reset_obj_db(128,128,0). In case the operator reset_obj_db is called
again, all image objects in the database will be deallocated.
Parameter

. DefaultImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Default image width (in pixels).
Default Value : 128
Suggested values : DefaultImageWidth ∈ {64, 128, 256, 512, 525, 1024}
. DefaultImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Default image height (in pixels).
Default Value : 128
Suggested values : DefaultImageHeight ∈ {64, 128, 256, 512, 768, 1024}


. DefaultChannels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Usual number of channels by which the system constant ’max_channels’ is limited.
Default Value : 0
Suggested values : DefaultChannels ∈ {0, 1, 2, 3, 4, 5, 6, 7}
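Example

A minimal initialization sketch; the image size 640 x 480 and the three channels are arbitrary example values for an RGB application.

/* initialize HALCON for color images of size 640 x 480 */
reset_obj_db(640,480,3);
/* ... read or create images and process them ... */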
Result
The operator reset_obj_db returns the value H_MSG_TRUE if the parameter values are correct. Otherwise
an exception will be raised.
Parallelization Information
reset_obj_db is reentrant and processed without parallelization.
See also
get_channel_info, count_relation
Module
Foundation

14.2 Error-Handling

T_get_check ( Htuple *Check )

State of the HALCON control modes.


By executing the operator get_check the user can query which control modes are currently activated and
which are not. Check returns a tuple containing the names of the control modes (see also set_check); a name
is preceded by a tilde (˜, e.g. ’˜data’) if the corresponding control mode is deactivated.
Parameter

. Check (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *


Tuple of the currently activated control modes.
Result
get_check always returns the value H_MSG_TRUE.
Parallelization Information
get_check is reentrant and processed without parallelization.
Possible Predecessors
set_check
See also
set_check
Module
Foundation

get_error_text ( Hlong ErrorNumber, char *ErrorText )


T_get_error_text ( const Htuple ErrorNumber, Htuple *ErrorText )

Inquiry after the error text of a HALCON error number.


The operator get_error_text returns the error text for the given HALCON error number. This is
the same text which will be output when an exception is raised. The operator get_error_text is especially
useful if the error handling is programmed by the users themselves (see also set_check(’˜give_error’)).
Attention
Unknown error numbers will trigger a standard message.


Parameter

. ErrorNumber (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Number of the HALCON error.
Restriction : (1 ≤ ErrorNumber) ∧ (ErrorNumber ≤ 36000)
. ErrorText (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Corresponding error text.
Example

Herror err;
char message[MAX_STRING];

set_check("~give_error");
err = send_region(region,socket_id);
set_check("give_error");
if (err != H_MSG_TRUE) {
get_error_text((long)err,message);
fprintf(stderr,"my error message: %s\n",message);
exit(1);
}

Result
The operator get_error_text always returns the value H_MSG_TRUE.
Parallelization Information
get_error_text is reentrant and processed without parallelization.
Possible Predecessors
set_check
See also
set_check
Module
Foundation

get_spy ( const char *Class, char *Value )


T_get_spy ( const Htuple Class, Htuple *Value )

Current configuration of the HALCON debugging-tool.


The operator get_spy returns the current configuration of spy, the HALCON debugging tool. The available
control modes (possible choices for Class) as well as the corresponding tuning possibilities (possible values for
Value) can be called up by using the operator query_spy. You will find a more detailed description under
set_spy.
Parameter

. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Control mode
Default Value : "mode"
List of values : Class ∈ {"mode", "procedure", "input_control", "output_control", "parameter_values",
"db", "input_gray_window", "input_region_window", "halt", "timeout",
"button_window", "button_click", "button_notify", "log_file", "error", "internal"}
. Value (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char * / Hlong * / double *
State of the control mode.
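Example

A minimal sketch that queries the state of one control mode in simple mode; MAX_STRING is assumed to be an application-defined buffer size (as in the get_error_text example).

char value[MAX_STRING];

get_spy("mode",value);
printf("spy mode: %s\n",value);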
Result
The operator get_spy returns the value H_MSG_TRUE if the parameter Class is correct. Otherwise an
exception is raised.


Parallelization Information
get_spy is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
set_spy, query_spy
Module
Foundation

T_query_spy ( Htuple *Classes, Htuple *Values )

Inquiring for possible settings of the HALCON debugging tool.


The operator query_spy returns all possible settings of spy, the HALCON debugging tool, i.e. all the available
control modes (Classes) as well as the corresponding possible ways of setting (Values). For a more detailed
description of spy see operator set_spy.
Attention
The values of Values cannot be used as direct input for set_spy, as they are transmitted as a symbolic constant.
Parameter

. Classes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *


Available control modes (see also set_spy).
. Values (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Corresponding state of the control modes.
Result
query_spy always returns the value H_MSG_TRUE.
Parallelization Information
query_spy is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
set_spy, get_spy
Module
Foundation

set_check ( const char *Check )


T_set_check ( const Htuple Check )

Activating and deactivating of HALCON control modes.


With the help of the operator set_check different control modes of the HALCON system can be activated or
deactivated. If a certain control mode is activated, parameters etc. will be checked at runtime. Whenever an
inconsistency is detected, the program will be interrupted by an exception.
It is recommended to activate the control modes during the development of a program and to deactivate them
only after a successfully concluded test run, because if a control mode is deactivated and an error occurs, the
system may react in an unpredictable way.
The HALCON system provides various control modes which can be activated and deactivated independently. By
calling the operator set_check with the name (Check) of the desired control mode, this control mode is
activated; the control mode is deactivated by passing its name prefixed with a tilde (˜, e.g. ’˜data’).

Available control modes:


’color’: If this control mode is activated, only colors may be used which are supported by the display for the
currently active window. Otherwise an error message is displayed.
In case of deactivated control mode and non existent colors, the nearest color is used (see also set_color,
set_gray, set_rgb).
’text’: If this control mode is activated, the coordinates are checked when the text cursor is set as well
as when strings are displayed ( write_string), to determine whether a part of a character would lie outside
the window frame (which is not forbidden in principle by the system).
If the control mode is deactivated, the text will be clipped at the window frame.
’data’: (For program development)
Checks the consistency of image objects (regions and gray value components).
’interface’: If this control mode is activated, the interface between the host language and the HALCON proce-
dures will be checked (e.g. type checking and counting of the values).
’database’: This is a consistency check of the database (e.g. it checks whether an object which is to be deleted
does indeed exist).
’give_error’: Determines whether errors shall trigger exceptions or not. If this control mode is deactivated,
the application program must provide a suitable error treatment itself. Please note that errors which are
not reported usually lead to undefined output parameters which may cause an unpredictable reaction of the
program. Details about how to handle exceptions in the different HALCON language interfaces can be found
in the HALCON Programmer’s Guide and the HDevelop User’s Guide.
’father’: If this control mode is activated when calling the operators open_window or open_textwindow,
HALCON allows only the usage of the number of another HALCON window as the father window of the
new window; otherwise it allows also the usage of IDs of operating system windows as the father window.
This control mode is only relevant for windows of type ’X-Window’ and ’WIN32-Window’.
’region’: (For program development)
Checks the consistency of chords (this may lead to a notable speed reduction of routines).
’clear’: Normally, if a list of objects is to be deleted using clear_obj, an exception will be raised in case
individual objects do not or no longer exist. If the ’clear’ mode is activated, such objects will be ignored.
’memory’: (For program development)
Checks the memory blocks freed by the HALCON memory management for consistency and overwriting of
memory bounds.
’all’: Activates all control modes.
’none’: Deactivates all control modes.
’default’: Default settings: [’give_error’,’database’]
Parameter
. Check (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Desired control mode.
Default Value : "default"
List of values : Check ∈ {"color", "text", "database", "data", "interface", "give_error", "father", "region",
"clear", "memory", "all", "none", "default"}
Result
The operator set_check returns the value H_MSG_TRUE, if the parameters are correct. Otherwise an exception
will be raised.
Parallelization Information
set_check is reentrant and processed without parallelization.
See also
get_check, set_color, set_rgb, set_hsi, write_string
Module
Foundation

set_spy ( const char *Class, const char *Value )


T_set_spy ( const Htuple Class, const Htuple Value )

Control of the HALCON Debugging Tools.


The operator set_spy controls spy, the HALCON debugging tool. This tool allows flexible monitoring of the
input and output data of HALCON operators - in graphical as well as in textual form. The monitoring is activated
by calling

set_spy(’mode’,’on’),
and deactivated by calling

set_spy(’mode’,’off’).
The debugging tool can also be activated with the help of the environment variable HALCONSPY. Defining this
variable corresponds to calling set_spy(’mode’,’on’).
The following control modes can be tuned (in any desired combination of course) with the help of Class/Value:

Class Meaning / Value

’operator’ When a routine is called, its name and the names of its parameters will be given (in TRIAS notation).
Value: ’on’ or ’off’
default: ’off’
’input_control’ When a routine is called, the names and values of the input control parameters will be given.
Value: ’on’ or ’off’
default: ’off’
’output_control’ When a routine is called, the names and values of the output control parameters are given.
Value: ’on’ or ’off’
default: ’off’
’parameter_values’ Additional information on ’input_control’ and ’output_control’: indicates how many values
per parameter shall be displayed at most (maximum tuplet length of the output).
Value: tuplet length (integer)
default: 4
’db’ Information concerning the 4 relations in the HALCON-database. This is especially valuable in looking for
forgotten clear_obj.
Value: ’on’ or ’off’
default: ’off’
’input_gray_window’ Any reading access of the gray-value component of an (input) image object will cause the
gray-value component to be shown in the indicated window (Window-ID; ’none’ will deactivate this
control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_region_window’ Any reading access of the region of an (input) iconic object will cause this region to be
shown in the indicated window (Window-ID; ’none’ will deactivate this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_xld_window’ Any reading access of an XLD object will cause this XLD to be shown in the indicated window
(Window-ID; ’none’ will deactivate this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’time’ Processing time of the operator
Value: ’on’ or ’off’
default: ’off’
’halt’ Determines whether there is a halt after every individual action (’multiple’) or only at the end of each oper-
ator (’single’). The parameter is only effective if the halt has been activated by ’timeout’ or ’button_window’.
Value: ’single’ or ’multiple’
default: ’multiple’
’timeout’ After every output there will be a halt of the indicated number of seconds.
Value: seconds (real)
default: 0.0


’button_window’ Alternative to ’timeout’: after every output spy waits until the mouse cursor points into
(’button_click’ = ’false’) or clicks into (’button_click’ = ’true’) the indicated window (Window-ID; ’none’
will deactivate this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’button_click’ Additional option for ’button_window’: determines whether or not a mouse-click has to be waited
for after an output.
Value: ’on’ or ’off’
default: ’off’
’button_notify’ If ’button_notify’ is activated, spy generates a beep after every output. This is useful in combi-
nation with ’button_window’.
Value: ’on’ or ’off’
default: ’off’
’log_file’ With this option spy can divert the text output into a file that has been opened with open_file.
Value: a file handle (see open_file)
’error’ If ’error’ is activated and an internal error occurs, spy will show the internal procedures (file/line) con-
cerned.
Value: ’on’ or ’off’
default: ’off’
’internal’ If ’internal’ is activated, spy will display the internal procedures and their parameters (file/line) while
a HALCON operator is processed.
Value: ’on’ or ’off’
default: ’off’
Parameter
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Control mode
Default Value : "mode"
List of values : Class ∈ {"mode", "operator", "input_control", "output_control", "parameter_values",
"input_gray_window", "input_region_window", "input_xld_window", "db", "time", "halt", "timeout",
"button_window", "button_click", "button_notify", "log_file", "error", "internal"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
State of the control mode to be set.
Default Value : "on"
Suggested values : Value ∈ {"on", "off", 1, 2, 3, 4, 5, 10, 50, 0.0, 1.0, 2.0, 5.0, 10.0}
Example

/* init spy: Setting of the wished control modi */


set_spy("mode","on");
set_spy("operator","on");
set_spy("input_control","on");
set_spy("output_control","on");
/* calling of program section, that will be examined */
set_spy("mode","off");

Result
The operator set_spy returns the value H_MSG_TRUE if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
set_spy is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
See also
get_spy, query_spy
Module
Foundation


14.3 Information
T_get_chapter_info ( const Htuple Chapter, Htuple *Info )

Get information concerning the chapters on procedures.


The operator get_chapter_info gives information concerning the chapters on procedures. If instead of
Chapter the empty string is transmitted, the routine will provide in Info the names of all chapters. If on the
other hand a certain chapter or a chapter and its subchapter(s) are indicated (by a tuple of names), the corresponding
subchapters or - in case there are no further subchapters - the names of the corresponding procedures will be given.
The organization of the chapters on procedures is the same as the organization of chapters and subchapters in the
HALCON manual. Please note: The chapter respectively the subchapters to which an individual procedure
belongs can be queried using the operator get_operator_info(<Name>,’chapter’,Info). The
online texts will be taken from the files english.hlp, english.sta, english.num, english.key, and english.idx, which
will be searched by HALCON in the currently used directory or in the directory ’help_dir’ (see also get_system
and set_system).
Parameter

. Chapter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *


Procedure class or subclass of interest.
Default Value : ""
. Info (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Procedure classes (Chapter = ’’) or procedure subclasses respectively procedures.
Result
If the parameter values are correct and the helpfile is available, the operator get_chapter_info returns the
value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
get_chapter_info is processed completely exclusively without parallelization.
Possible Predecessors
get_system, set_system
See also
get_operator_info, get_system, set_system
Module
Foundation

T_get_keywords ( const Htuple ProcName, Htuple *Keywords )

Get keywords which are assigned to procedures.


The operator get_keywords returns all the keywords in the online-texts corresponding to those procedures
which have the indicated substring ProcName in their name. If instead of ProcName the empty string is trans-
mitted, the operator get_keywords returns all keywords. The keywords of an individual procedure can also be
called by using the operator get_operator_info. The online-texts will be taken from the files english.hlp,
english.sta, english.num, english.key and english.idx, which are searched by HALCON in the currently used direc-
tory and in the directory ’help_dir’ (see also get_system and set_system).
Parameter

. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; Htuple . const char *


Substring in the names of those procedures for which keywords are needed.
Default Value : "get_keywords"
. Keywords (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Keywords for the procedures.
Result
The operator get_keywords returns the value H_MSG_TRUE if the parameters are correct and the helpfiles
are available. Otherwise an exception handling is raised.


Parallelization Information
get_keywords is processed completely exclusively without parallelization.
Possible Predecessors
get_chapter_info
Alternatives
get_operator_info
See also
get_operator_name, search_operator, get_param_info
Module
Foundation

get_operator_info ( const char *ProcName, const char *Slot,


char *Information )

T_get_operator_info ( const Htuple ProcName, const Htuple Slot,


Htuple *Information )

Get information concerning a HALCON-procedure.


With the help of the operator get_operator_info the online-texts concerning a certain procedure can be
called (see also get_operator_name). The form of information available for all procedures (Slot) can be
called using the operator query_operator_info. For the time being the following slots are available:

’short’: Short description of the procedure.


’abstract’: Description of the procedure.
’procedure_class’: Name(s) of the chapter(s) in the procedure hierarchy (chapter, subchapter in the HALCON
manual).
’functionality’: Functionality is equivalent to the object class to which the procedure can be assigned.
’keywords’: Keywords of the procedure (optional).
’example’: Example for the use of the procedure (optional). The slot ’example.LANGUAGE’ (LANGUAGE
∈ {c,c++,smalltalk,trias}) calls up the example for a certain language if available. If the language is not indi-
cated or if no example is available in this language, the TRIAS example will be returned.
’complexity’: Complexity of the procedure (optional).
’effect’: Not in use so far.
’alternatives’: Alternative procedures (optional).
’see_also’: Procedures containing further information (optional).
’predecessor’: Possible and sensible predecessor
’successor’: Possible and sensible successor
’result_state’: Return value of the procedure (TRUE, FALSE, FAIL, VOID or EXCEPTION).
’attention’: Restrictions and advice concerning the correct use of the procedure (optional).
’parameter’: Names of the parameters of the procedure (see also get_param_info).
’references’: Literary references (optional).
’module’: The module to which the operator is assigned.
’html_path’: The directory where the HTML documentation of the operator resides.
’warning’: Possible warnings for using the operator.

The texts will be taken from the files english.hlp, english.sta, english.key, english.num and english.idx which
will be searched by HALCON in the currently used directory or in the directory ’help_dir’ (respectively
’user_help_dir’) (see also get_system and set_system). By adding ’.latex’ after the slotname, the text
of slots containing textual information can be made available in LATEX notation.


Parameter

. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; (Htuple .) const char *


Name of the operator on which more information is needed.
Default Value : "get_operator_info"
. Slot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Desired information.
Default Value : "abstract"
List of values : Slot ∈ {"short", "abstract", "procedure_class", "functionality", "effect", "complexity",
"predecessor", "successor", "alternatives", "see_also", "keywords", "example", "attention", "result_state",
"return_value", "references", "module", "html_path", "warning"}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Information (empty if no information is available)
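Example

A minimal sketch in simple mode that fetches the short description of an operator; MAX_STRING is assumed to be an application-defined buffer size.

char info[MAX_STRING];

get_operator_info("get_operator_info","short",info);
printf("short description: %s\n",info);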
Result
The operator get_operator_info returns the value H_MSG_TRUE if the parameters are correct and the
helpfiles are availabe. Otherwise an exception handling is raised.
Parallelization Information
get_operator_info is processed completely exclusively without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, query_operator_info,
query_param_info, get_param_info
Possible Successors
get_param_names, get_param_num, get_param_types
Alternatives
get_param_names
See also
query_operator_info, get_param_info, get_operator_name, get_param_num,
get_param_types
Module
Foundation

T_get_operator_name ( const Htuple Pattern, Htuple *ProcNames )

Get procedures with the given string as a substring of their name.


The operator get_operator_name takes a string (Pattern) as input and searches all HALCON-procedures
having this string as a substring in their name. If an empty string is entered, the names of all procedures available
will be returned.
Parameter

. Pattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Substring of the sought names (empty <=> all names).
Default Value : "info"
. ProcNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Detected procedure names.
Result
The operator get_operator_name returns the value H_MSG_TRUE if the helpfiles are available. Otherwise
an exception handling is raised.
Parallelization Information
get_operator_name is reentrant and processed without parallelization.
Possible Successors
get_operator_info, get_param_names, get_param_num, get_param_types
Alternatives
search_operator


See also
get_operator_info, get_param_names, get_param_num, get_param_types
Module
Foundation

get_param_info ( const char *ProcName, const char *ParamName,


const char *Slot, char *Information )

T_get_param_info ( const Htuple ProcName, const Htuple ParamName,


const Htuple Slot, Htuple *Information )

Get information concerning the procedure parameters.


The operator get_param_info is used for calling up the online-texts assigned to a parameter of an indicated
procedure. The form of information available for each parameter (Slot), can be called up by using the operator
query_param_info. At the moment the following slots are available:

’description’: Description of the parameter.


’description.latex’: Description of the parameter in LATEX notation.
’parameter_class’: Parameter classes: ’input_object’, ’output_object’, ’input_control’ or ’output_control’.
’type_list’: Permitted type(s) of data for parameter values. Values: ’real’, ’integer’ or ’string’ (for control parame-
ters), ’byte’, ’direction’, ’cyclic’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’complex’, ’vector_field’ (for images).
’default_type’: Default type for parameter values (for control parameters only). This is the type that HALCON/C
uses in the ’simple mode’. If ’none’ is indicated, the ’tuple mode’ must be used. Value:
’real’, ’integer’, ’string’ or ’none’.
’sem_type’: Semantic type of the parameter. This is important to allow the assignment of the parameters to object
classes in object-oriented languages (C++, .NET, COM). If more than one parameter belongs semantically to
one type, this fact is indicated as well. So far the following objects are supported:
object, image, region, xld,
xld_cont, xld_para, xld_poly, xld_ext_para, xld_mod_para,
integer, real, number, string,
channel, grayval, window,
histogram, distribution,
point(.x, .y), extent(.x, .y),
angle(.rad or .deg),
circle(.center.x, .center.y, .radius),
arc(.center.x, .center.y, .angle.rad, .begin.x, .begin.y),
ellipse(.center.x, .center.y, .angle.rad, .radius1, .radius2),
line(.begin.x, .begin.y, .end.x, .end.y)
rectangle(.origin.x, .origin.y, .corner.x, .corner.y
or .extent.x, .extent.y),
polygon(.x, .y), contour(.x, .y),
coordinates(.x, .y), chord(.x1, .x2, .y),
chain(.begin.x, .begin.y, .code).
’default_value’: Default value for the parameter (for input control parameters only). It is provided for information
only (the parameter value must be passed explicitly, even if the default value is used). This entry serves merely
as a hint and a starting point for own experiments. The values have been selected so that they normally do not
cause any errors but generate something that makes sense.
’multi_value’: ’true’, if more than one value is permitted in this parameter position, otherwise ’false’.
’multichannel’: ’true’, in case the input image object may be multichannel.
’mixed_type’: For control parameters only, and only if value tuples (’multi_value’ = ’true’) and various types
of data are permitted for the parameter values (’type_list’ having more than one value). In this case this slot
indicates whether values of various types may be mixed in one tuple (’true’ or ’false’).
’values’: Selection of values (optional).
’value_list’: In case a parameter can take only a limited number of values, this fact will be indicated explicitly
(optional).


’valuemin’: Minimum value of a value interval.


’valuemax’: Maximum value of a value interval.
’valuefunction’: Function describing the course of the values for a series of tests (lin, log, quadr, ...).
’steprec’: Recommended step width for the parameter values in a series of tests.
’stepmin’: Minimum step width of the parameter values in a series of tests.
’valuenumber’: Expression describing the number of parameters as such or in relation to other parameters.
’assertion’: Expression describing the parameter values as such or in relation to other parameters.

The online-texts will be taken from the files english.hlp, english.sta, english.key, english.num and english.idx
which will be searched by HALCON in the currently used directory or the directory ’help_dir’ (see also
get_system and set_system).
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; (Htuple .) const char *
Name of the procedure on whose parameter more information is needed.
Default Value : "get_param_info"
. ParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Name of the parameter on which more information is needed.
Default Value : "Slot"
. Slot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Desired information.
Default Value : "description"
List of values : Slot ∈ {"description", "type_list", "default_type", "sem_type", "default_value", "values",
"value_list", "valuemin", "valuemax", "valuefunction", "valuenumber", "assertion", "steprec", "stepmin",
"mixed_type", "multivalue", "multichannel"}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Information (empty in case there is no information available).
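Example

A minimal sketch in simple mode using the default values listed above; MAX_STRING is assumed to be an application-defined buffer size.

char description[MAX_STRING];

get_param_info("get_param_info","Slot","description",description);
printf("%s\n",description);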
Result
The operator get_param_info returns the value H_MSG_TRUE if the parameters are correct and the helpfiles
are available. Otherwise an exception handling is raised.
Parallelization Information
get_param_info is processed completely exclusively without parallelization.
Possible Predecessors
get_keywords, search_operator
Alternatives
get_param_names, get_param_num, get_param_types
See also
query_param_info, get_operator_info, get_operator_name
Module
Foundation

T_get_param_names ( const Htuple ProcName, Htuple *InpObjPar,


Htuple *OutpObjPar, Htuple *InpCtrlPar, Htuple *OutpCtrlPar )

Get the names of the parameters of a HALCON-procedure.


For the HALCON-procedure indicated in ProcName the operator get_param_names returns the names of
the input objects, the output objects, the input control parameters, and the output control parameters.
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; Htuple . const char *
Name of the procedure.
Default Value : "get_param_names"
. InpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of the input objects.


. OutpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *


Names of the output objects.
. InpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of the input control parameters.
. OutpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of the output control parameters.
Result
The operator get_param_names returns the value H_MSG_TRUE if the name of the procedure exists and the
helpfiles are available. Otherwise an exception handling is raised.
Parallelization Information
get_param_names is reentrant and processed without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, get_operator_info
Possible Successors
get_param_num, get_param_types
Alternatives
get_operator_info, get_param_info
See also
get_param_num, get_param_types, get_operator_name
Module
Foundation

get_param_num ( const char *ProcName, char *CName, Hlong *InpObjPar,


Hlong *OutpObjPar, Hlong *InpCtrlPar, Hlong *OutpCtrlPar, char *Type )

T_get_param_num ( const Htuple ProcName, Htuple *CName,


Htuple *InpObjPar, Htuple *OutpObjPar, Htuple *InpCtrlPar,
Htuple *OutpCtrlPar, Htuple *Type )

Get number of the different parameter classes of a HALCON-procedure.


The operator get_param_num returns the number of the input and output object parameters, as well as the input
and output control parameters for the indicated HALCON-procedure. Further, you will receive the name of the
C-function (CName) called by the procedure. The output parameter Type indicates whether the procedure is a
system procedure or a user procedure.
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; const char *
Name of the procedure.
Default Value : "get_param_num"
. CName (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Name of the called C-function.
. InpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of the input object parameters.
. OutpObjPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of the output object parameters.
. InpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of the input control parameters.
. OutpCtrlPar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of the output control parameters.
. Type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
System procedure or user procedure.
Suggested values : Type ∈ {"system", "user"}
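Example

A minimal sketch in simple mode that prints the parameter counts of an operator; MAX_STRING is assumed to be an application-defined buffer size.

char cname[MAX_STRING], type[MAX_STRING];
Hlong inp_obj, outp_obj, inp_ctrl, outp_ctrl;

get_param_num("get_param_num",cname,&inp_obj,&outp_obj,
              &inp_ctrl,&outp_ctrl,type);
printf("%s: %ld/%ld object, %ld/%ld control parameters (%s)\n",
       cname,(long)inp_obj,(long)outp_obj,(long)inp_ctrl,(long)outp_ctrl,type);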
Result
The operator get_param_num returns the value H_MSG_TRUE if the name of the procedure exists. Otherwise
an exception handling is raised.


Parallelization Information
get_param_num is reentrant and processed without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, get_operator_info
Possible Successors
get_param_types
Alternatives
get_operator_info, get_param_info
See also
get_param_names, get_param_types, get_operator_name
Module
Foundation

T_get_param_types ( const Htuple ProcName, Htuple *InpCtrlParType,


Htuple *OutpCtrlParType )

Get default data type for the control parameters of a HALCON-procedure.


The operator get_param_types returns the default data type for each input and output control parameter.
The default type of a parameter is the type used in ’simple mode’ in HALCON/C. This is relevant for parameters
which allow more than one type, as for example write_string. The types of the input parameters are
returned in the variable InpCtrlParType, whereas the types of the output parameters are returned in the variable
OutpCtrlParType. The following types are possible:

’integer’: an integer.
’integer tuple’: an integer or a tuple of integers.
’real’: a floating point number.
’real tuple’: a floating point number or a tuple of floating point numbers.
’string’: a string.
’string tuple’: a string or a tuple of strings.
’no_default’: individual value of which the type cannot be determined.
’no_default tuple’: individual value or tuple of values of which the type cannot be determined.
’default’: individual value of unknown type, whereby the system assumes it to be an ’integer’.

Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; Htuple . const char *
Name of the procedure.
Default Value : "get_param_types"
. InpCtrlParType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Default type of the input control parameters.
. OutpCtrlParType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Default type of the output control parameters.
Result
The operator get_param_types returns the value H_MSG_TRUE if the indicated procedure name exists.
Otherwise an exception handling is raised.
Parallelization Information
get_param_types is reentrant and processed without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, get_operator_info
Alternatives
get_param_info
See also
get_param_names, get_param_num, get_operator_info, get_operator_name


Module
Foundation

T_query_operator_info ( Htuple *Slots )

Query slots concerning information with relation to the operator get_operator_info.


The operator query_operator_info returns the names of those online texts (Slots) which are available
online for each procedure. The information itself can be called up using
get_operator_info(<ProcName>,<Slot>,<Information>).
Parameter
. Slots (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Slotnames of the operator get_operator_info.
Result
The operator query_operator_info always returns the value H_MSG_TRUE.
Parallelization Information
query_operator_info is local and processed completely exclusively without parallelization.
Possible Successors
get_operator_info
See also
get_operator_info
Module
Foundation

T_query_param_info ( Htuple *Slots )

Query slots of the online-information concerning the operator get_param_info.


The operator query_param_info returns the names of those pieces of information (Slots) which are avail-
able online for each parameter (online texts). The online texts themselves can be called up using
get_param_info(<ProcedureName>,<ParameterName>,<Slot>,<Information>).
Parameter
. Slots (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Slotnames for the operator get_param_info.
Result
query_param_info always returns the value H_MSG_TRUE.
Parallelization Information
query_param_info is reentrant and processed without parallelization.
Possible Successors
get_param_info
See also
get_param_info
Module
Foundation

T_search_operator ( const Htuple Keyword, Htuple *ProcNames )

Search names of all procedures assigned to one keyword.


The operator search_operator returns the names of all procedures whose online-texts include the key-
word Keyword (see also get_operator_info). All available keywords are called by using the operator


get_keywords(’’, <keywords>). The online texts are taken from the files english.hlp, english.sta, english.key,
english.num and english.idx, which are searched by HALCON in the currently used directory or the
directory ’help_dir’ (see also get_system and set_system).
Parameter

. Keyword (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Keyword for which corresponding procedures are searched.
Default Value : "Information"
. ProcNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Procedures whose slot ’keyword’ contains the keyword.
Result
The operator search_operator returns the value H_MSG_TRUE if the parameters are correct and the help-
files are available. Otherwise an exception handling is raised.
Parallelization Information
search_operator is processed completely exclusively without parallelization.
Possible Predecessors
get_keywords
See also
get_keywords, get_operator_info, get_param_info
Module
Foundation

14.4 Operating-System

count_seconds ( double *Seconds )


T_count_seconds ( Htuple *Seconds )

Elapsed processing time since the last call of count_seconds.


The operator count_seconds helps to measure time. Each operator call returns a time value. The difference
of the values of two successive calls provides the time interval in seconds. The mode of measuring time can be set
with set_system(’clock_mode’,...).
Attention
The time measurement is not exact and depends on the load of the computer.
Parameter

. Seconds (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *


Processing time since the program start.
Example

count_seconds(&Start);
/* program segment to be measured */
count_seconds(&End);
printf("RunTime = %g\n",End-Start);

Result
The operator count_seconds always returns the value H_MSG_TRUE.
Parallelization Information
count_seconds is reentrant and processed without parallelization.
See also
set_system
Module
Foundation


system_call ( const char *Command )


T_system_call ( const Htuple Command )

Executes a system command.


The operator system_call executes the system command specified by the string pointed to by Command
(C function ’system’). If the string is empty, an interactive shell will be started (’csh -i’).
Parameter

. Command (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *


Command to be called by the system.
Default Value : "ls"
Result
If the entered operator can be executed by the system, the operator system_call returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
system_call is reentrant and processed without parallelization.
Possible Predecessors
count_seconds
See also
wait_seconds, count_seconds
Module
Foundation

wait_seconds ( double Seconds )


T_wait_seconds ( const Htuple Seconds )

Delaying the execution of the program.


The operator wait_seconds delays the execution by the number of seconds indicated in Seconds.
Parameter

. Seconds (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Number of seconds by which the execution of the program will be delayed.
Default Value : 10
Restriction : Seconds ≥ 0
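Example

A minimal sketch that combines wait_seconds with count_seconds to verify the delay; the delay of 2.5 seconds is an arbitrary example value.

double start, stop;

count_seconds(&start);
wait_seconds(2.5);
count_seconds(&stop);
printf("waited approx. %g seconds\n",stop-start);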
Result
The operator wait_seconds always returns the value H_MSG_TRUE.
Parallelization Information
wait_seconds is reentrant and processed without parallelization.
Possible Successors
system_call
See also
system_call, count_seconds
Module
Foundation


14.5 Parallelization
check_par_hw_potential ( Hlong AllInpPars )
T_check_par_hw_potential ( const Htuple AllInpPars )

Check hardware regarding its potential for parallel processing.


check_par_hw_potential is necessary for an efficient automatic parallelization, which is used by HALCON
to better utilize multiprocessor hardware in order to speed up the processing of operators. As the parallelization
of operators is done automatically, there is no need for the user to explicitly prepare or change programs for their
parallelization. Thus, all HALCON-based programs can be used unchanged on multiprocessor hardware and nev-
ertheless utilize the potential of parallel hardware. check_par_hw_potential checks a given hardware with
respect to a parallel processing of HALCON operators. At this, it examines every operator, which can be sped up
in principle by an automatic parallelization. Each examined operator is processed several times - both sequentially
and in parallel - with a changing set of input parameter values/images. The latter helps to evaluate dependencies
between an operator’s input parameter characteristics (e.g. the size of an input image) and the efficiency of its
parallel processing. At this, AllInpPars is used in the following way: In the normal case, i.e. if AllInpPars
contains the default value 0 (“false”), only those input parameters are examined which are supposed to show influ-
ence on the processing time. Other parameters are not examined so that the whole process is sped up. However, in
some rare cases, the internal implementation of a HALCON operator might change from one HALCON release to
another. Then, a parameter which did not show any direct influence on the processing time in former releases, may
now show such an influence. In this case it is necessary to set AllInpPars to 1 (“true”) in order to force the
examination of all input parameters. If this happens, the HALCON release notes will most likely contain an appro-
priate note about this fact. Overall, check_par_hw_potential performs several test loops and collects a lot
of hardware-specific information, which enables HALCON to optimize the automatic parallelization for a given
hardware. The hardware information is stored so that it can be used again in future HALCON sessions. Thus, it is
sufficient, to start check_par_hw_potential once on each multiprocessor machine that is used for parallel
processing. Of course, it should be started again, if the hardware of the machine changes, for example, by installing
a new CPU, or if the operating system of the machine changes, or if the machine gets a new host name. The latter
is necessary, because HALCON identifies the machine-specific parallelization information by the machine’s host
name. If the same multiprocessor machine is used with different operating systems, such as Windows and Linux, it
is necessary to start check_par_hw_potential once for each operating system in order to correctly measure
the rather strong influence of the operating system on the potential of exploiting multiprocessor hardware. Under
Windows, HALCON stores the parallelization knowledge, which belongs to a specific machine, in the machine’s
registry. At this, it uses a machine-specific registry key, which can be used by different users simultaneously. In
the normal case, this key can be written or changed by any user under Windows NT. However, under Windows
2000 the key may only be changed by users with administrator privileges or by users which at least belong to the
“power user” group. For all other users check_par_hw_potential shows no effect (but does not return an
error). Under Linux/UNIX the parallelization information is stored in a file in the HALCON installation directory
($HALCONROOT). Again this means that check_par_hw_potential must be called by users with the ap-
propriate privileges, here by users which have write access to the HALCON directory. If HALCON is used within
a network under Linux/UNIX, the denoted file contains the information about every computer in the network for
which the hardware check has been successfully completed.
Attention
During its test loops check_par_hw_potential has to start every examined operator several times. Thus,
the processing of check_par_hw_potential can take rather a long time. check_par_hw_potential
is based on the automatic parallelization of operators, which is exclusively supported by Parallel HALCON. Thus,
check_par_hw_potential always returns an appropriate error if it is used with a non-parallel HALCON ver-
sion. check_par_hw_potential must be called by users with the appropriate privileges for storing the
parallelization information permanently (see the operator’s description above for more details about this subject).
Parameter

. AllInpPars (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Check every input parameter?
Default Value : 0
List of values : AllInpPars ∈ {0, 1}
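Example

A minimal sketch of a one-time hardware calibration; the file name "par_knowledge.txt" is only a placeholder for an application-specific backup file.

/* examine the hardware once (only parameters that influence the runtime) */
check_par_hw_potential(0);
/* optionally keep a copy of the collected knowledge in an ASCII file */
store_par_knowledge("par_knowledge.txt");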
Result
check_par_hw_potential returns H_MSG_TRUE if all parameters are correct.


Parallelization Information
check_par_hw_potential is local and processed completely exclusively without parallelization.
Possible Successors
store_par_knowledge
See also
store_par_knowledge, load_par_knowledge
Module
Foundation

load_par_knowledge ( const char *FileName )


T_load_par_knowledge ( const Htuple FileName )

Load knowledge about automatic parallelization from file.


load_par_knowledge supports the automatic parallelization of HALCON operators, which is used to bet-
ter utilize multiprocessor hardware in order to speed up the processing of operators. To parallelize the pro-
cessing of operators automatically HALCON needs some specific knowledge about the used hardware. This
hardware-specific knowledge can be obtained by using the operator check_par_hw_potential. In
the normal case, HALCON stores this knowledge in a specific file in the HALCON installation directory
(Linux/UNIX) or within the “registry” (Windows). This enables HALCON to use the knowledge again later
on. With load_par_knowledge it is possible to load this knowledge explicitly from an ASCII file. Here,
FileName denotes the name of this file (incl. path and file extension). The file must conform to a spe-
cific syntax and must have been stored beforehand by using store_par_knowledge. While reading the file
load_par_knowledge checks whether its content was written for the currently used computer and whether
the contained parallelization information regards the currently used HALCON version (and revision). If this is the
case, load_par_knowledge adopts the information so that it will also be used with further HALCON ses-
sions. Otherwise, the information is ignored and load_par_knowledge returns an appropriate error message.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of parallelization knowledge file.
Default Value : ""
Result
load_par_knowledge returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
load_par_knowledge is local and processed completely exclusively without parallelization.
Possible Predecessors
store_par_knowledge
See also
store_par_knowledge, check_par_hw_potential
Module
Foundation

store_par_knowledge ( const char *FileName )


T_store_par_knowledge ( const Htuple FileName )

Store knowledge about automatic parallelization in file.


store_par_knowledge supports the automatic parallelization of HALCON operators, which is used to better
utilize multiprocessor hardware in order to speed up the processing of operators. To parallelize the processing
of operators automatically HALCON needs some specific knowledge about the used hardware. This hardware-
specific knowledge can be obtained by calling the operator check_par_hw_potential. There, HALCON
stores the knowledge in a specific file in the HALCON installation directory (Linux/UNIX) or within the “registry”


(Windows). This enables HALCON to use the knowledge again later on. With store_par_knowledge it is
possible to store this knowledge explicitly as an ASCII file. Here, FileName denotes the name of this file (incl.
path and file extension). The stored knowledge can be read again later on by using load_par_knowledge.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of parallelization knowledge file.
Default Value : ""
Result
store_par_knowledge returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
store_par_knowledge is local and processed completely exclusively without parallelization.
Possible Predecessors
check_par_hw_potential
Possible Successors
load_par_knowledge
See also
load_par_knowledge, check_par_hw_potential
Module
Foundation

14.6 Parameters
get_system ( const char *Query, Hlong *Information )
T_get_system ( const Htuple Query, Htuple *Information )

Information concerning the currently used HALCON system parameter.


The operator get_system returns information concerning the currently activated HALCON system parameters.
Some of these parameters can be changed dynamically by using the operator set_system. They are marked by
a + in the list below. By passing the string ’?’ as the parameter Query, the names of all system parameters are
provided with Information.
The following system parameters can be queried:

Versions
’parallel_halcon’: The currently used variant of HALCON: Parallel HALCON (’true’) or Standard HAL-
CON (’false’)
’version’: HALCON version number, e.g.: 6.0
’last_update’: Date of creation of the HALCON library
’revision’: Revision number of the HALCON library, e.g.: 1
Upper Limits
’max_contour_length’: Maximum number of contour respectively polygon control points of a region.
’max_images’: Maximum total of images.
’max_channels’: Maximum number of channels of an image.
’max_obj_per_par’: Maximum number of image objects which may be passed per parameter during one
operator call.
’max_inp_obj_par’: Maximum number of input parameters.
’max_outp_obj_par’: Maximum number of output parameters.
’max_inp_ctrl_par’: Maximum number of input control parameters.
’max_outp_ctrl_par’: Maximum number of output control parameters.
’max_window’: Maximum number of windows.
’max_window_types’: Maximum number of window systems.
’max_proc’: Maximum number of HALCON procedures (system defined + user defined).


Graphic
+’flush_graphic’: Determines whether the flush operation is called after each visualization operation
in HALCON. Unix operating systems flush the display buffer automatically, so this parameter has no
effect on these operating systems.
+’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values.
If the value is -1, the gray values will be automatically scaled (default).
+’backing_store’: Storage of the window contents in case of overlaps.
+’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graph-
ics window is displayed.
+’window_name’: (no description available)
+’default_font’: Name of the font to set at opening the window.
+’update_lut’: (no description available)
+’x_package’: Number of bytes which are sent to the X server during each transfer of data.
+’num_gray_4’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 4 bitplanes (16 colors).
+’num_gray_6’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 6 bitplanes (64 colors).
+’num_gray_8’: Number of colors reserved under X Windows concerning the output of graylevels (
disp_channel) on a machine with 8 bitplanes (256 colors).
+’num_gray_percentage’: HALCON reserves a certain amount of the available colors under X Windows
for the representation of graylevels ( disp_image). This shall interfere with other X applications
as little as possible. However, if HALCON does not succeed in reserving a minimum percentage of
’num_gray_percentage’ of the necessary colors on the X server, a certain amount of the lookup-table
will be claimed for the HALCON graylevels regardless of the consequences for other applications.
This may result in undesired shifts of color when switching between HALCON windows and windows
of other applications, or if (outside HALCON) a window-dump is generated. The number of the real
graylevels to be reserved depends on the number of available bitplanes on the output machine (see also
’num_gray_*’). Naturally no colors will be reserved on monochrome machines - the graylevels will
instead be dithered when displayed. If graylevel displays are used, only different shades of gray will
be applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with 8 bit
pseudo-color displays. For machines with displays with 16 bits or more (true color machines), no colors
are reserved for the display of gray levels in this case.
Note: Before the first window on a machine with x bitplanes is opened, num_gray_x indicates the
number of colors which have to be reserved for the display of graylevels, afterwards, however, it will
indicate the number of colors which actually have been reserved.
+’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines
how many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit pseudo-
color display under X windows.
+’num_graphic_2’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 2 bitplanes (4 colors).
+’num_graphic_4’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 4 bitplanes (16 colors).
+’num_graphic_6’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 6 bitplanes (64 colors).
+’num_graphic_8’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 8 bitplanes (256 colors).
Image Processing
+’neighborhood’: Using the 4 or 8 neighborhood.
+’init_new_image’: Initialization of images before applying grayvalue transformations.
+’no_object_result’: Behavior for empty object lists.
+’empty_region_result’: Reaction of procedures to input objects with empty regions for which the
operation is actually not meaningful (e.g. certain region features, segmentation, etc.). Possible return
values:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised

+’store_empty_region’: Storing of objects with empty regions.


+’clip_region’: Clipping of output regions so that they fit the global image size.
+’int_zooming’: Determines if the zooming of images is done with integer arithmetic or with floating point
arithmetic.
+’pregenerate_shape_models’: This parameter determines whether the shape models created with
create_shape_model or create_scaled_shape_model are pregenerated completely or not,
if this is not explicitly specified in create_shape_model or create_scaled_shape_model.
+’border_shape_models’: This parameter determines whether the shape models to be found
with find_shape_model, find_shape_models, find_scaled_shape_model, or
find_scaled_shape_models may lie partially outside the image (i.e., whether they may cross
the image border).
+’image_dpi’: This parameter determines the DPI resolution that is stored in image files written with
write_image in formats that support the storing of the DPI resolution.
’width’: Global maximum image width - in Standard-HALCON this value contains the maximum image
width of all HALCON image objects which are currently stored in memory. In Parallel HALCON
this value contains the maximum image width of all HALCON image objects which are or were in
memory since the start of the current HALCON session (this also includes objects which may be deleted
meanwhile).
’height’: Global maximum image height - in Standard-HALCON this value contains the maximum image
height of all HALCON image objects which are currently stored in memory. In Parallel HALCON
this value contains the maximum image height of all HALCON image objects which are or were in
memory since the start of the current HALCON session (this also includes objects which may be deleted
meanwhile).
’obj_images’: Current number of grayvalue components per image object.
+’current_runlength_number’: Currently used number of chords which can be used for the encoding of
regions.
Parallelization
+’parallelize_operators’: Determines whether Parallel HALCON uses an automatic parallelization to speed
up the processing of operators on multiprocessor machines.
+’reentrant’: Denotes whether Parallel HALCON currently supports reentrancy (default case), or whether
this feature has been switched off. Reentrancy is necessary for the automatic parallelization of Parallel
HALCON and for calling and processing multiple HALCON operators in parallel within multithreaded
applications.
’processor_num’: Returns the number of processors which Parallel HALCON has found on the hardware it
is running on. This also indicates the number of processors which is used by Parallel HALCON for the
automatic parallelization of operators.
+’thread_num’: Denotes the number of threads that Parallel HALCON uses for automatic parallelization.
The number includes the main thread and cannot exceed the number of processors for efficiency reasons.
+’thread_pool’: Denotes whether Parallel HALCON always creates new threads for automatic paralleliza-
tion (’false’) or uses an existing pool of threads (’true’). Using a pool is more efficient for automatic
parallelization.
File
+’flush_file’: Buffering of file output.
+’ocr_trainf_version’: This parameter returns the file format used for writing an OCR training file. The op-
erators write_ocr_trainf, write_ocr_trainf_image and concat_ocr_trainf write
training data in ASCII format for version number 1 or in binary format for version number 2 and 3.
Version number 3 stores images of type byte and uint2. Depending on the file version, the OCR training
files can be read by the following HALCON releases:
File Version HALCON Release
1 All
2 7.0.2 and higher
3 7.1 and higher


+’filename_encoding’: This parameter returns how file and directory names are interpreted that are passed
as string parameters to and from HALCON. With the value ’locale’ these names are used unaltered,
while with the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case,
HALCON tries to translate input parameters from UTF-8 to the locale encoding according to the current
system settings, and output parameters from locale to UTF-8 encoding.
Directories
+’image_dir’: Path which will be searched for image files after the default directory (see also:
read_image).
+’lut_dir’: Path for the default directory for color tables (see also: set_lut).
+’help_dir’: Path for the default help directory for the online help files:
{german,english}.{hlp,sta,idx,num,key}.
Other
+’do_low_error’: Flag, if low level errors should be printed.
’hostids’: The hostids of the computer that can be used for licensing HALCON.
’num_proc’: Total number of the available HALCON procedures (’num_sys_proc’ + ’num_user_proc’).
’num_sys_proc’: Number of the system procedures (supported procedures).
’num_user_proc’: Number of the user defined procedures (see also ’Extension Packages’ manual).
’byte_order’: Byte order of the processor (’msb_first’ or ’lsb_first’).
’operating_system’: Name of the operating system of the computer on which the HALCON process is being
executed.
’operating_system_version’: Version number of the operating system of the computer on which the HAL-
CON process is being executed.
’halcon_arch’: Name of the HALCON architecture of the running HALCON process.
+’clock_mode’: Method used for measuring the time in count_seconds (’processor_time’,
’elapsed_time’, or ’performance_counter’).
+’max_connection’: Maximum number of regions returned by connection.
+’extern_alloc_funct’: Pointer to external function for memory allocation of result images.
’extern_free_funct’: Pointer to external function for memory deallocation of result images.
+’image_cache_capacity’: Upper limit in bytes of the image memory cache.
This parameter is only available in Standard HALCON but ignored in Parallel HALCON.
+’global_mem_cache’: Cache mode of global memory, i.e., memory that is visible beyond an operator. It
specifies whether unused global memory should be cached (’shared’) or freed (’idle’). Additionally,
Parallel HALCON offers the option to cache global memory for each thread separately (’exclusive’).
This mode can accelerate processing at the cost of memory consumption. However, Standard HALCON
treats the value ’exclusive’ like the value ’shared’.
+’temporary_mem_cache’: Flag for unused temporary memory of an operator. It specifies whether mem-
ory that is only used within an operator should be cached (’true’, default) or freed (’false’).
+’alloctmp_max_blocksize’: Maximum size of memory blocks to be allocated within temporary memory
management. (No effect, if ’alloctmp_max_blocksize’ == -1 or ’temporary_mem_cache’ == ’false’)
’temp_mem’: Amount of temporary memory used by the last operator in byte. The return value is only
defined if set_check(’memory’) was called before the operator to be measured. Additionally, in
Parallel HALCON the memory value is not specified when calling operators not sequentially but parallel
in multiple threads.
’mmx_supported’: Flag, if the processor supports MMX operations (’true’) or not (’false’).
+’mmx_enable’: Flag, if MMX operations are used to accelerate selected image processing operators
(’true’) or not (’false’).
+’language’: Language used for error messages (’english’ or ’german’).

Parameter
. Query (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Desired system parameter.
Default Value : "width"
List of values : Query ∈ {"?", "alloctmp_max_blocksize", "backing_store", "border_shape_models",
"byte_order", "clip_region", "clock_mode", "current_runlength_number", "default_font", "do_low_error",

HALCON 8.0.2
1000 CHAPTER 14. SYSTEM

"empty_region_result", "extern_alloc_funct", "extern_free_funct", "filename_encoding", "flush_file",


"flush_graphic", "global_mem_cache", "halcon_arch", "height", "help_dir", "hostids", "icon_name",
"image_cache_capacity", "image_dir", "image_dpi", "init_new_image", "int2_bits", "int_zooming",
"language", "last_update", "lut_dir", "max_channels", "max_connection", "max_images",
"max_inp_ctrl_par", "max_inp_obj_par", "max_obj_per_par", "max_outp_ctrl_par", "max_outp_obj_par",
"max_proc", "max_window", "max_window_types", "mmx_enable", "mmx_supported", "neighborhood",
"no_object_result", "num_graphic_2", "num_graphic_4", "num_graphic_6", "num_graphic_8",
"num_graphic_percentage", "num_gray_4", "num_gray_6", "num_gray_8", "num_gray_percentage",
"num_proc", "num_sys_proc", "num_user_proc", "obj_images", "ocr_trainf_version", "operating_system",
"operating_system_version", "parallel_halcon", "parallelize_operators", "pregenerate_shape_models",
"processor_num", "reentrant", "revision", "store_empty_region", "temp_mem", "temporary_mem_cache",
"thread_num", "thread_pool", "update_lut", "version", "width", "window_name", "x_package"}
. Information (output_control) . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong * / double * / char *
Current value of the system parameter.
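Example (Syntax: C)

A minimal sketch of querying integer-valued system parameters; the header name "HalconC.h" and the
Herror return type are assumptions, H_MSG_TRUE is the documented success value:

#include <stdio.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

void print_limits (void)
{
  Hlong  width, num_proc;
  Herror err;

  /* query the global maximum image width (an integer-valued parameter) */
  err = get_system ("width", &width);
  if (err == H_MSG_TRUE)
    printf ("maximum image width: %ld\n", (long) width);

  /* total number of available HALCON procedures */
  if (get_system ("num_proc", &num_proc) == H_MSG_TRUE)
    printf ("number of procedures: %ld\n", (long) num_proc);
}
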
Result
The operator get_system returns the value H_MSG_TRUE if the parameters are correct. Otherwise an excep-
tion is raised.
Parallelization Information
get_system is local and processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_system
See also
set_system
Module
Foundation

set_system ( const char *SystemParameter, const char *Value )


T_set_system ( const Htuple SystemParameter, const Htuple Value )

Setting of HALCON system parameters.


The operator set_system allows changing different system parameters that affect the runtime behavior of HALCON.
Available system parameters:

’neighborhood’: This parameter is used with all procedures which examine neighborhood relations:
connection, get_region_contour, get_region_chain, get_region_polygon,
get_region_thickness, boundary, paint_region, disp_region, fill_up,
contlength, shape_histo_all.
Value: 4 or 8
default: 8
’default_font’: Whenever a window is opened, a font will be set for the text output, whereby the ’default_font’
will be used. If the preset font cannot be found, another fontname can be set before opening the window.
Value: Filename of the fonts
default: fixed
’update_lut’ Determines whether the HALCON color tables are adapted according to their environment or not.
Value: ’true’ or ’false’
default: ’false’
’image_dir’: Image files (e.g. read_image and read_sequence) will be looked for in the currently used
directory and in ’image_dir’ (if no absolute paths are indicated). More than one directory name can be indi-
cated (search paths), separated by semicolons (Windows) or colons (Unix). The path can also be determined
using the environment variable HALCONIMAGES.
Value: Name of the filepath
default: ’$HALCONROOT/images’ or ’%HALCONROOT%/images’


’lut_dir’: Color tables ( set_lut) which are realized as an ASCII file will be looked for in the currently used
directory and in ’lut_dir’ (if no absolute paths are indicated). If HALCONROOT is set, HALCON will search
the color tables in the sub-directory ’lut’.
Value: Name of the filepath
default: ’$HALCONROOT/lut’ or ’%HALCONROOT%/lut’
’help_dir’: The online text files german or english.hlp, .sta, .key, .num and .idx will be looked for in the
currently used directory or in ’help_dir’. This system parameter is necessary for instance when using the
operators get_operator_info and get_param_info. This parameter can also be set by the environment
variable HALCONROOT before initializing HALCON. In this case the variable must indicate the directory
above the help directories (that is the HALCON home directory): e.g.: ’/usr/local/halcon’
Value: Name of the filepath
default: ’$HALCONROOT/help’ or ’%HALCONROOT%/help’
’init_new_image’: Determines whether new images shall be set to 0 before using filters. This is not necessary if
always the whole image is filtered or if the data of unfiltered image areas are unimportant.
Value: ’true’ or ’false’
default: ’true’
’no_object_result’: Determines how operations processing iconic objects shall react if the object tuple is empty
(= no objects). Available values for Value:
’true’: the error will be ignored
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’empty_region_result’: Controls the reaction of procedures concerning input objects with empty regions which
actually are not useful for such objects (e.g. certain region features, segmentation, etc.). Available values for
Value:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’store_empty_region’: Quite a number of operations will lead to the creation of objects with an empty region (=
no image points) (e.g. intersection, threshold, etc.). This parameter determines whether the object
with an empty region will be returned as a result (’true’) or whether it will be ignored (’false’), i.e., no result
will be returned.
Value: ’true’ or ’false’
default: ’true’
’pregenerate_shape_models’: This parameter determines whether the shape models created with
create_shape_model or create_scaled_shape_model are pregenerated completely or
not, if this is not explicitly specified in create_shape_model or create_scaled_shape_model.
This parameter mainly serves to achieve a switch between the two modes with minimal code changes.
Normally, only one line needs to be inserted or changed.
Value: ’true’ or ’false’
default: ’false’
’border_shape_models’: This parameter determines whether the shape models to be found with
find_shape_model, find_shape_models, find_scaled_shape_model, or
find_scaled_shape_models may lie partially outside the image (i.e., whether they may cross
the image border).
Value: ’true’ or ’false’
default: ’false’
’image_dpi’: This parameter determines the DPI resolution that is stored in image files written with
write_image in formats that support the storing of the DPI resolution.
default: 300
’backing_store’: Determines whether the window content will be refreshed in case of overlapping of the win-
dows. Some implementations of X Windows are faulty; in order to avoid these errors, the storing of contents
can be deactivated. It may be recommendable in some cases to deactivate the security mechanism, if e.g.
performance / memory is what matters.
Value: true or false
default: true
’flush_graphic’: After each HALCON operation which creates a graphic output, a flush operation will be ex-
ecuted in order to display the data immediately on screen. This is not necessary with all programs (e.g. if
everything is done with the help of the mouse). In this case ’flush_graphic’ can be set to ’false’ to improve the
runtime. Unix window managers flush the display buffer automatically, so this parameter has no effect on
those operating systems.
Value: ’true’ or ’false’
default: ’true’
’flush_file’: This parameter determines whether the output into a file (also to the terminal) shall be buffered or
not. If the output is to be buffered, in general the data will be displayed on the terminal only after entering
the operator fnew_line.
Value: ’true’ or ’false’
default: ’true’
’ocr_trainf_version’ This parameter determines the format that is used for writing an OCR training file. The
operators write_ocr_trainf, write_ocr_trainf_image and concat_ocr_trainf write
training data in ASCII format for version number 1 or in binary format for version number 2 and 3. Version
number 3 stores images of type byte and uint2. The binary version is faster in reading and writing data and
stores training files more compactly. The ASCII format is compatible with older HALCON releases. Depending
on the file version, the OCR training files can be read by the following HALCON releases:
File Version HALCON Release
1 All
2 7.0.2 and higher
3 7.1 and higher
Value: 1, 2, 3
default: 3
’filename_encoding’: This parameter determines how file and directory names are interpreted that are passed as
string parameters to and from HALCON. With the value ’locale’ these names are used unaltered, while with
the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case, HALCON tries to
translate input parameters from UTF-8 to the locale encoding according to the current system settings, and
output parameters from locale to UTF-8 encoding.
Value: ’locale’ or ’utf8’
default: ’locale’
’x_package’: The output of image data via the network may cause errors owing to the heavy load on the computer
or on the network. In order to avoid this, the data are transmitted in small packages. If the computer is used
locally, these units can be enlarged at will. This can lead to a notably improved output performance.
Value: package size (in bytes)
default: 20480
’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values. If the
value is -1 the gray values will be automatically scaled (default).
Value: -1 or 9..16
default: -1
’num_gray_4’: Number of colors to be reserved under X Windows to allow the output of graylevels disp_channel
on a machine with 4 bitplanes (16 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 12
default: 8
’num_gray_6’: Number of colors to be reserved under X Windows to allow the output of graylevels disp_channel
on a machine with 6 bitplanes (64 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 62
default: 50


’num_gray_8’: Number of colors to be reserved under X Windows to allow the output of graylevels disp_channel
on a machine with 8 bitplanes (256 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 254
default: 140
’num_gray_percentage’: Under X Windows HALCON reserves a part of the available colors for the represen-
tation of gray values ( disp_channel). This shall interfere with other X applications as little as possible.
However, if HALCON does not succeed in reserving a minimum percentage of ’num_gray_percentage’ of
the necessary colors on the X server, a certain amount of the lookup table will be claimed for the HALCON
graylevels regardless of the consequences. This may result in undesired shifts of color when switching be-
tween HALCON windows and windows of other applications, or (outside HALCON) if a window-dump is
generated. The number of the real graylevels to be reserved depends on the number of available bitplanes on
the output machine (see also ’num_gray_*’). Naturally no colors will be reserved on monochrome machines -
the graylevels will instead be dithered when displayed. If graylevel-displays are used, only different shades
of gray will be applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with
8 bit pseudo-color displays. For machines with displays with 16 bits or more (true color machines), no colors
are reserved for the display of gray levels in this case.
Note: This value may only be changed before the first window has been opened on the machine. Before
the first window is opened on a machine with x bitplanes, num_gray_x indicates the number of colors which
have to be reserved for the display of graylevels; afterwards, however, it will indicate the number of colors
which actually have been reserved.
Value: 0 - 100
default: 30
’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines how
many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit pseudo-color display
under X windows.
default: 60
’int_zooming’: Determines if the zooming of images is done with integer arithmetic or with floating point arith-
metic. default: ’true’
’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graphics
window is displayed. default: ’default’
’num_graphic_2’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 2 bitplanes (4 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 2
default: 2
’num_graphic_4’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 4 bitplanes (16 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 14
default: 5
’num_graphic_6’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 6 bitplanes (64 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 62
default: 10
’num_graphic_8’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 8 bitplanes (256 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 64
default: 20
’graphic_colors’ HALCON reserves the first num_graphic_x colors from this list of color names as graphic
colors. As a default HALCON uses the same list which is also returned by query_all_colors.
However, the list can be changed individually: hereby a tuple of color names is passed as value. It is
recommended that such a tuple always includes the colors ’black’ and ’white’, and optionally also ’red’,
’green’ and ’blue’. If ’default’ is set as Value, HALCON returns to the initial setting. Note: On graylevel
machines not the first x colors but the first x shades of gray from the list will be reserved.
Attention: This value may only be changed before the first window has been opened on the machine.
Value: Tuple of X Windows color names
default: see also query_all_colors
’current_runlength_number’: Regions will be stored internally in a certain runlength code. This parameter can
determine the maximum number of chords which may be used for representing a region. Please note that
some procedures raise the number on their own if necessary.
The value can be enlarged as well as reduced.
Value: maximum number of chords
default: 50000
’clip_region’: Determines whether the regions of iconic objects of the HALCON database will be clipped to
the currently used image size or not. This is the case for example in procedures like gen_circle,
gen_rectangle1 or dilation1.
See also: reset_obj_db
Value: ’true’ or ’false’
default: ’true’
’do_low_error’ Determines whether HALCON should print low level errors or not.
Value: ’true’ or ’false’
default: ’false’
’reentrant’ Determines whether HALCON must be reentrant for being used within a parallel programming en-
vironment (e.g. a multithreaded application). This parameter is only of importance for Parallel HALCON,
which can process several operators concurrently. Thus, the parameter is ignored by the sequentially working
HALCON-Version. If it is set to ’true’, Parallel HALCON internally uses synchronization mechanisms to
protect shared data objects from concurrent accesses. Though this is inevitable with any effectively paral-
lel working application, it may cause undesired overhead, if used within an application which works purely
sequentially. The latter case can be signalled by setting ’reentrant’ to ’false’. This switches off all internal
synchronization mechanisms and thus reduces overhead. Of course, Parallel HALCON then is no longer
thread-safe, which causes another side-effect: Parallel HALCON will then no longer use the internal paral-
lelization of operators, because this needs reentrancy. Setting ’reentrant’ to ’true’ resets Parallel HALCON
to its default state, i.e. it is reentrant (and thread-safe) and it uses the automatic parallelization to speed up
the processing of operators on multiprocessor machines.
Value: ’true’ or ’false’
default: Parallel HALCON: ’true’, otherwise: ’false’
’parallelize_operators’ Determines whether Parallel HALCON uses an automatic parallelization to speed up the
processing of operators on multiprocessor machines. This feature can be switched off by setting ’paral-
lelize_operators’ to ’false’. Even then, Parallel HALCON will remain reentrant (and thread-safe), unless
the parameter ’reentrant’ is changed via set_system accordingly. Changing ’parallelize_operators’ can
be helpful, for example, if HALCON operators are called by a multithreaded application that also does the
scheduling and load-balancing of operators and data by itself. Then, it may be undesired that HALCON
performs additional parallelization steps, which may disturb the application’s scheduling and load-balancing
concepts. For a more detailed control of automatic parallelization single methods of data parallelization
can be switched. ’split_tuple’ enables the tuple parallelization method, ’split_channel’ the parallelization on
image channels, and ’split_domain’ the parallelization on the image domain. A preceding ’~’ disables the
respective method. The method strings can also be passed within a control tuple to switch on or off methods
of automatic data parallelization at once. E.g., [’split_tuple’,’split_channel’,’split_domain’] is equivalent to
’true’.
The parameter ’parallelize_operators’ is only supported by Parallel HALCON and thus ignored by the se-
quentially working HALCON-Version.
Value: ’true’, ’false’, ’split_tuple’, ’split_channel’, ’split_domain’, ’~split_tuple’, ’~split_channel’,
’~split_domain’
default: Parallel HALCON: ’true’, else: ’false’
’thread_num’ Sets the number of threads used by the automatic parallelization of Parallel HALCON. The number
includes the main thread and is restricted to the number of processors for efficiency reasons. Decreasing the
number of threads is helpful if processors are occupied by user worker threads besides the threads of the
automatic parallelization. With this, the number of processing threads can be adapted to the number of
processors for best efficiency. Standard HALCON ignores this parameter value.
Value: 1 <= Value <= processor_num
default: Parallel HALCON: processor_num, else: 1
’thread_pool’ Denotes whether Parallel HALCON always creates new threads for automatic parallelization
(’false’) or uses an existing pool of threads (’true’). Using a pool is more efficient for automatic parallelization.
When switching off automatic parallelization permanently, deactivating the pool can save resources
of the operating system. Standard HALCON ignores this parameter value.
Value: ’true’, ’false’
default: Parallel HALCON: ’true’, else: ’false’
’clock_mode’ Determines the mode of the measurement of time intervals with count_seconds. For
Value=’processor_time’, the time the running HALCON process occupies the cpu is measured. This kind
of measuring time is independent of the cpu load caused by other processes, but it features a lower reso-
lution on most systems and is therefore more inaccurate for smaller time intervals.
For Value=’elapsed_time’, the actual elapsed system time is measured. It includes the waiting time of the
current process as well as the cpu time of other processes. Therefore, to get a reliable measurement make
sure that no other process causes any cpu load.
Value=’performance_counter’ measures the actual system time by using a performance counter,
which results in a higher resolution. If the system does not support any performance counter,
Value=’processor_time’ is used.
Value: ’processor_time’, ’elapsed_time’, ’performance_counter’
default: ’performance_counter’
’max_connection’ Determines the maximum number of regions returned by connection. For Value=0, all
regions are returned.
’extern_alloc_funct’ Pointer to external function for memory allocation of result images. default: 0
’extern_free_funct’ Pointer to external function for memory deallocation of result images. default: 0
’image_cache_capacity’ Upper limit in bytes of the internal image memory cache. To speed up allocation of
new images HALCON does not free image memory but caches it to reuse it. Caching of freed images
is done as long as the upper limit is not reached. This functionality can be switched off by setting ’im-
age_cache_capacity’ to 0.
This parameter is only available in Standard HALCON and ignored in Parallel HALCON.
default: Standard HALCON: 4194304 (4MByte), else: 0
’global_mem_cache’ Cache mode of global memory, i.e., memory that is visible beyond an operator. It specifies
whether unused global memory should be cached (’shared’) or freed (’idle’). Generally, caching speeds up
memory allocation and processing at the cost of memory consumption. Additionally, Parallel HALCON of-
fers the option to cache global memory for each thread separately (’exclusive’). This mode can also accelerate
processing at the cost of higher memory consumption. Standard HALCON treats the value ’exclusive’ like
the value ’shared’.
Value: ’idle’,’exclusive’,’shared’
default: ’false’
’temporary_mem_cache’ Flag if unused temporary memory of an operator should be cached (’true’, default) or
freed (’false’). A single-threaded application can be sped up by caching, whereas freeing reduces the
memory consumption of a multithreaded application at the expense of speed.
Value: ’true’ or ’false’
default: ’true’
’alloctmp_max_blocksize’ Maximum size of memory blocks to be allocated within temporary memory manage-
ment. (No effect, if ’temporary_mem_cache’ == ’false’ ) Value: -1 or >= 0
default: -1
’mmx_enable’ Flag, if MMX operations are used to accelerate selected image processing operators (’true’) or
not (’false’). (No effect, if ’mmx_supported’ == ’false’, see also operator get_system) default: ’true’ if cpu
supports MMX, else ’false’
’language’ Language used for error messages. Value: ’english’ or ’german’. default: ’english’

Parameter
. SystemParameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Name of the system parameter to be changed.
Default Value : "image_dir"
List of values : SystemParameter ∈ {"alloctmp_max_blocksize", "backing_store",
"border_shape_models", "clip_region", "clock_mode", "current_runlength_number", "default_font",
"do_low_error", "empty_region_result", "extern_alloc_funct", "extern_free_funct", "filename_encoding",
"flush_file", "flush_graphic", "global_mem_cache", "graphic_colors", "help_dir", "icon_name",
"image_cache_capacity", "image_dir", "image_dpi", "init_new_image", "int2_bits", "int_zooming",
"language", "lut_dir", "max_connection", "mmx_enable", "neighborhood", "no_object_result",

HALCON 8.0.2
1006 CHAPTER 14. SYSTEM

"num_graphic_2", "num_graphic_4", "num_graphic_6", "num_graphic_8", "num_graphic_percentage",


"num_gray_4", "num_gray_6", "num_gray_8", "num_gray_percentage", "ocr_trainf_version",
"parallelize_operators", "pregenerate_shape_models", "reentrant", "store_empty_region",
"temporary_mem_cache", "thread_num", "thread_pool", "update_lut", "x_package"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong / double
New value of the system parameter.
Default Value : "true"
Suggested values : Value ∈ {"true", "false", 0, 4, 8, 100, 140, 255}
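Example (Syntax: C)

A minimal sketch, assuming the header name "HalconC.h"; only string-valued parameters are shown here,
numeric values or value tuples would be passed via the tuple version T_set_system. The directory path
is an example only:

#include "HalconC.h"   /* assumed HALCON/C header name */

void configure_halcon (void)
{
  /* search images in an additional directory (example path) */
  set_system ("image_dir", "/usr/local/halcon/images");

  /* suppress the flush after every graphics operation */
  set_system ("flush_graphic", "false");

  /* do not return objects with empty regions */
  set_system ("store_empty_region", "false");
}
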
Result
The operator set_system returns the value H_MSG_TRUE if the parameters are correct. Otherwise an excep-
tion will be raised.
Parallelization Information
set_system is local and processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db, get_system, set_check
See also
get_system, set_check, count_seconds
Module
Foundation

14.7 Serial
clear_serial ( Hlong SerialHandle, const char *Channel )
T_clear_serial ( const Htuple SerialHandle, const Htuple Channel )

Clear the buffer of a serial connection.


clear_serial discards data written to the serial device referred to by SerialHandle, but not transmitted
(Channel = ’output’), or data received, but not read (Channel = ’input’), or performs both these operations at
once (Channel = ’in_out’).
Parameter

. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong


Serial interface handle.
. Channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Buffer to be cleared.
Default Value : "input"
List of values : Channel ∈ {"input", "output", "in_out"}
Result
If the parameters are correct and the buffers of the serial device could be cleared, the operator clear_serial
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
clear_serial is reentrant and processed without parallelization.
Possible Predecessors
open_serial
Possible Successors
read_serial, write_serial
See also
read_serial
Module
Foundation


close_all_serials ( )
T_close_all_serials ( )

Close all serial devices.


close_all_serials closes all serial devices that have been opened with open_serial.
Result
close_all_serials always returns H_MSG_TRUE.
Parallelization Information
close_all_serials is reentrant and processed without parallelization.
Possible Predecessors
open_serial
Alternatives
close_serial
See also
open_serial, close_file
Module
Foundation

close_serial ( Hlong SerialHandle )


T_close_serial ( const Htuple SerialHandle )

Close a serial device.


close_serial closes a serial device that was opened with open_serial.
Parameter
. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong
Serial interface handle.
Result
If the parameters are correct and the device could be closed, the operator close_serial returns the value
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
close_serial is reentrant and processed without parallelization.
Possible Predecessors
open_serial
See also
open_serial, close_file
Module
Foundation

get_serial_param ( Hlong SerialHandle, Hlong *BaudRate,


Hlong *DataBits, char *FlowControl, char *Parity, Hlong *StopBits,
Hlong *TotalTimeOut, Hlong *InterCharTimeOut )

T_get_serial_param ( const Htuple SerialHandle, Htuple *BaudRate,


Htuple *DataBits, Htuple *FlowControl, Htuple *Parity,
Htuple *StopBits, Htuple *TotalTimeOut, Htuple *InterCharTimeOut )

Get the parameters of a serial device.


get_serial_param returns the current parameter settings of the serial device passed in SerialHandle. For
a description of the parameters of a serial device, see set_serial_param.


Parameter
. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong
Serial interface handle.
. BaudRate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Speed of the serial interface.
. DataBits (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of data bits of the serial interface.
. FlowControl (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of flow control of the serial interface.
. Parity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Parity of the serial interface.
. StopBits (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stop bits of the serial interface.
. TotalTimeOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Total timeout of the serial interface in ms.
. InterCharTimeOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Inter-character timeout of the serial interface in ms.
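Example (Syntax: C)

A minimal sketch that prints the current settings of an already opened device; the header name "HalconC.h"
and the sizes of the string buffers for FlowControl and Parity are assumptions:

#include <stdio.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

void print_serial_settings (Hlong serial_handle)
{
  Hlong baud, data_bits, stop_bits, total_timeout, char_timeout;
  char  flow_control[128], parity[128];   /* buffer sizes are an assumption */

  if (get_serial_param (serial_handle, &baud, &data_bits, flow_control, parity,
                        &stop_bits, &total_timeout, &char_timeout) == H_MSG_TRUE)
    printf ("%ld baud, %ld data bits, parity '%s', %ld stop bits\n",
            (long) baud, (long) data_bits, parity, (long) stop_bits);
}
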
Result
If the parameters are correct and the parameters of the device could be read, the operator get_serial_param
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
get_serial_param is reentrant and processed without parallelization.
Possible Predecessors
open_serial
Possible Successors
get_serial_param, read_serial, write_serial
See also
set_serial_param
Module
Foundation

open_serial ( const char *PortName, Hlong *SerialHandle )


T_open_serial ( const Htuple PortName, Htuple *SerialHandle )

Open a serial device.


open_serial opens a serial device. The name of the device is determined by the parameter PortName and is
operating system specific. On Windows machines, ’COM1’-’COM4’ is typically used, while on Unix systems the
serial devices usually are named ’/dev/tty*’. The parameters of the serial device, e.g., its speed or number of data
bits, are set to the system default values for the respective device after the device has been opened. They can be set
or changed by calling set_serial_param.
Parameter
. PortName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; const char *
Name of the serial port.
Default Value : "COM1"
Suggested values : PortName ∈ {"COM1", "COM2", "COM3", "COM4", "/dev/ttya", "/dev/ttyb",
"/dev/tty00", "/dev/tty01", "/dev/ttyd1", "/dev/ttyd2", "/dev/cua0", "/dev/cua1"}
. SerialHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong *
Serial interface handle.
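Example (Syntax: C)

A minimal sketch of opening a port, adjusting its parameters, and closing it again; the port name and the
parameter values are examples only, and "HalconC.h" is an assumed header name:

#include "HalconC.h"   /* assumed HALCON/C header name */

void talk_to_device (void)
{
  Hlong serial_handle;

  if (open_serial ("COM1", &serial_handle) != H_MSG_TRUE)
    return;

  /* 9600 baud, 8 data bits, no flow control, no parity, 1 stop bit,
     1000 ms total timeout, 100 ms inter-character timeout */
  set_serial_param (serial_handle, 9600, 8, "none", "none", 1, 1000, 100);

  /* ... read_serial / write_serial ... */

  close_serial (serial_handle);
}
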
Result
If the parameters are correct and the device could be opened, the operator open_serial returns the value
H_MSG_TRUE. Otherwise an exception is raised.


Parallelization Information
open_serial is reentrant and processed without parallelization.
Possible Successors
set_serial_param, read_serial, write_serial, close_serial
See also
set_serial_param, get_serial_param, open_file
Module
Foundation

read_serial ( Hlong SerialHandle, Hlong NumCharacters, Hlong *Data )


T_read_serial ( const Htuple SerialHandle, const Htuple NumCharacters,
Htuple *Data )

Read from a serial device.


read_serial tries to read NumCharacters from the serial device given in SerialHandle. The read
characters are returned in Data as a tuple of integers. This makes it possible to read NUL characters, which
would otherwise be interpreted as the end of a string. If the timeout of the serial device has been set to a value
greater than 0 with set_serial_param, read_serial waits at most as long for the arrival of the first
character as indicated by the timeout. Otherwise, the operator returns immediately. In any case, the number of
characters available at the time of return is passed back to the caller, i.e., fewer characters than requested can
be returned. This can be checked by the length of the tuple Data.
Parameter
. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; (Htuple .) Hlong
Serial interface handle.
. NumCharacters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of characters to read.
Default Value : 1
Suggested values : NumCharacters ∈ {1, 2, 3, 4, 5, 10, 20, 40, 100}
. Data (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Read characters (as tuple of integers).
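Example (Syntax: C)

A minimal sketch using the simple-mode call to poll single characters; the tuple version T_read_serial
would be used to request several characters at once and to check how many actually arrived. "HalconC.h"
is an assumed header name:

#include <stdio.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

void echo_characters (Hlong serial_handle)
{
  Hlong character;
  int   i;

  /* read at most 20 characters, one per call; depending on the configured
     timeouts, fewer characters than requested may arrive */
  for (i = 0; i < 20; i++)
  {
    if (read_serial (serial_handle, 1, &character) != H_MSG_TRUE)
      break;
    printf ("received: %ld\n", (long) character);
  }
}
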
Result
If the parameters are correct and the read from the device was successful, the operator read_serial returns
the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_serial is reentrant and processed without parallelization.
Possible Predecessors
open_serial
See also
write_serial
Module
Foundation

set_serial_param ( Hlong SerialHandle, Hlong BaudRate, Hlong DataBits,


const char *FlowControl, const char *Parity, Hlong StopBits,
Hlong TotalTimeOut, Hlong InterCharTimeOut )

T_set_serial_param ( const Htuple SerialHandle, const Htuple BaudRate,


const Htuple DataBits, const Htuple FlowControl, const Htuple Parity,
const Htuple StopBits, const Htuple TotalTimeOut,
const Htuple InterCharTimeOut )

Set the parameters of a serial device.


set_serial_param can be used to set the parameters of a serial device. The parameter BaudRate determines
the input and output speed of the device. It should be noted that not all devices support all possible speeds. The
number of sent and received data bits is set with DataBits. The parameter FlowControl determines if and
what kind of data flow control should be used. In the latter case, a choice between software control (’xon_xoff’) and
hardware control (’cts_rts’, ’dtr_dsr’) can be made. If and what kind of parity check of the transmitted data should
be performed can be determined by Parity. The number of stop bits sent is set with StopBits. Finally, two
timeouts for reading from the serial device can be set. The parameter TotalTimeOut determines the maximum
time that may pass in read_serial until the first character arrives, independent of the actual number of
characters requested. The parameter InterCharTimeOut determines the time which may pass between the
reading of individual characters, if multiple characters are requested with read_serial. If one of the timeouts
is set to -1, a read waits an arbitrary amount of time for the arrival of characters. If both timeouts are set to 0,
a read does not wait and returns the characters that are available (possibly none). Thus, on Windows systems, a
total timeout of TotalTimeOut + n * InterCharTimeOut results if n characters are to be read. On Unix systems,
only one of the two timeouts can be set. Thus, if both timeouts are passed larger than -1, only the total timeout
is used. The unit of both timeouts is milliseconds. It should be noted, however, that the timeout is specified in
increments of one tenth of a second on Unix systems, i.e., the minimum timeout that has any effect is 100. For
each parameter, the current values can be left in effect by passing ’unchanged’.
Parameter

. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong


Serial interface handle.
. BaudRate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Speed of the serial interface.
Default Value : "unchanged"
List of values : BaudRate ∈ {50, 75, 110, 134, 150, 200, 300, 600, 1200, 1800, 2400, 4800, 9600, 19200,
38400, 57600, 76800, 115200, 153600, 230400, 307200, 460800, "unchanged"}
. DataBits (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of data bits of the serial interface.
Default Value : "unchanged"
List of values : DataBits ∈ {5, 6, 7, 8, "unchanged"}
. FlowControl (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of flow control of the serial interface.
Default Value : "unchanged"
List of values : FlowControl ∈ {"none", "xon_xoff", "cts_rts", "dtr_dsr", "unchanged"}
. Parity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Parity of the serial interface.
Default Value : "unchanged"
List of values : Parity ∈ {"none", "odd", "even", "unchanged"}
. StopBits (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of stop bits of the serial interface.
Default Value : "unchanged"
List of values : StopBits ∈ {1, 2, "unchanged"}
. TotalTimeOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Total timeout of the serial interface in ms.
Default Value : "unchanged"
Suggested values : TotalTimeOut ∈ {-1, 0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000,
"unchanged"}
. InterCharTimeOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Inter-character timeout of the serial interface in ms.
Default Value : "unchanged"
Suggested values : InterCharTimeOut ∈ {-1, 0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000,
"unchanged"}
Result
If the parameters are correct and the parameters of the device could be set, the operator set_serial_param
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
set_serial_param is reentrant and processed without parallelization.


Possible Predecessors
open_serial, get_serial_param
Possible Successors
read_serial, write_serial
See also
get_serial_param
Module
Foundation

write_serial ( Hlong SerialHandle, Hlong Data )


T_write_serial ( const Htuple SerialHandle, const Htuple Data )

Write to a serial connection.


write_serial writes the characters given in Data to the serial device given by SerialHandle. The data
to be written is passed as a tuple of integers. This makes it possible to write NUL characters, which would
otherwise be interpreted as the end of a string. write_serial always waits until all data has been
transmitted, i.e., a timeout for writing cannot be set.
Parameter

. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; (Htuple .) Hlong


Serial interface handle.
. Data (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Characters to write (as tuple of integers).
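Example (Syntax: C)

A minimal sketch that transmits a C string one character per call using the simple-mode signature; the
tuple version T_write_serial would send all characters with one call. "HalconC.h" is an assumed header
name:

#include <string.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

void send_command (Hlong serial_handle, const char *command)
{
  size_t i;

  /* send each character of the command as an integer value */
  for (i = 0; i < strlen (command); i++)
    if (write_serial (serial_handle, (Hlong) command[i]) != H_MSG_TRUE)
      return;

  /* terminate the command, e.g. with a carriage return */
  write_serial (serial_handle, (Hlong) '\r');
}
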
Result
If the parameters are correct and the write to the device was successful, the operator write_serial returns the
value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
write_serial is reentrant and processed without parallelization.
Possible Predecessors
open_serial
See also
read_serial
Module
Foundation

14.8 Sockets
close_socket ( Hlong Socket )
T_close_socket ( const Htuple Socket )

Close a socket.
close_socket closes a socket that was previously opened with open_socket_accept,
open_socket_connect, or socket_accept_connect. For a detailed example, see
open_socket_accept.
Parameter

. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong


Socket number.
Parallelization Information
close_socket is reentrant and processed without parallelization.


See also
open_socket_accept, open_socket_connect, socket_accept_connect
Module
Foundation

get_next_socket_data_type ( Hlong Socket, char *DataType )


T_get_next_socket_data_type ( const Htuple Socket, Htuple *DataType )

Determine the HALCON data type of the next socket data.


get_next_socket_data_type determines the data type of the next data present on the socket Socket
and returns it in DataType. The possible values for DataType are:

’no_data’: No data are present.


’no_halcon_data’: Some data are present, but they are not HALCON data.
’tuple’: The next data is a tuple.
’region’: The next data is a region object.
’image’: The next data is an image object.
’xld_cont’: The next data is an XLD contour.
’xld_poly’: The next data is an XLD polygon.
’xld_para’: The next data is an XLD parallel.
’xld_mod_para’: The next data is a modified XLD parallel.
’xld_ext_para’: The next data is an extended XLD parallel.

Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. DataType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Data type of next HALCON data.
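Example (Syntax: C)

A minimal sketch that dispatches on the type of the incoming data before receiving it; "HalconC.h" and
the size of the DataType buffer are assumptions:

#include <string.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

void receive_next_object (Hlong socket)
{
  char    data_type[256];   /* buffer size is an assumption */
  Hobject image, region;

  if (get_next_socket_data_type (socket, data_type) != H_MSG_TRUE)
    return;

  if (strcmp (data_type, "image") == 0)
    receive_image (&image, socket);      /* blocks until the image has arrived */
  else if (strcmp (data_type, "region") == 0)
    receive_region (&region, socket);
  /* 'tuple', the XLD types, 'no_data', and 'no_halcon_data' would be handled here */
}
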
Parallelization Information
get_next_socket_data_type is reentrant and processed without parallelization.
See also
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
Module
Foundation

get_socket_descriptor ( Hlong Socket, Hlong *SocketDescriptor )


T_get_socket_descriptor ( const Htuple Socket,
Htuple *SocketDescriptor )

Get the socket descriptor of a socket used by the operating system.


get_socket_descriptor returns the socket descriptor used by the operating system for the socket connec-
tion that is passed in Socket. The socket descriptor can be used in operating system calls such as select,
read, write, recv, or send.


Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. SocketDescriptor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Socket descriptor used by the operating system.
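Example (Syntax: C)

A minimal, Unix-specific sketch that uses the descriptor with the select system call to wait for incoming
data; "HalconC.h" is an assumed header name:

#include <sys/select.h>
#include "HalconC.h"   /* assumed HALCON/C header name */

/* returns 1 if data is readable on the HALCON socket within 'seconds', 0 otherwise */
int wait_for_data (Hlong socket, long seconds)
{
  Hlong          descriptor;
  fd_set         read_fds;
  struct timeval timeout;

  if (get_socket_descriptor (socket, &descriptor) != H_MSG_TRUE)
    return 0;

  FD_ZERO (&read_fds);
  FD_SET ((int) descriptor, &read_fds);
  timeout.tv_sec  = seconds;
  timeout.tv_usec = 0;

  return select ((int) descriptor + 1, &read_fds, NULL, NULL, &timeout) > 0;
}
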
Parallelization Information
get_socket_descriptor is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept, open_socket_connect, socket_accept_connect
See also
set_socket_timeout
Module
Foundation

get_socket_timeout ( Hlong Socket, double *Timeout )


T_get_socket_timeout ( const Htuple Socket, Htuple *Timeout )

Get the timeout of a socket.


get_socket_timeout returns the timeout for the socket connection that is passed in Socket. For a description
of possible values of Timeout see set_socket_timeout.
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. Timeout (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double * / char *
Socket timeout.
Parallelization Information
get_socket_timeout is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept, open_socket_connect, socket_accept_connect
See also
set_socket_timeout
Module
Foundation

open_socket_accept ( Hlong Port, Hlong *AcceptingSocket )


T_open_socket_accept ( const Htuple Port, Htuple *AcceptingSocket )

Open a socket that accepts connection requests.


open_socket_accept opens a socket that accepts incoming connection requests by other HALCON pro-
cesses. This operator is the necessary first step in the establishment of a communication channel between two HAL-
CON processes. The socket listens for incoming connection requests on the port number given by Port. The ac-
cepting socket is returned in AcceptingSocket. open_socket_accept returns immediately without wait-
ing for a request from another process, done by calling open_socket_connect in the other process. This al-
lows multiple other processes to connect to the particular HALCON process that calls open_socket_accept.
To accept an incoming connection request, socket_accept_connect must be called after another process
has called open_socket_connect.


Parameter
. Port (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Port number.
Default Value : 3000
Typical range of values : 1024 ≤ Port ≤ 65535
Minimum Increment : 1
Recommended Increment : 1
. AcceptingSocket (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong *
Socket number.
Example (Syntax: HDevelop)

/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
/* Busy wait for an incoming connection */
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
OpenStatus := 5
while (OpenStatus # 2)
socket_accept_connect (AcceptingSocket, ’false’, Socket)
OpenStatus := Error
wait_seconds (0.2)
endwhile
dev_set_check (’give_error’)
/* Connection established */
receive_image (Image, Socket)
threshold (Image, Region, 0, 63)
send_region (Region, Socket)
receive_region (ConnectedRegions, Socket)
area_center (ConnectedRegions, Area, Row, Column)
send_tuple (Socket, Area)
send_tuple (Socket, Row)
send_tuple (Socket, Column)
close_socket (Socket)
close_socket (AcceptingSocket)

/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’fabrik’)
send_image (Image, Socket)
receive_region (Region, Socket)
connection (Region, ConnectedRegions)
send_region (ConnectedRegions, Socket)
receive_tuple (Socket, Area)
receive_tuple (Socket, Row)
receive_tuple (Socket, Column)
close_socket (Socket)

Parallelization Information
open_socket_accept is reentrant and processed without parallelization.
Possible Successors
socket_accept_connect
See also
open_socket_connect, close_socket, get_socket_timeout, set_socket_timeout,
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple


Module
Foundation

open_socket_connect ( const char *HostName, Hlong Port,


Hlong *Socket )

T_open_socket_connect ( const Htuple HostName, const Htuple Port,


Htuple *Socket )

Open a socket to an existing socket.


open_socket_connect opens a connection to an accepting socket on the computer HostName, which listens
on port Port. The listening socket in the other HALCON process must have been created earlier with the operator
open_socket_accept. The socket thus created is returned in Socket. To establish the connection, the
HALCON process, in which the accepting socket resides, must call socket_accept_connect. For a detailed
example, see open_socket_accept.
Parameter
. HostName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Hostname of the computer to connect to.
Default Value : "localhost"
. Port (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Port number.
Default Value : 3000
Typical range of values : 1024 ≤ Port ≤ 65535
Minimum Increment : 1
Recommended Increment : 1
. Socket (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong *
Socket number.
Parallelization Information
open_socket_connect is reentrant and processed without parallelization.
Possible Successors
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
See also
open_socket_accept, socket_accept_connect, get_socket_timeout,
set_socket_timeout, close_socket
Module
Foundation

receive_image ( Hobject *Image, Hlong Socket )


T_receive_image ( Hobject *Image, const Htuple Socket )

Receive an image over a socket connection.


receive_image reads an image object that was sent over the socket connection determined by Socket by another HALCON process using the operator send_image. If no image has been sent, the HALCON process calling receive_image blocks until enough data arrives. For a detailed example, see open_socket_accept.
Parameter
. Image (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real / complex / vector_field
Received image.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.


Parallelization Information
receive_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation

receive_region ( Hobject *Region, Hlong Socket )


T_receive_region ( Hobject *Region, const Htuple Socket )

Receive regions over a socket connection.


receive_region reads a region object that was sent over the socket connection determined by Socket by another HALCON process using the operator send_region. If no regions have been sent, the HALCON process calling receive_region blocks until enough data arrives. For a detailed example, see open_socket_accept.
Parameter

. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *


Received regions.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
Parallelization Information
receive_region is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_region, send_image, receive_image, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation

receive_tuple ( Hlong Socket, char *Tuple )


T_receive_tuple ( const Htuple Socket, Htuple *Tuple )

Receive a tuple over a socket connection.


receive_tuple reads a tuple that was sent over the socket connection determined by Socket by another
HALCON process using the operator send_tuple. If no tuple has been sent, the HALCON process calling
receive_tuple blocks until enough data arrives. For a detailed example, see open_socket_accept.
Parameter

. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong


Socket number.
. Tuple (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char * / double * / Hlong *
Received tuple.


Parallelization Information
receive_tuple is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_tuple, send_image, receive_image, send_region, receive_region,
get_next_socket_data_type
Module
Foundation

receive_xld ( Hobject *XLD, Hlong Socket )


T_receive_xld ( Hobject *XLD, const Htuple Socket )

Receive an XLD object over a socket connection.


receive_xld reads an XLD object that was sent over the socket connection determined by Socket by another HALCON process using the operator send_xld. If no XLD object has been sent, the HALCON process calling receive_xld blocks until enough data arrives. For a detailed example, see send_xld.
Parameter

. XLD (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject *


Received XLD object.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
Parallelization Information
receive_xld is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_xld, send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple, get_next_socket_data_type
Module
Foundation

send_image ( const Hobject Image, Hlong Socket )


T_send_image ( const Hobject Image, const Htuple Socket )

Send an image over a socket connection.


send_image sends an image object over the socket connection determined by Socket. The receiving HAL-
CON process must call receive_image to read the image from the socket. For a detailed example, see
open_socket_accept.
Parameter

. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Image to be sent.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.


Parallelization Information
send_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation

send_region ( const Hobject Region, Hlong Socket )


T_send_region ( const Hobject Region, const Htuple Socket )

Send regions over a socket connection.


send_region sends a region object over the socket connection determined by Socket. The receiving HAL-
CON process must call receive_region to read the regions from the socket. For a detailed example, see
open_socket_accept.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be sent.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
Parallelization Information
send_region is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_region, send_image, receive_image, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation

send_tuple ( Hlong Socket, const char *Tuple )


T_send_tuple ( const Htuple Socket, const Htuple Tuple )

Send a tuple over a socket connection.


send_tuple sends a tuple over the socket connection determined by Socket. The receiving HAL-
CON process must call receive_tuple to read the tuple from the socket. For a detailed example, see
open_socket_accept.
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. Tuple (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / double / Hlong
Tuple to be sent.
Parallelization Information
send_tuple is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect


See also
receive_tuple, send_image, receive_image, send_region, receive_region,
get_next_socket_data_type
Module
Foundation

send_xld ( const Hobject XLD, Hlong Socket )


T_send_xld ( const Hobject XLD, const Htuple Socket )

Send an XLD object over a socket connection.


send_xld sends an XLD object over the socket connection determined by Socket. The receiving HALCON
process must call receive_xld to read the XLD object from the socket.
Parameter
. XLD (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject
XLD object to be sent.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
Example (Syntax: HDevelop)

/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
socket_accept_connect (AcceptingSocket, ’true’, Socket)
receive_image (Image, Socket)
edges_sub_pix (Image, Edges, ’canny’, 1.5, 20, 40)
send_xld (Edges, Socket)
receive_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
gen_parallels_xld (Polygons, Parallels, 10, 30, 0.15, ’true’)
send_xld (Parallels, Socket)
receive_xld (ModParallels, Socket)
receive_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
close_socket (AcceptingSocket)

/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’mreut’)
send_image (Image, Socket)
receive_xld (Edges, Socket)
gen_polygons_xld (Edges, Polygons, ’ramer’, 2)
send_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
receive_xld (Parallels, Socket)
mod_parallels_xld (Parallels, Image, ModParallels, ExtParallels,
0.4, 160, 220, 10)
send_xld (ModParallels, Socket)
send_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)

Parallelization Information
send_xld is reentrant and processed without parallelization.


Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_xld, send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple, get_next_socket_data_type
Module
Foundation

set_socket_timeout ( Hlong Socket, double Timeout )


T_set_socket_timeout ( const Htuple Socket, const Htuple Timeout )

Set the timeout of a socket.


set_socket_timeout sets the timeout for the socket connection that is passed in Socket. The Timeout is used for reading and writing data via the socket as well as for calls to socket_accept_connect. If problems during the transmission of the data cause a timeout, the underlying protocol can no longer synchronize itself with the data. Therefore, in these cases, the only way to put the system back into a consistent state is to close both sockets and to open them anew. It should be noted that sometimes no error message is returned while reading data if the sending socket is closed while the receiving socket is waiting for data. In these cases, empty data (either objects or tuples) are returned.
The timeout is given in seconds as a floating-point number. It can also be set to ’infinite’, causing the read calls to wait indefinitely.
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. Timeout (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong / const char *
Socket timeout.
Default Value : "infinite"
Suggested values : Timeout ∈ {"infinite", 0, 1, 2, 3, 4, 5, 10, 30, 60}
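A minimal sketch in HDevelop syntax (added for illustration; the host name, port, and the value of 10 seconds are arbitrary choices) showing a timeout being set before a blocking read:

* illustrative sketch: limit the wait for incoming data to 10 seconds
open_socket_connect ('localhost', 3000, Socket)
set_socket_timeout (Socket, 10.0)
receive_tuple (Socket, Tuple)
close_socket (Socket)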
Parallelization Information
set_socket_timeout is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept, open_socket_connect, socket_accept_connect
Possible Successors
socket_accept_connect, receive_image, receive_region, receive_xld
See also
get_socket_timeout
Module
Foundation

socket_accept_connect ( Hlong AcceptingSocket, const char *Wait,


Hlong *Socket )

T_socket_accept_connect ( const Htuple AcceptingSocket,


const Htuple Wait, Htuple *Socket )

Accept a connection request on a listening socket.


socket_accept_connect accepts an incoming connection request, generated by open_socket_connect in another HALCON process, on the listening socket AcceptingSocket. The listening socket must have been created earlier with open_socket_accept. If Wait=’true’, socket_accept_connect waits until a connection request from another HALCON process arrives. If Wait=’false’, socket_accept_connect returns with the error FAIL if there are currently no connection
requests from other HALCON processes. The result of socket_accept_connect is another socket Socket,
which is used for a two-way communication with another HALCON process. After this connection has been
established, data can be exchanged between the two processes by calling the appropriate send or receive operators.
For a detailed example, see open_socket_accept.
Parameter

. AcceptingSocket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong


Socket number of the accepting socket.
. Wait (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should the operator wait until a connection request arrives?
List of values : Wait ∈ {"true", "false"}
. Socket (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong *
Socket number.
Parallelization Information
socket_accept_connect is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept
Possible Successors
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
See also
open_socket_connect, close_socket, get_socket_timeout, set_socket_timeout
Module
Foundation




Chapter 15

Tools

15.1 2D-Transformations
T_affine_trans_pixel ( const Htuple HomMat2D, const Htuple Row,
const Htuple Col, Htuple *RowTrans, Htuple *ColTrans )

Apply an arbitrary affine 2D transformation to pixel coordinates.


affine_trans_pixel applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation, and
slant (skewing), to the input pixels (Row,Col) and returns the resulting pixels in (RowTrans,ColTrans); the
input and output pixels are subpixel precise coordinates. The affine transformation is described by the homoge-
neous transformation matrix given in HomMat2D.
The difference between affine_trans_pixel and affine_trans_point_2d lies in the used coordinate
system: affine_trans_pixel uses a coordinate system with origin in the upper left corner of the image,
while affine_trans_point_2d uses the standard image coordinate system, whose origin lies in the middle
of the upper left pixel and which is also used by operators like area_center.
Applying affine_trans_pixel corresponds to the following chain of transformations (input and output pixels as homogeneous vectors):

\[
\begin{pmatrix} \mathtt{RowTrans} \\ \mathtt{ColTrans} \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & -0.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D} \cdot
\begin{pmatrix} 1 & 0 & +0.5 \\ 0 & 1 & +0.5 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} \mathtt{Row} \\ \mathtt{Col} \\ 1 \end{pmatrix}
\]

Hence,
affine_trans_pixel (HomMat2D, Row, Col, RowTrans, ColTrans)
corresponds to the following operator sequence:
affine_trans_point_2d (HomMat2D, Row+0.5, Col+0.5, RowTmp, ColTmp)
RowTrans := RowTmp-0.5
ColTrans := ColTmp-0.5
Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double


Input transformation matrix.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double / Hlong
Input pixel(s) (row coordinate).
Default Value : 64
Suggested values : Row ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double / Hlong
Input pixel(s) (column coordinate).
Default Value : 64
Suggested values : Col ∈ {0, 16, 32, 64, 128, 256, 512, 1024}


. RowTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double *


Output pixel(s) (row coordinate).
. ColTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double *
Output pixel(s) (column coordinate).
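A minimal sketch in HDevelop syntax (illustrative only; the angle, the fixed point, and the pixel coordinates are arbitrary values) that builds a rotation matrix and transforms a single pixel:

* illustrative sketch: rotate the pixel (100,200) by 0.78 rad about (256,256)
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, 0.78, 256, 256, HomMat2D)
affine_trans_pixel (HomMat2D, 100, 200, RowTrans, ColTrans)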
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_pixel returns H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
affine_trans_pixel is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Alternatives
affine_trans_point_2d
Module
Foundation

T_affine_trans_point_2d ( const Htuple HomMat2D, const Htuple Px,


const Htuple Py, Htuple *Qx, Htuple *Qy )

Apply an arbitrary affine 2D transformation to points.


affine_trans_point_2d applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation, and
slant (skewing), to the input points (Px,Py) and returns the resulting points in (Qx,Qy). The affine transformation
is described by the homogeneous transformation matrix given in HomMat2D. This corresponds to the following
equation (input and output points as homogeneous vectors):

\[
\begin{pmatrix} \mathtt{Qx} \\ \mathtt{Qy} \\ 1 \end{pmatrix} =
\mathtt{HomMat2D} \cdot
\begin{pmatrix} \mathtt{Px} \\ \mathtt{Py} \\ 1 \end{pmatrix}
\]

If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
The transformation matrix can be created using the operators hom_mat2d_identity,
hom_mat2d_rotate, hom_mat2d_translate, etc., or can be the result of operators like
vector_angle_to_rigid.
For example, if HomMat2D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:

\[
\begin{pmatrix} \mathtt{Qx} \\ \mathtt{Qy} \\ 1 \end{pmatrix} =
\begin{pmatrix} R & t \\ 0^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} \mathtt{Px} \\ \mathtt{Py} \\ 1 \end{pmatrix} =
\begin{pmatrix} R \cdot \begin{pmatrix} \mathtt{Px} \\ \mathtt{Py} \end{pmatrix} + t \\ 1 \end{pmatrix}
\]

Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double


Input transformation matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double / Hlong
Input point(s) (x or row coordinate).
Default Value : 64
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}


. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double / Hlong


Input point(s) (y or column coordinate).
Default Value : 64
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double *
Output point(s) (x or row coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double *
Output point(s) (y or column coordinate).
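A minimal sketch in HDevelop syntax (illustrative; the transformation parameters and the input point are arbitrary) that applies a rigid transformation, i.e., a rotation followed by a translation, to a point:

* illustrative sketch: rotate by 0.78 rad about the origin, then translate by (10,20)
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, 0.78, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, 10, 20, HomMat2D)
affine_trans_point_2d (HomMat2D, 64, 64, Qx, Qy)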
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_point_2d returns H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
affine_trans_point_2d is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation

T_bundle_adjust_mosaic ( const Htuple NumImages,


const Htuple ReferenceImage, const Htuple MappingSource,
const Htuple MappingDest, const Htuple HomMatrices2D,
const Htuple Rows1, const Htuple Cols1, const Htuple Rows2,
const Htuple Cols2, const Htuple NumCorrespondences,
const Htuple Transformation, Htuple *MosaicMatrices2D, Htuple *Rows,
Htuple *Cols, Htuple *Error )

Perform a bundle adjustment of an image mosaic.


bundle_adjust_mosaic performs a bundle adjustment of an image mosaic. This can be used to determine
the geometry of a mosaic as robustly as possible, and hence to determine the transformations of the images in the
mosaic more accurately than with single image pairs.
To achieve this, the projective transformation for each overlapping image pair in the mosaic should be determined
with proj_match_points_ransac. For example, for a 2×2 block of images in the following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2, 1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in HomMatrices2D.
Additionally, the coordinates of the matched point pairs in the image pairs must be passed in Rows1, Cols1,
Rows2, and Cols2. They can be determined from the output of proj_match_points_ransac with
tuple_select or with the HDevelop function subset. To enable bundle_adjust_mosaic to deter-
mine which point pair belongs to which image pair, NumCorrespondences must contain the number of found
point matches for each image pair.


The parameter Transformation determines the class of transformations that is used in the bundle adjustment
to transform the image points. This can be used to restrict the allowable transformations. For Transformation
= ’projective’, projective transformations are used (see vector_to_proj_hom_mat2d). For
Transformation = ’affine’, affine transformations are used (see vector_to_hom_mat2d), for
Transformation = ’similarity’, similarity transformations (see vector_to_similarity), and for
Transformation = ’rigid’ rigid transformations (see vector_to_rigid).
The resulting bundle-adjusted transformations are returned as an array of 3 × 3 projective transformation matrices
in MosaicMatrices2D. In addition, the points reconstructed by the bundle adjustment are returned in (Rows,
Cols). The average projection error of the reconstructed points is returned in Error. This can be used to check
whether the optimization has converged to useful values.
Parameter
. NumImages (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of different images that are used for the calibration.
Restriction : NumImages ≥ 2
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Index of the reference image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 projective transformation matrices.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective source images.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective source images.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective destination images.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective destination images.
. NumCorrespondences (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of point correspondences in the respective image pair.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Transformation class to be used.
Default Value : "projective"
List of values : Transformation ∈ {"projective", "affine", "similarity", "rigid"}
. MosaicMatrices2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Array of 3 × 3 projective transformation matrices that determine the position of the images in the mosaic.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Row coordinates of the points reconstructed by the bundle adjustment.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Column coordinates of the points reconstructed by the bundle adjustment.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Average error per reconstructed point.
Example (Syntax: HDevelop)

* Assume that Images contains the four images of the mosaic in the
* layout given in the above description. Then the following example
* computes the bundle-adjusted transformation matrices.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
’ncc’, 10, 0, 0, 480, 640, 0, 0.5,
’gold_standard’, 2, 42, HomMat2D,
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
bundle_adjust_mosaic (4, 1, From, To, HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches, ’rigid’, MosaicMatrices2D,
Rows, Cols, Error)
gen_bundle_adjusted_mosaic (Images, MosaicImage, MosaicMatrices2D,
’default’, ’false’, TransMat2D)

Result
If the parameters are valid, the operator bundle_adjust_mosaic returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
bundle_adjust_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_bundle_adjusted_mosaic
See also
gen_projective_mosaic
Module
Matching

T_hom_mat2d_compose ( const Htuple HomMat2DLeft,


const Htuple HomMat2DRight, Htuple *HomMat2DCompose )

Multiply two homogeneous 2D transformation matrices.


hom_mat2d_compose composes a new 2D transformation matrix by multiplying the two input matrices:

HomMat2DCompose = HomMat2DLeft · HomMat2DRight

For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:

\[
\mathtt{HomMat2DCompose} =
\begin{pmatrix} R_l & t_l \\ 0^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} R_r & t_r \\ 0^\top & 1 \end{pmatrix} =
\begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0^\top & 1 \end{pmatrix}
\]


Parameter
. HomMat2DLeft (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Left input transformation matrix.
. HomMat2DRight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Right input transformation matrix.
. HomMat2DCompose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
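A minimal sketch in HDevelop syntax (illustrative; the angle and the translation are arbitrary values) that composes a translation with a rotation; since the rotation matrix is the right operand, it is applied to a point first:

* illustrative sketch: HomMat2DCompose first rotates a point, then translates it
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, 0.78, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DIdentity, 10, 20, HomMat2DTranslate)
hom_mat2d_compose (HomMat2DTranslate, HomMat2DRotate, HomMat2DCompose)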
Result
If the parameters are valid, the operator hom_mat2d_compose returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat2d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_compose, hom_mat2d_translate, hom_mat2d_translate_local,
hom_mat2d_scale, hom_mat2d_scale_local, hom_mat2d_rotate,
hom_mat2d_rotate_local, hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation

T_hom_mat2d_determinant ( const Htuple HomMat2D,


Htuple *Determinant )

Compute the determinant of a homogeneous 2D transformation matrix.


hom_mat2d_determinant computes the determinant of the homogeneous 2D transformation matrix given by
HomMat2D and returns it in Determinant.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Determinant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Determinant of the input matrix.
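A minimal sketch in HDevelop syntax (illustrative; the scale factors are arbitrary). For a pure scaling, the determinant equals Sx · Sy, i.e., the factor by which areas are scaled; a negative determinant would indicate a reflection:

* illustrative sketch: Determinant = 2 * 3 = 6 for this scaling matrix
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, 2, 3, 0, 0, HomMat2D)
hom_mat2d_determinant (HomMat2D, Determinant)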
Result
hom_mat2d_determinant always returns H_MSG_TRUE.
Parallelization Information
hom_mat2d_determinant is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation

T_hom_mat2d_identity ( Htuple *HomMat2DIdentity )

Generate the homogeneous transformation matrix of the identical 2D transformation.


hom_mat2d_identity generates the homogeneous transformation matrix HomMat2DIdentity describing
the identical 2D transformation:

\[
\mathtt{HomMat2DIdentity} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat2DIdentity is stored as the
tuple [1,0,0,0,1,0].
Parameter

. HomMat2DIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *


Transformation matrix.
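A minimal sketch in HDevelop syntax (illustrative; the subsequent transformations are arbitrary) showing the typical use of hom_mat2d_identity as the starting point of a transformation chain:

* illustrative sketch: start with the identity, then add a rotation and a translation
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, 0.78, 128, 128, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, 10, 20, HomMat2D)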
Result
hom_mat2d_identity always returns H_MSG_TRUE.
Parallelization Information
hom_mat2d_identity is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation

T_hom_mat2d_invert ( const Htuple HomMat2D, Htuple *HomMat2DInvert )

Invert a homogeneous 2D transformation matrix.


hom_mat2d_invert inverts the homogeneous 2D transformation matrix given by HomMat2D. The resulting
matrix is returned in HomMat2DInvert.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. HomMat2DInvert (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
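A minimal sketch in HDevelop syntax (illustrative; the translation and the point are arbitrary): the inverted matrix maps a transformed point back to its original position:

* illustrative sketch: (PxBack,PyBack) equals the original point (64,64) again
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_translate (HomMat2DIdentity, 10, 20, HomMat2D)
hom_mat2d_invert (HomMat2D, HomMat2DInvert)
affine_trans_point_2d (HomMat2D, 64, 64, Qx, Qy)
affine_trans_point_2d (HomMat2DInvert, Qx, Qy, PxBack, PyBack)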
Result
hom_mat2d_invert returns H_MSG_TRUE if the parameters are valid and the input matrix is invertible. Oth-
erwise, an exception is raised.
Parallelization Information
hom_mat2d_invert is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation


T_hom_mat2d_rotate ( const Htuple HomMat2D, const Htuple Phi,


const Htuple Px, const Htuple Py, Htuple *HomMat2DRotate )

Add a rotation to a homogeneous 2D transformation matrix.


hom_mat2d_rotate adds a rotation by the angle Phi to the homogeneous 2D transformation matrix
HomMat2D and returns the resulting matrix in HomMat2DRotate. The rotation is described by a 2×2 rotation
matrix R. It is performed relative to the global (i.e., fixed) coordinate system; this corresponds to the following
chain of transformation matrices:

\[
\mathtt{HomMat2DRotate} =
\begin{pmatrix} R & 0 \\ 0^\top & 1 \end{pmatrix} \cdot \mathtt{HomMat2D},
\qquad
R = \begin{pmatrix} \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) \\ \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) \end{pmatrix}
\]

The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:

\[
\mathtt{HomMat2DRotate} =
\begin{pmatrix} 1 & 0 & +\mathtt{Px} \\ 0 & 1 & +\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} R & 0 \\ 0^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & -\mathtt{Px} \\ 0 & 1 & -\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D}
\]

To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_rotate_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix

\[
\begin{pmatrix} r_a & r_b & t_c \\ r_d & r_e & t_f \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *


Output transformation matrix.
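A minimal sketch in HDevelop syntax (illustrative; the angle and the fixed point are arbitrary) showing that the fixed point is mapped onto itself:

* illustrative sketch: the fixed point (100,200) is mapped onto itself, i.e., Qx = 100, Qy = 200
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_rotate (HomMat2DIdentity, 0.78, 100, 200, HomMat2DRotate)
affine_trans_point_2d (HomMat2DRotate, 100, 200, Qx, Qy)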
Result
If the parameters are valid, the operator hom_mat2d_rotate returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat2d_rotate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_rotate_local
Module
Foundation

T_hom_mat2d_rotate_local ( const Htuple HomMat2D, const Htuple Phi,


Htuple *HomMat2DRotate )

Add a rotation to a homogeneous 2D transformation matrix.


hom_mat2d_rotate_local adds a rotation by the angle Phi to the homogeneous 2D transformation matrix
HomMat2D and returns the resulting matrix in HomMat2DRotate. The rotation is described by a 2×2 rotation
matrix R. In contrast to hom_mat2d_rotate, it is performed relative to the local coordinate system, i.e., the
coordinate system described by HomMat2D; this corresponds to the following chain of transformation matrices:

\[
\mathtt{HomMat2DRotate} = \mathtt{HomMat2D} \cdot
\begin{pmatrix} R & 0 \\ 0^\top & 1 \end{pmatrix},
\qquad
R = \begin{pmatrix} \cos(\mathtt{Phi}) & -\sin(\mathtt{Phi}) \\ \sin(\mathtt{Phi}) & \cos(\mathtt{Phi}) \end{pmatrix}
\]

The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DRotate.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix

\[
\begin{pmatrix} r_a & r_b & t_c \\ r_d & r_e & t_f \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double


Input transformation matrix.


. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong


Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. HomMat2DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
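A minimal sketch in HDevelop syntax (illustrative; the values are arbitrary) contrasting the local variant with hom_mat2d_rotate: adding the rotation in the local system does not change where the origin of the local coordinate system is mapped:

* illustrative sketch: both matrices map the local origin (0,0) to (10,20), i.e., Qx = 10, Qy = 20
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_translate (HomMat2DIdentity, 10, 20, HomMat2DTranslate)
hom_mat2d_rotate_local (HomMat2DTranslate, 0.78, HomMat2DRotate)
affine_trans_point_2d (HomMat2DRotate, 0, 0, Qx, Qy)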
Result
If the parameters are valid, the operator hom_mat2d_rotate_local returns H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
hom_mat2d_rotate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_rotate
Module
Foundation

T_hom_mat2d_scale ( const Htuple HomMat2D, const Htuple Sx,


const Htuple Sy, const Htuple Px, const Htuple Py,
Htuple *HomMat2DScale )

Add a scaling to a homogeneous 2D transformation matrix.


hom_mat2d_scale adds a scaling by the scale factors Sx and Sy to the homogeneous 2D transformation matrix
HomMat2D and returns the resulting matrix in HomMat2DScale. The scaling is described by a 2×2 scaling
matrix S. It is performed relative to the global (i.e., fixed) coordinate system; this corresponds to the following
chain of transformation matrices:

\[
\mathtt{HomMat2DScale} =
\begin{pmatrix} S & 0 \\ 0^\top & 1 \end{pmatrix} \cdot \mathtt{HomMat2D},
\qquad
S = \begin{pmatrix} \mathtt{Sx} & 0 \\ 0 & \mathtt{Sy} \end{pmatrix}
\]

The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:

\[
\mathtt{HomMat2DScale} =
\begin{pmatrix} 1 & 0 & +\mathtt{Px} \\ 0 & 1 & +\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} S & 0 \\ 0^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & -\mathtt{Px} \\ 0 & 1 & -\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D}
\]

To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_scale_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix

\[
\begin{pmatrix} r_a & r_b & t_c \\ r_d & r_e & t_f \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sy ≠ 0
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
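A minimal sketch in HDevelop syntax (illustrative; the scale factors, the fixed point, and the test point are arbitrary) that scales around a fixed point, which itself remains unchanged:

* illustrative sketch: (100,100) stays fixed, the point (110,100) is mapped to (120,100)
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, 2, 2, 100, 100, HomMat2DScale)
affine_trans_point_2d (HomMat2DScale, 110, 100, Qx, Qy)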
Result
hom_mat2d_scale returns H_MSG_TRUE if both scale factors are not 0. If necessary, an exception is raised.
Parallelization Information
hom_mat2d_scale is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_scale_local
Module
Foundation

T_hom_mat2d_scale_local ( const Htuple HomMat2D, const Htuple Sx,


const Htuple Sy, Htuple *HomMat2DScale )

Add a scaling to a homogeneous 2D transformation matrix.


hom_mat2d_scale_local adds a scaling by the scale factors Sx and Sy to the homogeneous 2D transforma-
tion matrix HomMat2D and returns the resulting matrix in HomMat2DScale. The scaling is described by a 2×2
scaling matrix S. In contrast to hom_mat2d_scale, it is performed relative to the local coordinate system,
i.e., the coordinate system described by HomMat2D; this corresponds to the following chain of transformation
matrices:

\[
\mathtt{HomMat2DScale} = \mathtt{HomMat2D} \cdot
\begin{pmatrix} S & 0 \\ 0^\top & 1 \end{pmatrix},
\qquad
S = \begin{pmatrix} \mathtt{Sx} & 0 \\ 0 & \mathtt{Sy} \end{pmatrix}
\]

The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DScale.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix

\[
\begin{pmatrix} r_a & r_b & t_c \\ r_d & r_e & t_f \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sy ≠ 0
. HomMat2DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
hom_mat2d_scale_local returns H_MSG_TRUE if both scale factors are not 0. If necessary, an exception
is raised.
Parallelization Information
hom_mat2d_scale_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_scale
Module
Foundation


T_hom_mat2d_slant ( const Htuple HomMat2D, const Htuple Theta,


const Htuple Axis, const Htuple Px, const Htuple Py,
Htuple *HomMat2DSlant )

Add a slant to a homogeneous 2D transformation matrix.


hom_mat2d_slant adds a slant by the angle Theta to the homogeneous 2D transformation matrix HomMat2D
and returns the resulting matrix in HomMat2DSlant. A slant is an affine transformation in which one coordinate
axis remains fixed, while the other coordinate axis is rotated counterclockwise by an angle Theta. The parameter
Axis determines which coordinate axis is slanted. For Axis = ’x’, the x-axis is slanted and the y-axis remains
fixed, while for Axis = ’y’ the y-axis is slanted and the x-axis remains fixed. The slanting is performed relative to
the global (i.e., fixed) coordinate system; this corresponds to the following chains of transformation matrices:

\[
\mathtt{Axis} = \text{'x'}: \quad \mathtt{HomMat2DSlant} =
\begin{pmatrix} \cos(\mathtt{Theta}) & 0 & 0 \\ \sin(\mathtt{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D}
\]
\[
\mathtt{Axis} = \text{'y'}: \quad \mathtt{HomMat2DSlant} =
\begin{pmatrix} 1 & -\sin(\mathtt{Theta}) & 0 \\ 0 & \cos(\mathtt{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D}
\]

The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DSlant. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the slant is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations for Axis = ’x’:

\[
\mathtt{HomMat2DSlant} =
\begin{pmatrix} 1 & 0 & +\mathtt{Px} \\ 0 & 1 & +\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} \cos(\mathtt{Theta}) & 0 & 0 \\ \sin(\mathtt{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & -\mathtt{Px} \\ 0 & 1 & -\mathtt{Py} \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathtt{HomMat2D}
\]

To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_slant_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix

\[
\begin{pmatrix} r_a & r_b & t_c \\ r_d & r_e & t_f \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Coordinate axis that is slanted.
Default Value : "x"
List of values : Axis ∈ {"x", "y"}
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DSlant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
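A minimal sketch in HDevelop syntax (illustrative; the angle is arbitrary) that slants the x-axis while the y-axis stays fixed:

* illustrative sketch: (1,0) is mapped to (cos(0.3), sin(0.3)); points on the y-axis are unchanged
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_slant (HomMat2DIdentity, 0.3, 'x', 0, 0, HomMat2DSlant)
affine_trans_point_2d (HomMat2DSlant, 1, 0, Qx, Qy)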
Result
If the parameters are valid, the operator hom_mat2d_slant returns H_MSG_TRUE. If necessary, an exception
is raised.
Parallelization Information
hom_mat2d_slant is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_slant_local
Module
Foundation

T_hom_mat2d_slant_local ( const Htuple HomMat2D, const Htuple Theta,


const Htuple Axis, Htuple *HomMat2DSlant )

Add a slant to a homogeneous 2D transformation matrix.


hom_mat2d_slant_local adds a slant by the angle Theta to the homogeneous 2D transformation matrix
HomMat2D and returns the resulting matrix in HomMat2DSlant. A slant is an affine transformation in which
one coordinate axis remains fixed, while the other coordinate axis is rotated counterclockwise by an angle Theta.
The parameter Axis determines which coordinate axis is slanted. For Axis = ’x’, the x-axis is slanted and
the y-axis remains fixed, while for Axis = ’y’ the y-axis is slanted and the x-axis remains fixed. In contrast to
hom_mat2d_slant, the slanting is performed relative to the local coordinate system, i.e., the coordinate system
described by HomMat2D; this corresponds to the following chains of transformation matrices:

\[
\mathtt{Axis} = \text{'x'}: \quad \mathtt{HomMat2DSlant} = \mathtt{HomMat2D} \cdot
\begin{pmatrix} \cos(\mathtt{Theta}) & 0 & 0 \\ \sin(\mathtt{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
\mathtt{Axis} = \text{'y'}: \quad \mathtt{HomMat2DSlant} = \mathtt{HomMat2D} \cdot
\begin{pmatrix} 1 & -\sin(\mathtt{Theta}) & 0 \\ 0 & \cos(\mathtt{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DSlant.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
 
\[
\begin{pmatrix} ra & rb & tc \\ rd & re & tf \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
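As a small illustration of this storage convention, the following self-contained C snippet (the numbers are arbitrary example values, not HALCON code) interprets such a 6-tuple as the first two rows of the matrix and applies the transformation to one point; the omitted last row (0 0 1) is implicit.

#include <stdio.h>

int main(void)
{
    /* a homogeneous 2D matrix as stored in a tuple: [ra, rb, tc, rd, re, tf],
       i.e., the first two rows, row by row; the row (0 0 1) is omitted */
    double m[6] = { 0.8, -0.6, 10.0,    /* ra rb tc */
                    0.6,  0.8, 20.0 };  /* rd re tf */

    /* applying the matrix to a point (x, y) uses the implicit last row */
    double x = 5.0, y = 3.0;
    double xt = m[0] * x + m[1] * y + m[2];
    double yt = m[3] * x + m[4] * y + m[5];

    printf("(%g, %g) -> (%g, %g)\n", x, y, xt, yt);
    return 0;
}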
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Coordinate axis that is slanted.
Default Value : "x"
List of values : Axis ∈ {"x", "y"}
. HomMat2DSlant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_slant_local returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_slant_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_slant
Module
Foundation

T_hom_mat2d_to_affine_par ( const Htuple HomMat2D, Htuple *Sx,
Htuple *Sy, Htuple *Phi, Htuple *Theta, Htuple *Tx, Htuple *Ty )

Compute the affine transformation parameters from a homogeneous 2D transformation matrix.


hom_mat2d_to_affine_par computes the affine transformation parameters corresponding to the homoge-
neous 2D transformation matrix HomMat2D. The parameters Sx and Sy determine how the transformation scales
the original x- and y-axes, respectively. The two scaling factors are always positive. The angle Theta describes
whether the transformed coordinate axes are orthogonal (Theta = 0) or slanted. If |Theta| > π/2, the transfor-
mation contains a reflection. The angle Phi determines the rotation of the transformed x-axis with respect to the
original x-axis. The parameters Tx and Ty determine the translation of the two coordinate systems. The matrix
HomMat2D can be constructed from the six transformation parameters by the following operator sequence:


hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, Sx, Sy, 0, 0, HomMat2DScale)
hom_mat2d_slant (HomMat2DScale, Theta, ’y’, 0, 0, HomMat2DSlant)
hom_mat2d_rotate (HomMat2DSlant, Phi, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, Tx, Ty, HomMat2D)

This is equivalent to the following chain of transformation matrices:



      
\[
\mathrm{HomMat2D} =
\begin{pmatrix} 1 & 0 & \mathrm{Tx} \\ 0 & 1 & \mathrm{Ty} \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} \cos(\mathrm{Phi}) & -\sin(\mathrm{Phi}) & 0 \\ \sin(\mathrm{Phi}) & \cos(\mathrm{Phi}) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & -\sin(\mathrm{Theta}) & 0 \\ 0 & \cos(\mathrm{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} \mathrm{Sx} & 0 & 0 \\ 0 & \mathrm{Sy} & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
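The chain above can be multiplied out directly. The following plain C sketch (not HALCON code; the six parameter values are invented) prints the resulting matrix as the 6-tuple [ra, rb, tc, rd, re, tf] and can serve as a cross-check for a decomposition obtained from hom_mat2d_to_affine_par.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* invented example values for the six parameters */
    double Sx = 2.0, Sy = 0.5, Phi = 0.3, Theta = 0.1, Tx = 10.0, Ty = 20.0;

    /* rotation R(Phi) times the slant of the y-axis by Theta (third factor above) */
    double m00 = cos(Phi);
    double m01 = cos(Phi) * (-sin(Theta)) + (-sin(Phi)) * cos(Theta);
    double m10 = sin(Phi);
    double m11 = sin(Phi) * (-sin(Theta)) + cos(Phi) * cos(Theta);

    /* multiply by the scaling matrix from the right, then append the translation;
       the result is the 2 x 3 part of HomMat2D, stored row by row */
    double hom_mat2d[6] = { m00 * Sx, m01 * Sy, Tx,
                            m10 * Sx, m11 * Sy, Ty };

    printf("[%g, %g, %g, %g, %g, %g]\n",
           hom_mat2d[0], hom_mat2d[1], hom_mat2d[2],
           hom_mat2d[3], hom_mat2d[4], hom_mat2d[5]);
    return 0;
}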

Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Sx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Scaling factor along the x direction.
. Sy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Scaling factor along the y direction.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double *
Rotation angle.
. Theta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double *
Slant angle.
. Tx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double *
Translation along the x direction.
. Ty (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double *
Translation along the y direction.
Result
If the matrix HomMat2D is non-degenerate and represents an affine transformation (i.e., not a projective transfor-
mation), hom_mat2d_to_affine_par returns H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
hom_mat2d_to_affine_par is reentrant and processed without parallelization.
Possible Predecessors
vector_to_hom_mat2d, vector_to_rigid, vector_to_similarity
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
Module
Foundation

T_hom_mat2d_translate ( const Htuple HomMat2D, const Htuple Tx,
const Htuple Ty, Htuple *HomMat2DTranslate )

Add a translation to a homogeneous 2D transformation matrix.


hom_mat2d_translate adds a translation by the vector t = (Tx,Ty) to the homogeneous 2D transformation
matrix HomMat2D and returns the resulting matrix in HomMat2DTranslate. The translation is performed
relative to the global (i.e., fixed) coordinate system; this corresponds to the following chain of transformation
matrices:
 
\[
\mathrm{HomMat2DTranslate} =
\begin{pmatrix} 1 & 0 & \mathrm{Tx} \\ 0 & 1 & \mathrm{Ty} \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\qquad
t = \begin{pmatrix} \mathrm{Tx} \\ \mathrm{Ty} \end{pmatrix}
\]

To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_translate_local.

Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
 
\[
\begin{pmatrix} ra & rb & tc \\ rd & re & tf \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
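Example (Syntax: C)
A minimal sketch of a call from C. It assumes the usual HALCON/C header HalconC.h and the tuple helpers create_tuple, set_d, get_d, and destroy_tuple; apart from these assumptions it uses only the signature documented above, and the identity input matrix is built directly as the 6-tuple [1, 0, 0, 0, 1, 0].

#include <stdio.h>
#include "HalconC.h"

int main(void)
{
    /* identity matrix stored row by row as the 6-tuple [1, 0, 0, 0, 1, 0] */
    double identity[6] = { 1, 0, 0, 0, 1, 0 };
    Htuple HomMat2D, Tx, Ty, HomMat2DTranslate;
    Hlong  i;

    create_tuple(&HomMat2D, 6);
    for (i = 0; i < 6; i++)
        set_d(HomMat2D, identity[i], i);

    create_tuple(&Tx, 1);  set_d(Tx, 64.0, 0);   /* translation of the x (row) axis    */
    create_tuple(&Ty, 1);  set_d(Ty, 32.0, 0);   /* translation of the y (column) axis */

    T_hom_mat2d_translate(HomMat2D, Tx, Ty, &HomMat2DTranslate);

    /* expected output matrix: [1, 0, 64, 0, 1, 32] */
    for (i = 0; i < 6; i++)
        printf("%g ", get_d(HomMat2DTranslate, i));
    printf("\n");

    destroy_tuple(HomMat2D);  destroy_tuple(Tx);
    destroy_tuple(Ty);        destroy_tuple(HomMat2DTranslate);
    return 0;
}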
Result
If the parameters are valid, the operator hom_mat2d_translate returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_translate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_translate_local
Module
Foundation

T_hom_mat2d_translate_local ( const Htuple HomMat2D, const Htuple Tx,
const Htuple Ty, Htuple *HomMat2DTranslate )

Add a translation to a homogeneous 2D transformation matrix.


hom_mat2d_translate_local adds a translation by the vector t = (Tx,Ty) to the homogeneous 2D
transformation matrix HomMat2D and returns the resulting matrix in HomMat2DTranslate. In contrast to
hom_mat2d_translate, the translation is performed relative to the local coordinate system, i.e., the coordi-
nate system described by HomMat2D; this corresponds to the following chain of transformation matrices:

\[
\mathrm{HomMat2DTranslate} = \mathrm{HomMat2D} \cdot
\begin{pmatrix} 1 & 0 & \mathrm{Tx} \\ 0 & 1 & \mathrm{Ty} \\ 0 & 0 & 1 \end{pmatrix}
\qquad
t = \begin{pmatrix} \mathrm{Tx} \\ \mathrm{Ty} \end{pmatrix}
\]

Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
 
\[
\begin{pmatrix} ra & rb & tc \\ rd & re & tf \\ 0 & 0 & 1 \end{pmatrix}
\]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_translate_local returns H_MSG_TRUE. If neces-
sary, an exception is raised.
Parallelization Information
hom_mat2d_translate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_translate
Module
Foundation

T_hom_mat2d_transpose ( const Htuple HomMat2D,
Htuple *HomMat2DTranspose )

Transpose a homogeneous 2D transformation matrix.

hom_mat2d_transpose transposes the homogeneous 2D transformation matrix given by HomMat2D. The
result matrix HomMat2DTranspose is always a 3 × 3 matrix, even if the input matrix is represented by a 2 × 3
matrix.
Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. HomMat2DTranspose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
hom_mat2d_transpose always returns H_MSG_TRUE.
Parallelization Information
hom_mat2d_transpose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_compose, hom_mat2d_invert
Module
Foundation

T_hom_mat3d_project ( const Htuple HomMat3D,
const Htuple PrincipalPointRow, const Htuple PrincipalPointCol,
const Htuple Focus, Htuple *HomMat2D )

Project an affine 3D transformation matrix to a 2D projective transformation matrix.


hom_mat3d_project calculates a homogeneous projection matrix from a homogeneous 3×4 transformation
matrix describing an affine transformation in 3D. The result can be used to project a plane, in particular a plane
containing an image. The projection matrix defines a projective transformation between two (two-dimensional)
planes.
This can be used to create perspective distortions, which occur in a projection of a plane rotated around
an axis other than the z axis. Usually, however, projective transformations are determined from point
correspondences (see vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d, and
proj_match_points_ransac).
Matrices for rotation, scale, and translation can be constructed using the operators hom_mat3d_identity,
hom_mat3d_scale, hom_mat3d_rotate, hom_mat3d_translate and pose_to_hom_mat3d.
Note that for 3D transformations the x-axis represents the column axis and the y-axis represents the row axis (see
also camera_calibration), whereas in projective_trans_image the first row of HomMat2D contains
the transformation of the row axis and the second row contains the transformation of the column axis of the image.
The point (PrincipalPointRow, PrincipalPointCol) is the principal point of the projection and the
point (PrincipalPointRow, PrincipalPointCol, 0) can thus be interpreted as the position of the camera
in a virtual three-dimensional space. The direction of view is along the positive z-axis.
In this virtual space the plane containing the input image as well as the image plane are located at z = Focus,
which is Focus pixels away from the camera. As a result, using the identity matrix as the input matrix HomMat3D
leads to a matrix HomMat2D which also represents the identity in 2D.
Consequently, the parameter Focus is the “focal distance” of the virtual camera used and its unit is pixels. Its
value influences the degree of perspective distortions. The same input matrix at a bigger focal distance results in
weaker distortions than at a low focal distance.
Let H be the affine 3D matrix with elements hij , (r, c) = (PrincipalPointRow, PrincipalPointCol)
and f = Focus.
Then the projective transformation matrix is calculated as follows: First, a 3×4 projection matrix is calculated as:

\[
Q = \begin{pmatrix} f & 0 & c \\ 0 & f & r \\ 0 & 0 & 1 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 0 & 0 & -c \\ 0 & 1 & 0 & -r \\ 0 & 0 & 1 & 0 \end{pmatrix} \cdot
\begin{pmatrix} h_{11} & h_{12} & h_{13} & h_{14} \\ h_{21} & h_{22} & h_{23} & h_{24} \\ h_{31} & h_{32} & h_{33} & h_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]

Since the image of a plane containing points (x, y, f, 1)^T is to be calculated, the last two columns of Q can be
joined:
 
    1 0 0
r11 r12 r13 q11 q12 f · q13 + q14  0 1 0 
R =  r21 r22 r23  =  q21 q22 f · r23 + q24  =Q· 
 0

0 f 
r31 r32 r33 q31 q32 f · r33 + q34
0 0 1

Finally, the columns and rows of R are swapped in a way that the first row of P contains the transformation of the
row coordinates and the second row contains the transformation of the column coordinates so that P can be used
directly in projective_trans_image:
   
\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot R \cdot
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

The overall transformation can be written as:


 
      1 0 0  
0 1 0 f 0 c 1 0 0 −c  0 0 1 0
1 0 
P = 1 0 0 · 0 f r · 0 1 0 −r  · H · 
 0
·  1 0 0 
0 f 
0 0 1 0 0 1 0 0 1 0 0 0 1
0 0 1
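As a cross-check of the formulas above, the following plain C sketch (not HALCON code; the principal point and focal length are invented values) multiplies out the chain for HomMat3D = identity. The result is f times the 2D identity, i.e., the identity as a projective transformation, as stated above.

#include <stdio.h>

/* multiply an (n x m) matrix A by an (m x p) matrix B; all matrices row by row */
static void matmul(int n, int m, int p, const double *A, const double *B, double *C)
{
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < p; j++)
        {
            double s = 0.0;
            for (k = 0; k < m; k++)
                s += A[i * m + k] * B[k * p + j];
            C[i * p + j] = s;
        }
}

int main(void)
{
    /* invented inputs: HomMat3D = identity, principal point (r, c), focal length f */
    double r = 240.0, c = 320.0, f = 256.0;
    double H4[16] = { 1, 0, 0, 0,   0, 1, 0, 0,   0, 0, 1, 0,   0, 0, 0, 1 };

    double K[9]  = { f, 0, c,   0, f, r,   0, 0, 1 };            /* virtual camera          */
    double T[12] = { 1, 0, 0, -c,   0, 1, 0, -r,   0, 0, 1, 0 }; /* principal point -> origin */
    double E[12] = { 1, 0, 0,   0, 1, 0,   0, 0, f,   0, 0, 1 }; /* restrict to plane z = f */
    double S[9]  = { 0, 1, 0,   1, 0, 0,   0, 0, 1 };            /* swap row/column         */

    double KT[12], Q[12], R[9], SR[9], P[9];
    int i;

    matmul(3, 3, 4, K, T, KT);   /* K * T             */
    matmul(3, 4, 4, KT, H4, Q);  /* Q = K * T * H     */
    matmul(3, 4, 3, Q, E, R);    /* R = Q * E         */
    matmul(3, 3, 3, S, R, SR);   /* swap rows         */
    matmul(3, 3, 3, SR, S, P);   /* swap columns -> P */

    /* for the identity input, P is f times the 2D identity */
    for (i = 0; i < 9; i++)
        printf("%g%c", P[i], (i % 3 == 2) ? '\n' : ' ');
    return 0;
}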

Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
3 × 4 3D transformation matrix.
. PrincipalPointRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Row coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointRow ∈ {16, 32, 64, 128, 240, 256, 512}
. PrincipalPointCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Column coordinate of the principal point.
Default Value : 256
Suggested values : PrincipalPointCol ∈ {16, 32, 64, 128, 256, 320, 512}
. Focus (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Focal length in pixels.
Default Value : 256
Suggested values : Focus ∈ {1, 2, 5, 256, 32768}
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
Parallelization Information
hom_mat3d_project is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_rotate, hom_mat3d_translate, hom_mat3d_scale
Possible Successors
projective_trans_image, projective_trans_point_2d, projective_trans_region,
projective_trans_contour_xld, hom_mat2d_invert
Module
Foundation

T_hom_vector_to_proj_hom_mat2d ( const Htuple Px, const Htuple Py,
const Htuple Pw, const Htuple Qx, const Htuple Qy, const Htuple Qw,
const Htuple Method, Htuple *HomMat2D )

Compute a homogeneous transformation matrix using given point correspondences.


hom_vector_to_proj_hom_mat2d determines the homogeneous projective transformation matrix
HomMat2D that optimally fulfills the following equations given by at least 4 point correspondences
  
\[
\mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Px} \\ \mathrm{Py} \\ \mathrm{Pw} \end{pmatrix}
= \begin{pmatrix} \mathrm{Qx} \\ \mathrm{Qy} \\ \mathrm{Qw} \end{pmatrix}
\]

If fewer than 4 pairs of points (Px, Py, Pw), (Qx, Qy, Qw) are given, there exists no unique solution; if exactly 4
pairs are supplied, the matrix HomMat2D transforms them in exactly the desired way; and if more than
4 point pairs are given, hom_vector_to_proj_hom_mat2d seeks to minimize the transformation error. To
achieve such a minimization, two different algorithms are available. The algorithm to use can be chosen using the
parameter Method. For conventional geometric problems Method=’normalized_dlt’ usually yields better results.
However, if one of the coordinates Qw or Pw equals 0, Method=’dlt’ must be chosen.
In contrast to vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d uses homogeneous
coordinates for the points, and hence points at infinity (Pw = 0 or Qw = 0) can be used to determine the transforma-
tion. If finite points are used, typically Pw and Qw are set to 1. In this case, vector_to_proj_hom_mat2d can
also be used. vector_to_proj_hom_mat2d has the advantage that one additional optimization method can
be used and that the covariances of the points can be taken into account. If the correspondence between the points
has not been determined, proj_match_points_ransac should be used to determine the correspondence as
well as the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (w coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (x coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (y coordinate).
. Qw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (w coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm.
Default Value : "normalized_dlt"
List of values : Method ∈ {"normalized_dlt", "dlt"}
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
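Example (Syntax: C)
A minimal sketch with four invented finite point correspondences (Pw = Qw = 1). It assumes the HALCON/C header HalconC.h and the tuple helpers create_tuple, set_d, set_s, get_d, and destroy_tuple; apart from these assumptions only the signature documented above is used.

#include <stdio.h>
#include "HalconC.h"

int main(void)
{
    /* four invented correspondences; row coordinates go into Px/Qx and
       column coordinates into Py/Qy, as described above */
    double pr[4] = { 0, 0, 480, 480 },  pc[4] = { 0, 640, 640, 0 };
    double qr[4] = { 12, 8, 470, 465 }, qc[4] = { 15, 630, 618, 22 };
    char   method_name[] = "normalized_dlt";
    Htuple Px, Py, Pw, Qx, Qy, Qw, Method, HomMat2D;
    Hlong  i;

    create_tuple(&Px, 4);  create_tuple(&Py, 4);  create_tuple(&Pw, 4);
    create_tuple(&Qx, 4);  create_tuple(&Qy, 4);  create_tuple(&Qw, 4);
    for (i = 0; i < 4; i++)
    {
        set_d(Px, pr[i], i);  set_d(Py, pc[i], i);  set_d(Pw, 1.0, i);
        set_d(Qx, qr[i], i);  set_d(Qy, qc[i], i);  set_d(Qw, 1.0, i);
    }
    create_tuple(&Method, 1);
    set_s(Method, method_name, 0);

    T_hom_vector_to_proj_hom_mat2d(Px, Py, Pw, Qx, Qy, Qw, Method, &HomMat2D);

    /* HomMat2D holds the nine elements of the 3 x 3 matrix, row by row */
    for (i = 0; i < 9; i++)
        printf("%g ", get_d(HomMat2D, i));
    printf("\n");

    destroy_tuple(Px);  destroy_tuple(Py);  destroy_tuple(Pw);
    destroy_tuple(Qx);  destroy_tuple(Qy);  destroy_tuple(Qw);
    destroy_tuple(Method);  destroy_tuple(HomMat2D);
    return 0;
}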

Parallelization Information
hom_vector_to_proj_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
vector_to_proj_hom_mat2d, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Calibration

T_proj_match_points_ransac ( const Hobject Image1,
const Hobject Image2, const Htuple Rows1, const Htuple Cols1,
const Htuple Rows2, const Htuple Cols2, const Htuple GrayMatchMethod,
const Htuple MaskSize, const Htuple RowMove, const Htuple ColMove,
const Htuple RowTolerance, const Htuple ColTolerance,
const Htuple Rotation, const Htuple MatchThreshold,
const Htuple EstimationMethod, const Htuple DistanceThreshold,
const Htuple RandSeed, Htuple *HomMat2D, Htuple *Points1,
Htuple *Points2 )

Compute a projective transformation matrix between two images by finding correspondences between points.
Given a set of coordinates of characteristic points (Cols1, Rows1) and (Cols2, Rows2) in both input images
Image1 and Image2, proj_match_points_ransac automatically determines corresponding points and
the homogeneous projective transformation matrix HomMat2D that best transforms the corresponding points
from the different images into each other. The characteristic points can, for example, be extracted with
points_foerstner or points_harris.
The transformation is determined in two steps: First, gray value correlations of mask windows around the input
points in the first and the second image are determined and an initial matching between them is generated using
the similarity of the windows in both images.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found in this way is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the algorithm’s performance, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the transformation contains a rotation, i.e., if the first image is rotated with respect to the second image, the
parameter Rotation may contain an estimate for the rotation angle or an angle interval in radians. A good guess
will increase the quality of the gray value matching. If the actual rotation differs too much from the specified
estimate the matching will typically fail. The larger the given interval, the slower the operator is since the entire
algorithm is run for all relevant angles within the interval.
Once the initial matching is complete, a randomized search algorithm (RANSAC) is used to determine the transfor-
mation matrix HomMat2D. It tries to find the matrix that is consistent with a maximum number of correspondences.

For a point to be accepted, its distance from the coordinates predicted by the transformation must not exceed the
threshold DistanceThreshold.
Once a choice has been made, the matrix is further optimized using all consistent points. For this optimization, the
EstimationMethod can be chosen to either be the slow but mathematically optimal ’gold_standard’ method
or the faster ’normalized_dlt’. Here, the algorithms of vector_to_proj_hom_mat2d are used.
Point pairs that still violate the consistency condition for the final transformation are dropped; the matched points
are returned as control values. Points1 contains the indices of the matched input points from the first image,
Points2 contains the indices of the corresponding points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If RandSeed is set to a positive number, the operator yields the same result on every
call with the same parameters because the internally used random number generator is initialized with the seed
value. If RandSeed = 0, the random number generator is initialized with the current time. Hence, the results
may not be reproducible in this case.
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 1.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 1.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 2.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 2.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of gray value masks.
Default Value : 10
Typical range of values : MaskSize ≤ 90
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average row coordinate shift.
Default Value : 0
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average column coordinate shift.
Default Value : 0
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half height of matching search window.
Default Value : 256
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half width of matching search window.
Default Value : 256
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Range of rotation angles.
Default Value : 0.0
Suggested values : Rotation ∈ {0.0, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Threshold for gray value matching.
Default Value : 10
Suggested values : MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}

. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Transformation matrix estimation algorithm.
Default Value : "normalized_dlt"
List of values : EstimationMethod ∈ {"normalized_dlt", "gold_standard"}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Threshold for transformation consistency check.
Default Value : 0.2
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Seed for the random number generator.
Default Value : 0
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 2.
Parallelization Information
proj_match_points_ransac is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
hom_vector_to_proj_hom_mat2d, vector_to_proj_hom_mat2d
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching

T_projective_trans_pixel ( const Htuple HomMat2D, const Htuple Row,
const Htuple Col, Htuple *RowTrans, Htuple *ColTrans )

Project pixel coordinates using a homogeneous projective transformation matrix.


projective_trans_pixel applies the homogeneous projective transformation matrix HomMat2D to all in-
put pixels (Row,Col) and returns an array of output pixels (RowTrans,ColTrans). The transformation is
described by the homogeneous transformation matrix given in HomMat2D.
The difference between projective_trans_pixel and projective_trans_point_2d lies in the
coordinate system used: projective_trans_pixel uses a coordinate system with its origin in the upper
left corner of the image, while projective_trans_point_2d uses the standard image coordinate system,
whose origin lies in the middle of the upper left pixel and which is also used by operators like area_center.
projective_trans_pixel corresponds to the following steps (input and output points as homogeneous vec-
tors):

   
\[
\begin{pmatrix} \mathit{RTrans} \\ \mathit{CTrans} \\ \mathit{WTrans} \end{pmatrix}
= \mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Row} \\ \mathrm{Col} \\ 1 \end{pmatrix}
\]
\[
\begin{pmatrix} \mathrm{RowTrans} \\ \mathrm{ColTrans} \end{pmatrix}
= \begin{pmatrix} \mathit{RTrans} / \mathit{WTrans} \\ \mathit{CTrans} / \mathit{WTrans} \end{pmatrix}
\]

If a point at infinity (WTrans = 0) is created by the transformation, an error is returned.
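The two steps above can be written out directly. The following plain C sketch (not HALCON code; the matrix is an invented example) applies a full 3 × 3 projective matrix, stored row by row, to a single pixel and reports the point-at-infinity case.

#include <stdio.h>

/* apply a 3x3 projective matrix (stored row by row) to one pixel;
   returns 0 on success, -1 if the result is a point at infinity */
static int proj_trans_pixel(const double H[9], double row, double col,
                            double *row_trans, double *col_trans)
{
    double r = H[0] * row + H[1] * col + H[2];
    double c = H[3] * row + H[4] * col + H[5];
    double w = H[6] * row + H[7] * col + H[8];
    if (w == 0.0)
        return -1;              /* point at infinity -> error case */
    *row_trans = r / w;
    *col_trans = c / w;
    return 0;
}

int main(void)
{
    /* invented projective matrix: identity plus a small perspective term */
    double H[9] = { 1, 0, 0,   0, 1, 0,   0.001, 0, 1 };
    double rt, ct;
    if (proj_trans_pixel(H, 64, 64, &rt, &ct) == 0)
        printf("(%g, %g)\n", rt, ct);
    return 0;
}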


Parameter

. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double / Hlong
Input pixel(s) (row coordinate).
Default Value : 64
Suggested values : Row ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double / Hlong
Input pixel(s) (column coordinate).
Default Value : 64
Suggested values : Col ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. RowTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; Htuple . double *
Output pixel(s) (row coordinate).
. ColTrans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; Htuple . double *
Output pixel(s) (column coordinate).
Parallelization Information
projective_trans_pixel is reentrant and processed without parallelization.
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d
Module
Foundation

T_projective_trans_point_2d ( const Htuple HomMat2D, const Htuple Px,
const Htuple Py, const Htuple Pw, Htuple *Qx, Htuple *Qy, Htuple *Qw )

Project a homogeneous 2D point using a projective transformation matrix.


projective_trans_point_2d applies the homogeneous projective transformation matrix HomMat2D to
all homogeneous input points (Px,Py,Pw) and returns an array of homogeneous output points (Qx,Qy,Qw). The
transformation is described by the homogeneous transformation matrix given in HomMat2D. This corresponds to
the following equation (input and output points as homogeneous vectors):
   
\[
\begin{pmatrix} \mathrm{Qx} \\ \mathrm{Qy} \\ \mathrm{Qw} \end{pmatrix}
= \mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Px} \\ \mathrm{Py} \\ \mathrm{Pw} \end{pmatrix}
\]

To transform the homogeneous coordinates to Euclidean coordinates, they have to be divided by Qw:
\[
\begin{pmatrix} \mathrm{Ex} \\ \mathrm{Ey} \end{pmatrix}
= \begin{pmatrix} \mathrm{Qx} / \mathrm{Qw} \\ \mathrm{Qy} / \mathrm{Qw} \end{pmatrix}
\]

If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.

Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (w coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (y coordinate).
. Qw (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (w coordinate).
Parallelization Information
projective_trans_point_2d is reentrant and processed without parallelization.
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_pixel
Module
Foundation

T_vector_angle_to_rigid ( const Htuple Row1, const Htuple Column1,
const Htuple Angle1, const Htuple Row2, const Htuple Column2,
const Htuple Angle2, Htuple *HomMat2D )

Compute a rigid affine transformation from points and angles.


vector_angle_to_rigid computes a rigid affine transformation, i.e., a transformation consisting of a rota-
tion and a translation, from a point correspondence and two corresponding angles and returns it as the homogeneous
transformation matrix HomMat2D. The matrix consists of 2 components: a rotation matrix R and a translation vec-
tor t (also see hom_mat2d_rotate and hom_mat2d_translate):
   
\[
\mathrm{HomMat2D} = \begin{pmatrix} R & t \\ \mathbf{0}^\top & 1 \end{pmatrix}
= \begin{pmatrix} I & t \\ \mathbf{0}^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} R & \mathbf{0} \\ \mathbf{0}^\top & 1 \end{pmatrix}
= H(t) \cdot H(R)
\]

The coordinates of the original point are passed in (Row1,Column1), while the corresponding angle is passed
in Angle1. The coordinates of the transformed point are passed in (Row2,Column2), while the corresponding
angle is passed in Angle2. The following equation describes the transformation of the point using homogeneous
vectors:
   
\[
\begin{pmatrix} \mathrm{Row2} \\ \mathrm{Column2} \\ 1 \end{pmatrix}
= \mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Row1} \\ \mathrm{Column1} \\ 1 \end{pmatrix}
\]

In particular, the operator vector_angle_to_rigid is useful to construct a rigid affine transformation from
the results of the matching operators (e.g., find_shape_model or best_match_rot_mg), which trans-
forms a reference image to the current image or (if the parameters are passed in reverse order) from the current
image to the reference image.
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.

Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Row coordinate of the original point.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Column coordinate of the original point.
. Angle1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Angle of the original point.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Row coordinate of the transformed point.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Column coordinate of the transformed point.
. Angle2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Angle of the transformed point.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Example (Syntax: HDevelop)

draw_rectangle2 (WindowID, RowTempl, ColumnTempl, PhiTempl, Length1, Length2)
gen_rectangle2 (Rectangle, RowTempl, ColumnTempl, PhiTempl, Length1, Length2)
reduce_domain (ImageTempl, Rectangle, ImageReduced)
create_template_rot (ImageReduced, 4, 0, rad(360), rad(1), ’sort’,
’original’, TemplateID)
while (true)
best_match_rot_mg (Image, TemplateID, 0, rad(360), 30, ’true’, 4, Row,
Column, Angle, ErrMatch)
if (ErrMatch<255)
vector_angle_to_rigid (Row, Column, Angle, RowTempl,
ColumnTempl, 0, HomMat2D)
affine_trans_image (Image, ImageAffinTrans, HomMat2D, ’constant’,
’false’)
endif
endwhile
clear_template (TemplateID)

Parallelization Information
vector_angle_to_rigid is reentrant and processed without parallelization.
Possible Predecessors
best_match_rot_mg, best_match_rot
Possible Successors
hom_mat2d_invert, affine_trans_image, affine_trans_region,
affine_trans_contour_xld, affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation

T_vector_field_to_hom_mat2d ( const Hobject VectorField,
Htuple *HomMat2D )

Approximate an affine map from a displacement vector field.


vector_field_to_hom_mat2d approximates an affine map from the displacement vector field
VectorField. The affine map is returned in HomMat2D.

If the displacement vector field has been computed from the original image I_orig and the second image I_res, the
internally stored transformation matrix (see affine_trans_image) contains a map that describes how to
transform the first image I_orig to the second image I_res.
Parameter
. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : vector_field
Input image.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Parallelization Information
vector_field_to_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
optical_flow_mg
Possible Successors
affine_trans_image
Alternatives
vector_to_hom_mat2d
Module
Foundation

T_vector_to_hom_mat2d ( const Htuple Px, const Htuple Py,
const Htuple Qx, const Htuple Qy, Htuple *HomMat2D )

Approximate an affine transformation from point correspondences.


vector_to_hom_mat2d approximates an affine transformation from at least three point correspondences and
returns it as the homogeneous transformation matrix HomMat2D (see hom_mat2d_to_affine_par for the
content of the homogeneous transformation matrix).
The point correspondences are passed in the tuples (Px,Py) and (Qx,Qy), where corresponding points must be at
the same index positions in the tuples. If more than three point correspondences are passed the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the input points (Px,Py) and the transformed points (Qx,Qy), as described in the following equation
(points as homogeneous vectors):
    2
X Qx[i] Px[i]


 Qy[i]  − HomMat2D ·  Py[i]  = minimum

i 1 1

HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
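The quantity that is minimized can be evaluated explicitly. The following plain C helper (a sketch, not part of the HALCON API; the point data are invented) computes the sum of squared residuals from the equation above for an affine matrix stored as the 6-tuple [ra, rb, tc, rd, re, tf]; the matrix returned by vector_to_hom_mat2d minimizes exactly this quantity.

#include <stdio.h>

/* sum of squared residuals of the minimization above; the affine matrix is
   given as the 6-tuple [ra, rb, tc, rd, re, tf] */
static double affine_fit_error(const double h[6],
                               const double *px, const double *py,
                               const double *qx, const double *qy, int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
    {
        double dx = qx[i] - (h[0] * px[i] + h[1] * py[i] + h[2]);
        double dy = qy[i] - (h[3] * px[i] + h[4] * py[i] + h[5]);
        sum += dx * dx + dy * dy;
    }
    return sum;
}

int main(void)
{
    /* invented data: three points and the exact translation that maps them */
    double px[3] = { 0, 1, 0 }, py[3] = { 0, 0, 1 };
    double qx[3] = { 5, 6, 5 }, qy[3] = { 7, 7, 8 };
    double h[6]  = { 1, 0, 5,   0, 1, 7 };
    printf("error = %g\n", affine_fit_error(h, px, py, qx, qy, 3));  /* prints 0 */
    return 0;
}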
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the transformed points.

. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Parallelization Information
vector_to_hom_mat2d is reentrant and processed without parallelization.
Possible Successors
affine_trans_image
Alternatives
vector_field_to_hom_mat2d
See also
affine_trans_image, optical_flow_mg
Module
Foundation

T_vector_to_proj_hom_mat2d ( const Htuple Px, const Htuple Py,
const Htuple Qx, const Htuple Qy, const Htuple Method,
const Htuple CovXX1, const Htuple CovYY1, const Htuple CovXY1,
const Htuple CovXX2, const Htuple CovYY2, const Htuple CovXY2,
Htuple *HomMat2D, Htuple *Covariance )

Compute a projective transformation matrix using given point correspondences.


vector_to_proj_hom_mat2d determines the homogeneous projective transformation matrix HomMat2D
that optimally fulfills the following equations given by at least 4 point correspondences
  
\[
\mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Px} \\ \mathrm{Py} \\ 1 \end{pmatrix}
= \begin{pmatrix} \mathrm{Qx} \\ \mathrm{Qy} \\ 1 \end{pmatrix} .
\]

If fewer than 4 pairs of points (Px,Py), (Qx,Qy) are given, there exists no unique solution; if exactly 4 pairs
are supplied, the matrix HomMat2D transforms them in exactly the desired way; and if more than 4
point pairs are given, vector_to_proj_hom_mat2d seeks to minimize the transformation error. To achieve
such a minimization, several different algorithms are available. The algorithm to use can be chosen using
the parameter Method. Method=’dlt’ uses a fast and simple, but also rather inaccurate error estimation al-
gorithm while Method=’normalized_dlt’ offers a good compromise between speed and accuracy. Finally,
Method=’gold_standard’ performs a mathematically optimal but slower optimization.
If ’gold_standard’ is used and the input points have been obtained from an operator like points_foerstner,
which provides a covariance matrix for each of the points, which specifies the accuracy of the points, this can be
taken into account by using the input parameters CovYY1, CovXX1, CovXY1 for the points in the first image and
CovYY2, CovXX2, CovXY2 for the points in the second image. The covariances are symmetric 2 × 2 matrices.
CovXX1/CovXX2 and CovYY1/CovYY2 are lists of the diagonal entries, while CovXY1/CovXY2 contain the
off-diagonal entry, which appears twice in a symmetric matrix. If a Method other than ’gold_standard’ is used or
the covariances are unknown, the covariance parameters can be left empty.
In contrast to hom_vector_to_proj_hom_mat2d, points at infinity cannot be used to
determine the transformation in vector_to_proj_hom_mat2d. If this is necessary,
hom_vector_to_proj_hom_mat2d must be used. If the correspondence between the points has not
been determined, proj_match_points_ransac should be used to determine the correspondence as well as
the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Input points in image 1 (row coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Input points in image 1 (column coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Input points in image 2 (row coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Input points in image 2 (column coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm.
Default Value : "normalized_dlt"
List of values : Method ∈ {"normalized_dlt", "gold_standard", "dlt"}
. CovXX1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate variance of the points in image 1.
Default Value : []
. CovYY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate variance of the points in image 1.
Default Value : []
. CovXY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Covariance of the points in image 1.
Default Value : []
. CovXX2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate variance of the points in image 2.
Default Value : []
. CovYY2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate variance of the points in image 2.
Default Value : []
. CovXY2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Covariance of the points in image 2.
Default Value : []
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
. Covariance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the projective transformation matrix.
Parallelization Information
vector_to_proj_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
hom_vector_to_proj_hom_mat2d, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.

Module
Calibration

T_vector_to_rigid ( const Htuple Px, const Htuple Py, const Htuple Qx,
const Htuple Qy, Htuple *HomMat2D )

Approximate a rigid affine transformation from point correspondences.


vector_to_rigid approximates a rigid affine transformation, i.e., a transformation consisting of a rotation
and a translation, from at least two point correspondences and returns it as the homogeneous transformation ma-
trix HomMat2D. The matrix consists of 2 components: a rotation matrix R and a translation vector t (also see
hom_mat2d_rotate and hom_mat2d_translate):
   
\[
\mathrm{HomMat2D} = \begin{pmatrix} R & t \\ \mathbf{0}^\top & 1 \end{pmatrix}
= \begin{pmatrix} I & t \\ \mathbf{0}^\top & 1 \end{pmatrix} \cdot
\begin{pmatrix} R & \mathbf{0} \\ \mathbf{0}^\top & 1 \end{pmatrix}
= H(t) \cdot H(R)
\]

The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. The transformation is always overdetermined. Therefore, the returned
transformation is the transformation that minimizes the distances between the original points (Px,Py) and the
transformed points (Qx,Qy), as described in the following equation (points as homogeneous vectors):

    2
X Qx[i] Px[i]


 Qy[i]  − HomMat2D ·  Py[i]  = minimum

i 1 1

HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
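To make the minimization concrete, the following plain C sketch implements one standard closed-form solution of this least-squares problem (a 2D Procrustes fit: the rotation angle is obtained from the centered point sets, the translation from the centroids). It is only an illustration with invented data, not necessarily the algorithm used internally by vector_to_rigid; the result is written as the 6-tuple [cos(phi), -sin(phi), tx, sin(phi), cos(phi), ty].

#include <math.h>
#include <stdio.h>

/* least-squares rigid fit: finds an angle and translation such that
   q[i] ~ R * p[i] + t, and writes the result as a 6-tuple */
static void fit_rigid_2d(const double *px, const double *py,
                         const double *qx, const double *qy, int n, double h[6])
{
    double pcx = 0, pcy = 0, qcx = 0, qcy = 0, a = 0, b = 0, ang, c, s;
    int i;
    for (i = 0; i < n; i++) { pcx += px[i]; pcy += py[i]; qcx += qx[i]; qcy += qy[i]; }
    pcx /= n;  pcy /= n;  qcx /= n;  qcy /= n;            /* centroids */
    for (i = 0; i < n; i++)
    {
        double ux = px[i] - pcx, uy = py[i] - pcy;
        double vx = qx[i] - qcx, vy = qy[i] - qcy;
        a += ux * vx + uy * vy;                           /* "cos" accumulator */
        b += ux * vy - uy * vx;                           /* "sin" accumulator */
    }
    ang = atan2(b, a);
    c = cos(ang);  s = sin(ang);
    h[0] = c;  h[1] = -s;  h[2] = qcx - (c * pcx - s * pcy);   /* t = q_c - R * p_c */
    h[3] = s;  h[4] =  c;  h[5] = qcy - (s * pcx + c * pcy);
}

int main(void)
{
    /* two exact correspondences: a 90 degree rotation plus a shift */
    double px[2] = { 0, 1 }, py[2] = { 0, 0 };
    double qx[2] = { 5, 5 }, qy[2] = { 7, 8 };
    double h[6];
    fit_rigid_2d(px, py, qx, qy, 2, h);
    printf("[%g %g %g %g %g %g]\n", h[0], h[1], h[2], h[3], h[4], h[5]);
    return 0;
}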
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter

. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the transformed points.
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
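A minimal HDevelop-style sketch (the point coordinates are made up for this illustration; the points Q are the points P translated by (5,3), so the estimated transformation is a pure translation):

* estimate a rigid transformation from three point correspondences
vector_to_rigid ([0,0,1], [0,1,0], [5,5,6], [3,4,3], HomMat2D)
* apply the estimated transformation to further points
affine_trans_point_2d (HomMat2D, [10,20], [10,20], QxTrans, QyTrans)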
Parallelization Information
vector_to_rigid is reentrant and processed without parallelization.
Possible Successors
affine_trans_image, affine_trans_region, affine_trans_contour_xld,
affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_hom_mat2d, vector_to_similarity


See also
vector_field_to_hom_mat2d
Module
Foundation

T_vector_to_similarity ( const Htuple Px, const Htuple Py,
const Htuple Qx, const Htuple Qy, Htuple *HomMat2D )

Approximate a similarity transformation from point correspondences.


vector_to_similarity approximates a similarity transformation, i.e., a transformation consisting of a uni-
form scaling, a rotation, and a translation, from at least two point correspondences and returns it as the homoge-
neous transformation matrix HomMat2D. The matrix consists of 3 components: a scaling matrix S with identical
scaling in the x and y direction, a rotation matrix R, and a translation vector t (also see hom_mat2d_scale,
hom_mat2d_rotate, and hom_mat2d_translate):

\mathrm{HomMat2D} = \begin{pmatrix} R \cdot S & t \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix} = H(t) \cdot H(R) \cdot H(S)

The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. If more than two point correspondences are passed the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the original points (Px,Py) and the transformed points (Qx,Qy), as described in the following equation
(points as homogeneous vectors):

\sum_i \left\| \begin{pmatrix} \mathrm{Qx}[i] \\ \mathrm{Qy}[i] \\ 1 \end{pmatrix} - \mathrm{HomMat2D} \cdot \begin{pmatrix} \mathrm{Px}[i] \\ \mathrm{Py}[i] \\ 1 \end{pmatrix} \right\|^2 = \text{minimum}

HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the transformed points.
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
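A minimal HDevelop-style sketch (the point coordinates are made up for this illustration; the points Q are the points P scaled by 2 and translated by (5,3)):

* estimate a similarity transformation from three point correspondences
vector_to_similarity ([0,1,0], [0,0,1], [5,7,5], [3,3,5], HomMat2D)
* apply the estimated transformation to further points
affine_trans_point_2d (HomMat2D, [2,4], [2,4], QxTrans, QyTrans)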
Parallelization Information
vector_to_similarity is reentrant and processed without parallelization.
Possible Successors
affine_trans_image, affine_trans_region, affine_trans_contour_xld,
affine_trans_polygon_xld, affine_trans_point_2d


Alternatives
vector_to_hom_mat2d, vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation

15.2 3D-Transformations
T_affine_trans_point_3d ( const Htuple HomMat3D, const Htuple Px,
const Htuple Py, const Htuple Pz, Htuple *Qx, Htuple *Qy, Htuple *Qz )

Apply an arbitrary affine 3D transformation to points.


affine_trans_point_3d applies an arbitrary affine 3D transformation, i.e., scaling, rotation, and translation,
to the input points (Px,Py,Pz) and returns the resulting points in (Qx, Qy,Qz). The affine transformation is
described by the homogeneous transformation matrix given in HomMat3D. This corresponds to the following
equation (input and output points as homogeneous vectors):

\begin{pmatrix} \mathrm{Qx} \\ \mathrm{Qy} \\ \mathrm{Qz} \\ 1 \end{pmatrix} = \mathrm{HomMat3D} \cdot \begin{pmatrix} \mathrm{Px} \\ \mathrm{Py} \\ \mathrm{Pz} \\ 1 \end{pmatrix}

The transformation matrix can be created using the operators hom_mat3d_identity, hom_mat3d_scale,
hom_mat3d_rotate, hom_mat3d_translate, etc., or be the result of pose_to_hom_mat3d.
For example, if HomMat3D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:

\begin{pmatrix} \mathrm{Qx} \\ \mathrm{Qy} \\ \mathrm{Qz} \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} \mathrm{Px} \\ \mathrm{Py} \\ \mathrm{Pz} \\ 1 \end{pmatrix} = \begin{pmatrix} R \cdot (\mathrm{Px}, \mathrm{Py}, \mathrm{Pz})^T + t \\ 1 \end{pmatrix}

Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double


Input transformation matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x(-array) ; Htuple . double / Hlong
Input point(s) (x coordinate).
Default Value : 64
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y(-array) ; Htuple . double / Hlong
Input point(s) (y coordinate).
Default Value : 64
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z(-array) ; Htuple . double / Hlong
Input point(s) (z coordinate).
Default Value : 64
Suggested values : Pz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x(-array) ; Htuple . double *
Output point(s) (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y(-array) ; Htuple . double *
Output point(s) (y coordinate).
. Qz (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z(-array) ; Htuple . double *
Output point(s) (z coordinate).
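A minimal HDevelop-style sketch (illustrative values only: a rotation of 90 degrees around the z-axis followed by a translation):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_rotate (HomMat3DIdentity, rad(90), ’z’, 0, 0, 0, HomMat3DRotate)
hom_mat3d_translate (HomMat3DRotate, 1, 2, 3, HomMat3D)
* the point (1,0,0) is rotated to (0,1,0) and then translated to (1,3,3)
affine_trans_point_3d (HomMat3D, 1, 0, 0, Qx, Qy, Qz)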


Result
If the parameters are valid, the operator affine_trans_point_3d returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
affine_trans_point_3d is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Module
Foundation

T_convert_pose_type ( const Htuple PoseIn,
const Htuple OrderOfTransform, const Htuple OrderOfRotation,
const Htuple ViewOfTransform, Htuple *PoseOut )

Change the representation type of a 3D pose.


convert_pose_type converts the 3D pose PoseIn into a 3D pose PoseOut with a different representation
type. See create_pose for details about 3D poses, their representation types, and the meaning of the parameters
OrderOfTransform, OrderOfRotation, and ViewOfTransform.
Note that convert_pose_type only changes the representation of a 3D pose, but not the rigid transformation
described by the pose.
Parameter

. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong


Original 3D pose.
Number of elements : 7
. OrderOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Order of rotation and translation.
Default Value : "Rp+T"
Suggested values : OrderOfTransform ∈ {"Rp+T", "R(p-T)"}
. OrderOfRotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Meaning of the rotation values.
Default Value : "gba"
Suggested values : OrderOfRotation ∈ {"gba", "abg", "rodriguez"}
. ViewOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
View of transformation.
Default Value : "point"
Suggested values : ViewOfTransform ∈ {"point", "coordinate_system"}
. PoseOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D transformation.
Number of elements : 7
Example (Syntax: HDevelop)

* get pose (exterior camera parameters):


read_pose (’campose.dat’, Pose)
* convert pose to a pose with desired semantic
convert_pose_type (Pose, ’Rp+T’, ’abg’, ’point’, Pose2)

Result
convert_pose_type returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.


Parallelization Information
convert_pose_type is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
write_pose
See also
create_pose, get_pose_type, write_pose, read_pose
Module
Foundation

T_create_pose ( const Htuple TransX, const Htuple TransY,
const Htuple TransZ, const Htuple RotX, const Htuple RotY,
const Htuple RotZ, const Htuple OrderOfTransform,
const Htuple OrderOfRotation, const Htuple ViewOfTransform,
Htuple *Pose )

Create a 3D pose.
create_pose creates the 3D pose Pose. A pose describes a rigid 3D transformation, i.e., a transformation
consisting of an arbitrary translation and rotation, with 6 parameters: TransX, TransY, and TransZ specify the
translation along the x-, y-, and z-axis, respectively, while RotX, RotY, and RotZ describe the rotation.
3D poses are typically used in two ways: First, to describe the position and orientation of one coordinate system
relative to another (e.g., the pose of a part’s coordinate system relative to the camera coordinate system - in short:
the pose of the part relative to the camera) and secondly, to describe how coordinates can be transformed between
two coordinate systems (e.g., to transform points from part coordinates into camera coordinates).

Representation of orientation (rotation)


A 3D rotation around an arbitrary axis can be represented by 3 parameters in multiple ways. HALCON lets you
choose between three of them with the parameter OrderOfRotation: If you pass the value ’gba’, the rotation
is described by the following chain of rotations around the three axes (see hom_mat3d_rotate for the definition
of the rotation matrices Rx, Ry, and Rz):

Rgba = Rx (RotX) · Ry (RotY) · Rz (RotZ)

Please note that you can “read” this chain in two ways: If you start from the right, the rotations are always
performed relative to the global (i.e., fixed or “old”) coordinate system. Thus, Rgba can be read as follows: First
rotate around the z-axis, then around the “old” y-axis, and finally around the “old” x-axis. In contrast, if you read
from the left to the right, the rotations are performed relative to the local (i.e., “new”) coordinate system. Then,
Rgba corresponds to the following: First rotate around the x-axis, then around the “new” y-axis, and finally around
the “new(est)” z-axis.
Reading Rgba from right to left corresponds to the following sequence of operator calls:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, RotZ, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, RotY, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, RotX, ’x’, 0, 0, 0, HomMat3DXYZ)

In contrast, reading from left to right corresponds to the following operator sequence:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate_local (HomMat3DIdent, RotX, ’x’, 0, 0, 0,
HomMat3DRotX)
hom_mat3d_rotate_local (HomMat3DRotX, RotY, ’y’, 0, 0, 0,
HomMat3DRotXY)
hom_mat3d_rotate_local (HomMat3DRotXY, RotZ, ’z’, 0, 0, 0, HomMat3DXYZ)


When passing ’abg’ in OrderOfRotation, the rotation corresponds to the following chain:

Rabg = Rz (RotZ) · Ry (RotY) · Rx (RotX)

If you pass ’rodriguez’ in OrderOfRotation, the rotation parameters RotX, RotY, and RotZ are interpreted
as the x-, y-, and z-component of the so-called Rodriguez rotation vector. The direction of the vector defines the
(arbitrary) axis of rotation. The length of the vector usually defines the rotation angle with positive orientation.
Here, a variation of the Rodriguez vector is used, where the length of the vector defines the tangent of half the
rotation angle:

R_{\mathit{rodriguez}} = \text{rotation around the axis } \begin{pmatrix} \mathrm{RotX} \\ \mathrm{RotY} \\ \mathrm{RotZ} \end{pmatrix} \text{ by the angle } 2 \cdot \arctan \sqrt{\mathrm{RotX}^2 + \mathrm{RotY}^2 + \mathrm{RotZ}^2}

Corresponding homogeneous transformation matrix


You can obtain the homogeneous transformation matrix corresponding to a pose with the operator
pose_to_hom_mat3d. In the standard definition, this is the following homogeneous transformation matrix
which can be split into two separate matrices, one for the translation (H(t)) and one for the rotation (H(R)):

H_{\mathit{pose}} = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} R(\mathrm{RotX}, \mathrm{RotY}, \mathrm{RotZ}) & t \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & \mathrm{TransX} \\ 0 & 1 & 0 & \mathrm{TransY} \\ 0 & 0 & 1 & \mathrm{TransZ} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R(\mathrm{RotX}, \mathrm{RotY}, \mathrm{RotZ}) & 0 \\ 0 & 1 \end{pmatrix} = H(t) \cdot H(R)

Transformation of coordinates
The following equation describes how a point can be transformed from coordinate system 1 into coordinate system
2 with a pose, or more exactly, with the corresponding homogeneous transformation matrix {}^{2}\!H_{1} (input and output
points as homogeneous vectors, see also affine_trans_point_3d). Note that to transform points from
coordinate system 1 into system 2, you use the transformation matrix that describes the pose of coordinate system
1 relative to system 2.

\begin{pmatrix} p_2 \\ 1 \end{pmatrix} = {}^{2}\!H_{1} \cdot \begin{pmatrix} p_1 \\ 1 \end{pmatrix} = \begin{pmatrix} R(\mathrm{RotX}, \mathrm{RotY}, \mathrm{RotZ}) \cdot p_1 + (\mathrm{TransX}, \mathrm{TransY}, \mathrm{TransZ})^T \\ 1 \end{pmatrix}

This corresponds to the following operator calls:


pose_to_hom_mat3d(PoseOf1In2, HomMat3DFrom1In2)
affine_trans_point_3d(HomMat3DFrom1In2, P1X, P1Y, P1Z, P2X, P2Y, P2Z)

Non-standard pose definitions


So far, we described the standard pose definition. To create such poses, you select the (default) values ’Rp+T’
for the parameter OrderOfTransform and ’point’ for ViewOfTransform. By specifying other values for
these parameters, you can create non-standard pose types, which we describe briefly below. Please note that these
representation types are only supported for backwards compatibility; we strongly recommend using the standard
types.
If you select ’R(p-T)’ for OrderOfTransform, the created pose corresponds to the following chain of transfor-
mations, i.e., the sequence of rotation and translation is reversed and the translation is negated:

H_{R(p-T)} = \begin{pmatrix} R(\mathrm{RotX}, \mathrm{RotY}, \mathrm{RotZ}) & 0 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 & -\mathrm{TransX} \\ 0 & 1 & 0 & -\mathrm{TransY} \\ 0 & 0 & 1 & -\mathrm{TransZ} \\ 0 & 0 & 0 & 1 \end{pmatrix} = H(R) \cdot H(-t)

If you select ’coordinate_system’ for ViewOfTransform, the sequence of transformations remains constant,
but the rotation angles are negated. Please note that, contrary to its name, this is not equivalent to transforming a
coordinate system!

H_{\mathit{coordinate\_system}} = \begin{pmatrix} 1 & 0 & 0 & \mathrm{TransX} \\ 0 & 1 & 0 & \mathrm{TransY} \\ 0 & 0 & 1 & \mathrm{TransZ} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R(-\mathrm{RotX}, -\mathrm{RotY}, -\mathrm{RotZ}) & 0 \\ 0 & 1 \end{pmatrix}

Returned data structure


The created 3D pose is returned in Pose which is a tuple of length seven. The first three elements hold the
translation parameters TransX, TransY, and TransZ, followed by the rotation parameters RotX, RotY,
and RotZ. The last element codes the representation type of the pose that you selected with the parameters
OrderOfTransform, OrderOfRotation, and ViewOfTransform. The following table lists the possible
combinations. As already noted, we recommend using only the representation types with OrderOfTransform
= ’Rp+T’ and ViewOfTransform = ’point’ (codes 0, 2, and 4).

OrderOfTransform OrderOfRotation ViewOfTransform Code


’Rp+T’ ’gba’ ’point’ 0
’Rp+T’ ’abg’ ’point’ 2
’Rp+T’ ’rodriguez’ ’point’ 4
’Rp+T’ ’gba’ ’coordinate_system’ 1
’Rp+T’ ’abg’ ’coordinate_system’ 3
’Rp+T’ ’rodriguez’ ’coordinate_system’ 5
’R(p-T)’ ’gba’ ’point’ 8
’R(p-T)’ ’abg’ ’point’ 10
’R(p-T)’ ’rodriguez’ ’point’ 12
’R(p-T)’ ’gba’ ’coordinate_system’ 9
’R(p-T)’ ’abg’ ’coordinate_system’ 11
’R(p-T)’ ’rodriguez’ ’coordinate_system’ 13

You can convert poses into other representation types using convert_pose_type and query the type using
get_pose_type.
Parameter

. TransX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double


Translation along the x-axis (in [m]).
Default Value : 0.1
Suggested values : TransX ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.1, -0.5, -0.25, -0.125, -0.01, 0, 0.01, 0.125,
0.25, 0.5, 0.1, 0.2, 0.25, 0.5, 0.75, 1.0}
. TransY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Translation along the y-axis (in [m]).
Default Value : 0.1
Suggested values : TransY ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.1, -0.5, -0.25, -0.125, -0.01, 0, 0.01, 0.125,
0.25, 0.5, 0.1, 0.2, 0.25, 0.5, 0.75, 1.0}
. TransZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Translation along the z-axis (in [m]).
Default Value : 0.1
Suggested values : TransZ ∈ {-1.0, -0.75, -0.5, -0.25, -0.2, -0.1, -0.5, -0.25, -0.125, -0.01, 0, 0.01, 0.125,
0.25, 0.5, 0.1, 0.2, 0.25, 0.5, 0.75, 1.0}


. RotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double


Rotation around x-axis or x component of the Rodriguez vector (in [°] or without unit).
Default Value : 90
Suggested values : RotX ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotX ≤ 360
. RotY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Rotation around y-axis or y component of the Rodriguez vector (in [°] or without unit).
Default Value : 90
Suggested values : RotY ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotY ≤ 360
. RotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Rotation around z-axis or z component of the Rodriguez vector (in [°] or without unit).
Default Value : 90
Suggested values : RotZ ∈ {90, 180, 270}
Typical range of values : 0 ≤ RotZ ≤ 360
. OrderOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Order of rotation and translation.
Default Value : "Rp+T"
Suggested values : OrderOfTransform ∈ {"Rp+T", "R(p-T)"}
. OrderOfRotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Meaning of the rotation values.
Default Value : "gba"
Suggested values : OrderOfRotation ∈ {"gba", "abg", "rodriguez"}
. ViewOfTransform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
View of transformation.
Default Value : "point"
Suggested values : ViewOfTransform ∈ {"point", "coordinate_system"}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose.
Number of elements : 7
Example (Syntax: HDevelop)

* goal: calibration with non-standard calibration object


* read start values for interior camera parameters
read_cam_par(’campar.dat’, CamParam)
* (read 3D world points [WorldPointsX,WorldPointsY,WorldPointsZ],
* extract corresponding 2D image points [PixelsRow,PixelsColumn])
* task: create starting value for the exterior camera parameters, i.e., the
* pose of the calibration object in the calibration images
* first image: calibration object placed at a distance of 0.5 m in front of
* the camera and shifted by 0.1 m along the x-axis of the camera coordinate system
* orientation ’read from left to right’: rotated 30 degrees
* around the optical axis of the camera (z-axis),
* then tilted 10 degrees around the new y-axis
create_pose(0.1, 0.0, 0.5, 30, 10, 0, ’Rp+T’, ’abg’, ’point’, StartPose1)
* (accumulate all poses in StartPoses = [StartPose1, StartPose2, ...])
* perform the calibration
camera_calibration(WorldPointsX, WorldPointsY, WorldPointsZ,
PixelsRow, PixelsColumn, CamParam, StartPoses, ’pose’,
FinalCamParam, FinalPoses, Errors)

Result
create_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
create_pose is reentrant and processed without parallelization.
Possible Successors
pose_to_hom_mat3d, write_pose, camera_calibration, hand_eye_calibration


Alternatives
read_pose, hom_mat3d_to_pose
See also
hom_mat3d_rotate, hom_mat3d_translate, convert_pose_type, get_pose_type,
hom_mat3d_to_pose, pose_to_hom_mat3d, write_pose, read_pose
Module
Foundation

T_get_pose_type ( const Htuple Pose, Htuple *OrderOfTransform,
Htuple *OrderOfRotation, Htuple *ViewOfTransform )

Get the representation type of a 3D pose.


With get_pose_type, the representation type of the 3D pose Pose can be queried. See create_pose
for details about 3D poses, their representation types, and the meaning of the parameters OrderOfTransform,
OrderOfRotation, and ViewOfTransform.
Parameter
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose.
Number of elements : 7
. OrderOfTransform (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Order of rotation and translation.
. OrderOfRotation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Meaning of the rotation values.
. ViewOfTransform (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
View of transformation.
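A minimal HDevelop-style sketch (illustrative values only):

* create a pose in the standard representation and query its type
create_pose (0.1, 0.2, 0.3, 0, 0, 0, ’Rp+T’, ’gba’, ’point’, Pose)
get_pose_type (Pose, OrderOfTransform, OrderOfRotation, ViewOfTransform)
* returns ’Rp+T’, ’gba’, and ’point’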
Result
get_pose_type returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
get_pose_type is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
convert_pose_type
See also
create_pose, convert_pose_type, write_pose, read_pose
Module
Foundation

T_hom_mat3d_compose ( const Htuple HomMat3DLeft,
const Htuple HomMat3DRight, Htuple *HomMat3DCompose )

Multiply two homogeneous 3D transformation matrices.


hom_mat3d_compose composes a new 3D transformation matrix by multiplying the two input matrices:

HomMat3DCompose = HomMat3DLeft · HomMat3DRight

For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:

\mathrm{HomMat3DCompose} = \begin{pmatrix} R_l & t_l \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R_r & t_r \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0 & 1 \end{pmatrix}


Parameter
. HomMat3DLeft (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Left input transformation matrix.
. HomMat3DRight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Right input transformation matrix.
. HomMat3DCompose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
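A minimal HDevelop-style sketch (the poses cam_P_ref and ref_P_obj are hypothetical inputs, e.g., obtained from read_pose or a calibration):

* cam_P_ref, ref_P_obj: assumed input poses
pose_to_hom_mat3d (cam_P_ref, cam_H_ref)
pose_to_hom_mat3d (ref_P_obj, ref_H_obj)
* chain the two transformations: cam_H_obj = cam_H_ref * ref_H_obj
hom_mat3d_compose (cam_H_ref, ref_H_obj, cam_H_obj)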
Result
If the parameters are valid, the operator hom_mat3d_compose returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat3d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_compose, hom_mat3d_translate, hom_mat3d_translate_local,
hom_mat3d_scale, hom_mat3d_scale_local, hom_mat3d_rotate,
hom_mat3d_rotate_local, pose_to_hom_mat3d
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
See also
affine_trans_point_3d, hom_mat3d_identity, hom_mat3d_rotate,
hom_mat3d_translate, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation

T_hom_mat3d_identity ( Htuple *HomMat3DIdentity )

Generate the homogeneous transformation matrix of the identical 3D transformation.


hom_mat3d_identity generates the homogeneous transformation matrix HomMat3DIdentity describing
the identical 3D transformation:

\mathrm{HomMat3DIdentity} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat3DIdentity is stored as the
tuple [1,0,0,0,0,1,0,0,0,0,1,0].
Parameter
. HomMat3DIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Transformation matrix.
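A minimal HDevelop-style sketch (illustrative values only; the identity matrix serves as the starting point of a chain of transformations):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_rotate (HomMat3DIdentity, rad(45), ’z’, 0, 0, 0, HomMat3DRotate)
hom_mat3d_translate (HomMat3DRotate, 0.1, 0.2, 0.3, HomMat3D)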
Result
hom_mat3d_identity always returns H_MSG_TRUE.
Parallelization Information
hom_mat3d_identity is reentrant and processed without parallelization.
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Alternatives
pose_to_hom_mat3d


Module
Foundation

T_hom_mat3d_invert ( const Htuple HomMat3D, Htuple *HomMat3DInvert )

Invert a homogeneous 3D transformation matrix.


hom_mat3d_invert inverts the homogeneous 3D transformation matrix given by HomMat3D. The resulting
matrix is returned in HomMat3DInvert.
Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double


Input transformation matrix.
. HomMat3DInvert (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
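A minimal HDevelop-style sketch (the pose cam_P_obj is a hypothetical input, e.g., from a calibration; the inverted matrix describes the reverse transformation from camera into object coordinates):

* cam_P_obj: assumed input pose
pose_to_hom_mat3d (cam_P_obj, cam_H_obj)
hom_mat3d_invert (cam_H_obj, obj_H_cam)
hom_mat3d_to_pose (obj_H_cam, obj_P_cam)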
Result
hom_mat3d_invert returns H_MSG_TRUE if the parameters are valid and the input matrix is invertible. Oth-
erwise, an exception is raised.
Parallelization Information
hom_mat3d_invert is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local,
pose_to_hom_mat3d
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local,
hom_mat3d_to_pose
See also
affine_trans_point_3d, hom_mat3d_identity, hom_mat3d_rotate,
hom_mat3d_translate, pose_to_hom_mat3d, hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_hom_mat3d_rotate ( const Htuple HomMat3D, const Htuple Phi,
const Htuple Axis, const Htuple Px, const Htuple Py, const Htuple Pz,
Htuple *HomMat3DRotate )

Add a rotation to a homogeneous 3D transformation matrix.


hom_mat3d_rotate adds a rotation by the angle Phi around the axis passed in the parameter Axis to the
homogeneous 3D transformation matrix HomMat3D and returns the resulting matrix in HomMat3DRotate. The
axis can be specified by passing the strings ’x’, ’y’, or ’z’, or by passing a vector [x,y,z] as a tuple.
The rotation is described by a 3×3 rotation matrix R. It is performed relative to the global (i.e., fixed) coordinate
system; this corresponds to the following chain of transformation matrices:
Axis = ’x’:

\mathrm{HomMat3DRotate} = \begin{pmatrix} R_x & 0 \\ 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D} \qquad R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{Phi}) & -\sin(\mathrm{Phi}) \\ 0 & \sin(\mathrm{Phi}) & \cos(\mathrm{Phi}) \end{pmatrix}

Axis = ’y’:

\mathrm{HomMat3DRotate} = \begin{pmatrix} R_y & 0 \\ 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D} \qquad R_y = \begin{pmatrix} \cos(\mathrm{Phi}) & 0 & \sin(\mathrm{Phi}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{Phi}) & 0 & \cos(\mathrm{Phi}) \end{pmatrix}

Axis = ’z’:

\mathrm{HomMat3DRotate} = \begin{pmatrix} R_z & 0 \\ 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D} \qquad R_z = \begin{pmatrix} \cos(\mathrm{Phi}) & -\sin(\mathrm{Phi}) & 0 \\ \sin(\mathrm{Phi}) & \cos(\mathrm{Phi}) & 0 \\ 0 & 0 & 1 \end{pmatrix}

Axis = [x,y,z]:

\mathrm{HomMat3DRotate} = \begin{pmatrix} R_a & 0 \\ 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D} \qquad R_a = u \cdot u^T + \cos(\mathrm{Phi}) \cdot (I - u \cdot u^T) + \sin(\mathrm{Phi}) \cdot S

u = \frac{\mathrm{Axis}}{\|\mathrm{Axis}\|} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} \qquad I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad S = \begin{pmatrix} 0 & -z' & y' \\ z' & 0 & -x' \\ -y' & x' & 0 \end{pmatrix}

The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:

\mathrm{HomMat3DRotate} = \begin{pmatrix} 1 & 0 & 0 & +\mathrm{Px} \\ 0 & 1 & 0 & +\mathrm{Py} \\ 0 & 0 & 1 & +\mathrm{Pz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 & -\mathrm{Px} \\ 0 & 1 & 0 & -\mathrm{Py} \\ 0 & 0 & 1 & -\mathrm{Pz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D}

To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_rotate_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / double / Hlong
Axis, to be rotated around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}


. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong


Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; Htuple . double / Hlong
Fixed point of the transformation (z coordinate).
Default Value : 0
Suggested values : Pz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat3DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
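A minimal HDevelop-style sketch (illustrative values only: a rotation of 90 degrees around an axis parallel to the z-axis through the fixed point (1,1,0)):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_rotate (HomMat3DIdentity, rad(90), ’z’, 1, 1, 0, HomMat3DRotate)
* the fixed point is mapped onto itself
affine_trans_point_3d (HomMat3DRotate, 1, 1, 0, Qx, Qy, Qz)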
Result
If the parameters are valid, the operator hom_mat3d_rotate returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat3d_rotate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
Possible Successors
hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_rotate_local,
pose_to_hom_mat3d, hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_hom_mat3d_rotate_local ( const Htuple HomMat3D, const Htuple Phi,
const Htuple Axis, Htuple *HomMat3DRotate )

Add a rotation to a homogeneous 3D transformation matrix.


hom_mat3d_rotate_local adds a rotation by the angle Phi around the axis passed in the parameter Axis to
the homogeneous 3D transformation matrix HomMat3D and returns the resulting matrix in HomMat3DRotate.
The axis can be specified by passing the strings ’x’, ’y’, or ’z’, or by passing a vector [x,y,z] as a tuple.
The rotation is described by a 3×3 rotation matrix R. In contrast to hom_mat3d_rotate, it is performed
relative to the local coordinate system, i.e., the coordinate system described by HomMat3D; this corresponds to
the following chain of transformation matrices:
Axis = ’x’:

\mathrm{HomMat3DRotate} = \mathrm{HomMat3D} \cdot \begin{pmatrix} R_x & 0 \\ 0 & 1 \end{pmatrix} \qquad R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{Phi}) & -\sin(\mathrm{Phi}) \\ 0 & \sin(\mathrm{Phi}) & \cos(\mathrm{Phi}) \end{pmatrix}

Axis = ’y’:

\mathrm{HomMat3DRotate} = \mathrm{HomMat3D} \cdot \begin{pmatrix} R_y & 0 \\ 0 & 1 \end{pmatrix} \qquad R_y = \begin{pmatrix} \cos(\mathrm{Phi}) & 0 & \sin(\mathrm{Phi}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{Phi}) & 0 & \cos(\mathrm{Phi}) \end{pmatrix}

Axis = ’z’:

\mathrm{HomMat3DRotate} = \mathrm{HomMat3D} \cdot \begin{pmatrix} R_z & 0 \\ 0 & 1 \end{pmatrix} \qquad R_z = \begin{pmatrix} \cos(\mathrm{Phi}) & -\sin(\mathrm{Phi}) & 0 \\ \sin(\mathrm{Phi}) & \cos(\mathrm{Phi}) & 0 \\ 0 & 0 & 1 \end{pmatrix}

Axis = [x,y,z]:

\mathrm{HomMat3DRotate} = \mathrm{HomMat3D} \cdot \begin{pmatrix} R_a & 0 \\ 0 & 1 \end{pmatrix} \qquad R_a = u \cdot u^T + \cos(\mathrm{Phi}) \cdot (I - u \cdot u^T) + \sin(\mathrm{Phi}) \cdot S

u = \frac{\mathrm{Axis}}{\|\mathrm{Axis}\|} = \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} \qquad I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad S = \begin{pmatrix} 0 & -z' & y' \\ z' & 0 & -x' \\ -y' & x' & 0 \end{pmatrix}
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DRotate.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / double / Hlong
Axis, to be rotated around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}
. HomMat3DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
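A minimal HDevelop-style sketch contrasting the local with the global version (illustrative values only):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_translate_local (HomMat3DIdentity, 1, 2, 3, HomMat3DTranslate)
* rotate around the z-axis of the translated (local) coordinate system
hom_mat3d_rotate_local (HomMat3DTranslate, rad(90), ’z’, HomMat3DRotate)
* the origin of the local system, i.e., the point (1,2,3), remains fixed
affine_trans_point_3d (HomMat3DRotate, 0, 0, 0, Qx, Qy, Qz)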
Result
If the parameters are valid, the operator hom_mat3d_rotate_local returns H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
hom_mat3d_rotate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate_local, hom_mat3d_scale_local,
hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate_local, hom_mat3d_scale_local, hom_mat3d_rotate_local
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_rotate, pose_to_hom_mat3d,
hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation


T_hom_mat3d_scale ( const Htuple HomMat3D, const Htuple Sx,
const Htuple Sy, const Htuple Sz, const Htuple Px, const Htuple Py,
const Htuple Pz, Htuple *HomMat3DScale )

Add a scaling to a homogeneous 3D transformation matrix.


hom_mat3d_scale adds a scaling by the scale factors Sx, Sy, and Sz to the homogeneous 3D transformation
matrix HomMat3D and returns the resulting matrix in HomMat3DScale. The scaling is described by a 3×3
scaling matrix S. It is performed relative to the global (i.e., fixed) coordinate system; this corresponds to the
following chain of transformation matrices:

\mathrm{HomMat3DScale} = \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D} \qquad S = \begin{pmatrix} \mathrm{Sx} & 0 & 0 \\ 0 & \mathrm{Sy} & 0 \\ 0 & 0 & \mathrm{Sz} \end{pmatrix}

The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:

\mathrm{HomMat3DScale} = \begin{pmatrix} 1 & 0 & 0 & +\mathrm{Px} \\ 0 & 1 & 0 & +\mathrm{Py} \\ 0 & 0 & 1 & +\mathrm{Pz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 & -\mathrm{Px} \\ 0 & 1 & 0 & -\mathrm{Py} \\ 0 & 0 & 1 & -\mathrm{Pz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D}

To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_scale_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sy ≠ 0
. Sz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the z-axis.
Default Value : 2
Suggested values : Sz ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sz ≠ 0
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}


. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; Htuple . double / Hlong


Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; Htuple . double / Hlong
Fixed point of the transformation (z coordinate).
Default Value : 0
Suggested values : Pz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat3DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
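A minimal HDevelop-style sketch (illustrative values only: uniform scaling by a factor of 2 with the fixed point (1,1,1)):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_scale (HomMat3DIdentity, 2, 2, 2, 1, 1, 1, HomMat3DScale)
* (0,0,0) is mapped to (-1,-1,-1), while the fixed point (1,1,1) is unchanged
affine_trans_point_3d (HomMat3DScale, 0, 0, 0, Qx, Qy, Qz)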
Result
hom_mat3d_scale returns H_MSG_TRUE if all three scale factors are not 0. If necessary, an exception is
raised.
Parallelization Information
hom_mat3d_scale is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
Possible Successors
hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_scale_local,
pose_to_hom_mat3d, hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_hom_mat3d_scale_local ( const Htuple HomMat3D, const Htuple Sx,
const Htuple Sy, const Htuple Sz, Htuple *HomMat3DScale )

Add a scaling to a homogeneous 3D transformation matrix.


hom_mat3d_scale_local adds a scaling by the scale factors Sx, Sy, and Sz to the homogeneous 3D trans-
formation matrix HomMat3D and returns the resulting matrix in HomMat3DScale. The scaling is described by a
3×3 scaling matrix S. In contrast to hom_mat3d_scale, it is performed relative to the local coordinate system,
i.e., the coordinate system described by HomMat3D; this corresponds to the following chain of transformation
matrices:

\mathrm{HomMat3DScale} = \mathrm{HomMat3D} \cdot \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix} \qquad S = \begin{pmatrix} \mathrm{Sx} & 0 & 0 \\ 0 & \mathrm{Sy} & 0 \\ 0 & 0 & \mathrm{Sz} \end{pmatrix}

The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DScale.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].


Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double


Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sy ≠ 0
. Sz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the z-axis.
Default Value : 2
Suggested values : Sz ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sz ≠ 0
. HomMat3DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
Result
hom_mat3d_scale_local returns H_MSG_TRUE if all three scale factors are not 0. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat3d_scale_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate_local, hom_mat3d_scale_local,
hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate_local, hom_mat3d_scale_local, hom_mat3d_rotate_local
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_scale, pose_to_hom_mat3d,
hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_hom_mat3d_to_pose ( const Htuple HomMat3D, Htuple *Pose )

Convert a homogeneous transformation matrix into a 3D pose.


hom_mat3d_to_pose converts a homogeneous transformation matrix into the corresponding 3D pose with type
code 0. For details about 3D poses and the corresponding transformation matrices please refer to create_pose.
A typical application of hom_mat3d_to_pose is that a 3D pose was converted into a homogeneous transfor-
mation matrix to further transform it, e.g., with hom_mat3d_rotate or hom_mat3d_translate, and now
must be converted back into a pose to use it as input for operators like image_points_to_world_plane.
Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double


Homogeneous transformation matrix.
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Equivalent 3D pose.
Number of elements : 7
Example (Syntax: HDevelop)


camera_calibration(WorldPointsX, WorldPointsY, WorldPointsZ,
                   PixelsRow, PixelsColumn, CamParam, StartPose, ’pose’,
                   FinalCamParam, FinalPose, Errors)
* transform FinalPose to homogeneous transformation matrix
pose_to_hom_mat3d(FinalPose, cam_H_cal)
* rotate it 90 degree around the y-axis to obtain a world coordinate system
* whose y- and z-axis lie in the plane of the calibration plate while the
* x-axis points ’upwards’: cam_H_w = cam_H_cal * RotY(90)
hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, rad(90), ’y’, 0, 0, 0,
HomMat3DRotateY)
hom_mat3d_compose(cam_H_cal, HomMat3DRotateY, cam_H_w)
* transform back to pose
hom_mat3d_to_pose(cam_H_w, cam_P_w)
* use pose to transform an image point into the world coordinate system
image_points_to_world_plane(FinalCamParam, cam_P_w, 87, 23.5, 1,
w_px, w_py)

Result
hom_mat3d_to_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
hom_mat3d_to_pose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_rotate, hom_mat3d_translate, hom_mat3d_invert
Possible Successors
camera_calibration, write_pose, disp_caltab, sim_caltab
See also
create_pose, camera_calibration, disp_caltab, sim_caltab, write_pose, read_pose,
pose_to_hom_mat3d, project_3d_point, get_line_of_sight, hom_mat3d_rotate,
hom_mat3d_translate, hom_mat3d_invert, affine_trans_point_3d
Module
Foundation

T_hom_mat3d_translate ( const Htuple HomMat3D, const Htuple Tx,
const Htuple Ty, const Htuple Tz, Htuple *HomMat3DTranslate )

Add a translation to a homogeneous 3D transformation matrix.


hom_mat3d_translate adds a translation by the vector t = (Tx,Ty,Tz) to the homogeneous 3D transforma-
tion matrix HomMat3D and returns the resulting matrix in HomMat3DTranslate. The translation is performed
relative to the global (i.e., fixed) coordinate system; this corresponds to the following chain of transformation
matrices:

\mathrm{HomMat3DTranslate} = \begin{pmatrix} 1 & 0 & 0 & \mathrm{Tx} \\ 0 & 1 & 0 & \mathrm{Ty} \\ 0 & 0 & 1 & \mathrm{Tz} \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat3D}
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_translate_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Tz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; Htuple . double / Hlong
Translation along the z-axis.
Default Value : 64
Suggested values : Tz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat3DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
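A minimal HDevelop-style sketch (illustrative values only; because the translation is performed in the global coordinate system, the translation vector is not affected by the preceding rotation):

hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_rotate (HomMat3DIdentity, rad(90), ’z’, 0, 0, 0, HomMat3DRotate)
hom_mat3d_translate (HomMat3DRotate, 1, 0, 0, HomMat3DTranslate)
* the origin is mapped to (1,0,0); with hom_mat3d_translate_local it
* would be mapped to (0,1,0) instead
affine_trans_point_3d (HomMat3DTranslate, 0, 0, 0, Qx, Qy, Qz)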
Result
If the parameters are valid, the operator hom_mat3d_translate returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
hom_mat3d_translate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
Possible Successors
hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_translate_local,
pose_to_hom_mat3d, hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_hom_mat3d_translate_local ( const Htuple HomMat3D, const Htuple Tx,
const Htuple Ty, const Htuple Tz, Htuple *HomMat3DTranslate )

Add a translation to a homogeneous 3D transformation matrix.


hom_mat3d_translate_local adds a translation by the vector t = (Tx,Ty,Tz) to the homogeneous 3D
transformation matrix HomMat3D and returns the resulting matrix in HomMat3DTranslate. In contrast to
hom_mat3d_translate, the translation is performed relative to the local coordinate system, i.e., the coordinate
system described by HomMat3D; this corresponds to the following chain of transformation matrices:

\mathrm{HomMat3DTranslate} = \mathrm{HomMat3D} \cdot \begin{pmatrix} 1 & 0 & 0 & \mathrm{Tx} \\ 0 & 1 & 0 & \mathrm{Ty} \\ 0 & 0 & 1 & \mathrm{Tz} \\ 0 & 0 & 0 & 1 \end{pmatrix}

Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix

\begin{pmatrix} r_a & r_b & r_c & t_d \\ r_e & r_f & r_g & t_h \\ r_i & r_j & r_k & t_l \\ 0 & 0 & 0 & 1 \end{pmatrix}

is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter

. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double


Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Tz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; Htuple . double / Hlong
Translation along the z-axis.
Default Value : 64
Suggested values : Tz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat3DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_translate_local returns H_MSG_TRUE. If neces-
sary, an exception is raised.
Parallelization Information
hom_mat3d_translate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate_local, hom_mat3d_scale_local,
hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate_local, hom_mat3d_scale_local, hom_mat3d_rotate_local
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_translate, pose_to_hom_mat3d,
hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation

T_pose_to_hom_mat3d ( const Htuple Pose, Htuple *HomMat3D )

Convert a 3D pose into a homogeneous transformation matrix.


pose_to_hom_mat3d converts a 3D pose Pose, e.g., the exterior camera parameters, into the equivalent ho-
mogeneous transformation matrix HomMat3D. For details about 3D poses and the corresponding transformation
matrices please refer to create_pose.
A typical application of pose_to_hom_mat3d is that you want to further transform the pose, e.g., rotate
or translate it using hom_mat3d_rotate or hom_mat3d_translate. In case of the exterior camera
parameters, this can be necessary if the calibration plate cannot be placed such that its coordinate system coincides
with the desired world coordinate system.
Parameter

. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong


3D pose.
Number of elements : 7
. HomMat3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Equivalent homogeneous transformation matrix.


Example (Syntax: HDevelop)

* read interior camera parameters


read_cam_par(’campar.dat’, CamParam)
* read exterior camera parameters
read_pose(’startpose.dat’, StartPose)
* (read 3D world points [WorldPointsX,WorldPointsY,WorldPointsZ],
* extract corresponding 2D image points [PixelsRow,PixelsColumn])
* calibration of exterior camera parameters:
camera_calibration(WorldPointsX, WorldPointsY, WorldPointsZ,
PixelsRow, PixelsColumn, CamParam, StartPose, ’pose’,
FinalCamParam, FinalPose, Errors)
* transform FinalPose to homogeneous transformation matrix
pose_to_hom_mat3d(FinalPose, cam_H_cal)
* rotate it 90 degree around its y-axis to obtain a world coordinate system
* whose y- and z-axis lie in the plane of the calibration plate while the
* x-axis points ’upwards’: cam_H_w = cam_H_cal * RotY(90)
hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, rad(90), ’y’, 0, 0, 0,
HomMat3DRotateY)
hom_mat3d_compose(cam_H_cal, HomMat3DRotateY, cam_H_w)

Result
pose_to_hom_mat3d returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
pose_to_hom_mat3d is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, read_pose
Possible Successors
affine_trans_point_3d, hom_mat3d_invert, hom_mat3d_translate,
hom_mat3d_rotate, hom_mat3d_to_pose
See also
create_pose, camera_calibration, write_pose, read_pose, hom_mat3d_to_pose,
project_3d_point, get_line_of_sight, hom_mat3d_rotate, hom_mat3d_translate,
hom_mat3d_invert, affine_trans_point_3d
Module
Foundation

T_read_pose ( const Htuple PoseFile, Htuple *Pose )

Read a 3D pose from a text file.


read_pose is used to read the 3D pose Pose from a text file with the name PoseFile.
A pose describes a rigid 3D transformation, i.e., a transformation consisting of an arbitrary translation and rotation,
with 6 parameters, three for the translation, three for the rotation. With a seventh parameter different pose types
can be indicated (see create_pose).
A suitable file can be generated by the operator write_pose and looks like the following:

# 3D POSE PARAMETERS: rotation and translation

# Used representation type:


f 0

# Rotation angles [deg] or Rodriguezvector:


r -17.8134 1.83816 0.288092

# Translation vector (x y z [m]):


t 0.280164 0.150644 1.7554

Parameter
. PoseFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the exterior camera parameters.
Default Value : "campose.dat"
List of values : PoseFile ∈ {"campose.dat", "campose.initial", "campose.final"}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose.
Number of elements : 7
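A minimal HDevelop-style sketch (assuming the file ’campose.dat’ has been written beforehand, e.g., by write_pose):

read_pose (’campose.dat’, Pose)
* convert the pose into the equivalent transformation matrix
pose_to_hom_mat3d (Pose, HomMat3D)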
Result
read_pose returns H_MSG_TRUE if all parameter values are correct and the file has been read successfully. If
necessary an exception handling is raised.
Parallelization Information
read_pose is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par
Possible Successors
pose_to_hom_mat3d, camera_calibration, disp_caltab, sim_caltab
See also
create_pose, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_pose, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation

T_set_origin_pose ( const Htuple PoseIn, const Htuple DX,
const Htuple DY, const Htuple DZ, Htuple *PoseNewOrigin )

Translate the origin of a 3D pose.


set_origin_pose translates the origin of the 3D pose PoseIn by the vector given by DX, DY, and DZ
and returns the result in PoseNewOrigin. Note that the translation is performed relative to the local coordi-
nate system of the pose itself. For example, if PoseIn describes the pose of an object in camera coordinates,
PoseNewOrigin is obtained by translating the object’s coordinate system by DX along its own x-axis (and so
on for the other axes) and not along the x-axis of the camera coordinate system. This corresponds to the following
chain of transformations:
   
$$\mathrm{PoseNewOrigin} \;=\; \mathrm{PoseIn} \cdot
\begin{pmatrix}
1 & 0 & 0 & \mathrm{DX} \\
0 & 1 & 0 & \mathrm{DY} \\
0 & 0 & 1 & \mathrm{DZ} \\
0 & 0 & 0 & 1
\end{pmatrix}$$

Thus, set_origin_pose is a shortcut for the following sequence of operator calls:


pose_to_hom_mat3d (PoseIn, HomMat3DIn)
hom_mat3d_translate_local (HomMat3DIn, DX, DY, DZ, HomMat3DNewOrigin)
hom_mat3d_to_pose (HomMat3DNewOrigin, PoseNewOrigin)

A typical application of this operator is the definition of a world coordinate system by placing the standard
calibration plate on the plane of measurements. In this case, the external camera parameters returned by
camera_calibration correspond to a coordinate system that lies above the measurement plane, because
the coordinate system of the calibration plate is located on its surface and the plate has a certain thickness. To
correct the pose, call set_origin_pose with the translation vector (0,0,D), where D is the thickness of the
calibration plate.


Parameter
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
original 3D pose.
Number of elements : 7
. DX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
translation of the origin in x-direction.
Default Value : 0
. DY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
translation of the origin in y-direction.
Default Value : 0
. DZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
translation of the origin in z-direction.
Default Value : 0
. PoseNewOrigin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
new 3D pose after applying the translation.
Number of elements : 7
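Example (Syntax: HDevelop)

The following sketch illustrates the correction for the calibration plate thickness described above; the thickness
value of 0.00063 m is only an example.

* FinalPose is assumed to be the pose returned by camera_calibration
set_origin_pose(FinalPose, 0, 0, 0.00063, WorldPose)
* WorldPose now refers to the measurement plane underneath the calibration plate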
Result
set_origin_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
set_origin_pose is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
write_pose, pose_to_hom_mat3d, image_points_to_world_plane,
contour_to_world_plane_xld
See also
hom_mat3d_translate_local
Module
Foundation

T_write_pose ( const Htuple Pose, const Htuple PoseFile )

Write a 3D pose to a text file.


write_pose is used to write a 3D pose Pose into a text file with the name PoseFile.
A pose describes a rigid 3D transformation, i.e., a transformation consisting of an arbitrary translation and rotation,
with 6 parameters, three for the translation, three for the rotation. With a seventh parameter different pose types
can be indicated (see create_pose).
A file generated by write_pose looks like the following:

# 3D POSE PARAMETERS: rotation and translation

# Used representation type:


f 0

# Rotation angles [deg] or Rodriguez vector:


r -17.8134 1.83816 0.288092

# Translation vector (x y z [m]):


t 0.280164 0.150644 1.7554


Parameter

. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong


3D pose.
Number of elements : 7
. PoseFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; Htuple . const char *
File name of the exterior camera parameters.
Default Value : "campose.dat"
List of values : PoseFile ∈ {"campose.dat", "campose.initial", "campose.final"}
Example (Syntax: HDevelop)

* read calibration images


read_image(Image1, ’calib-01’)
read_image(Image2, ’calib-02’)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3], StartCamPar,
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write exterior camera parameters of first calibration image
write_pose(NFinalPose[0:6], ’campose.dat’)

Result
write_pose returns H_MSG_TRUE if all parameter values are correct and the file has been written successfully.
If necessary an exception handling is raised.
Parallelization Information
write_pose is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration, hom_mat3d_to_pose
See also
create_pose, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_pose, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation


15.3 Background-Estimator

close_all_bg_esti ( )
T_close_all_bg_esti ( )

Delete all background estimation data sets.


close_all_bg_esti deletes the background estimation data sets and releases all used memory.
Attention
close_all_bg_esti exists solely for the purpose of implementing the “reset program” functionality in HDe-
velop. close_all_bg_esti must not be used in any application.
Result
If it is possible to close the background estimation data sets the operator close_all_bg_esti returns the
value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
close_all_bg_esti is local and processed completely exclusively without parallelization.
Alternatives
close_bg_esti
See also
create_bg_esti
Module
Foundation

close_bg_esti ( Hlong BgEstiHandle )


T_close_bg_esti ( const Htuple BgEstiHandle )

Delete the background estimation data set.


close_bg_esti deletes the background estimation data set and releases all used memory.
Parameter

. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong


ID of the BgEsti data set.
Example

/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset
with fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) \’
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;


/* etc. */
/* - end of background estimation - */
/* close the dataset: */
close_bg_esti(BgEstiHandle) ;

Result
close_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
close_bg_esti is local and processed completely exclusively without parallelization.
Possible Predecessors
run_bg_esti
See also
create_bg_esti
Module
Foundation

create_bg_esti ( const Hobject InitializeImage, double Syspar1,


double Syspar2, const char *GainMode, double Gain1, double Gain2,
const char *AdaptMode, double MinDiff, Hlong StatNum,
double ConfidenceC, double TimeC, Hlong *BgEstiHandle )

T_create_bg_esti ( const Hobject InitializeImage, const Htuple Syspar1,


const Htuple Syspar2, const Htuple GainMode, const Htuple Gain1,
const Htuple Gain2, const Htuple AdaptMode, const Htuple MinDiff,
const Htuple StatNum, const Htuple ConfidenceC, const Htuple TimeC,
Htuple *BgEstiHandle )

Generate and initialize a data set for the background estimation.


create_bg_esti creates a new data set for the background estimation and initializes it with the appropriate
parameters. The estimated background image is part of this data set. The newly created set automatically becomes
the current set.
InitializeImage is used as an initial prediction for the background image. For a good prediction, an image of
the observed scene without moving objects should be passed in InitializeImage. That way the foreground
adaptation rate can be kept low. If no image of the empty scene is available, a homogeneous gray image can be used
instead. In that case the adaptation rate for the foreground must be raised, because initially most of the image
will be detected as foreground. The initialization image must be of type byte or real. Because only single-channel
images are processed, a separate data set must be created for every channel. Size and region of InitializeImage
determine size and region for all background estimations (run_bg_esti) that are performed with this data set.
Syspar1 and Syspar2 are the parameters of the Kalman system matrix. The system matrix describes the
system of the gray value changes according to Kalman filter theory. The background estimator implements a
different system for each pixel.
GainMode defines whether a fixed Kalman gain should be used for the estimation or whether the gain should
adapt itself depending on the difference between estimation and actual value. If GainMode is set to ’fixed’, then
Gain1 is used as Kalman gain for pixels predicted as foreground and Gain2 as gain for pixels predicted as
background. Gain1 should be smaller than Gain2, because adaptation of the foreground should be slower than
adaptation of the background. Both Gain1 and Gain2 should be smaller than 1.0.
If GainMode is set to ’frame’, then tables for foreground and background estimation are computed containing
Kalman gains for all the 256 possible grayvalue changes. Gain1 and Gain2 then denote the number of frames
necessary to adapt the difference between estimated value and actual value. So with a fixed time for adaptation
(i.e. number of frames) the needed Kalman gain grows with the grayvalue difference. Gain1 should therefore
be larger than Gain2. Different gains for different grayvalue differences are useful if the background estimator
is used for generating an ’empty’ scene assuming that there are always moving objects in the observed area. In
that case the adaptation time for foreground adaptation (Gain1) must not be too large. Gain1 and Gain2 should
be larger than 1.0.


AdaptMode denotes whether the foreground/background decision threshold applied to the grayvalue difference
between estimation and actual value is fixed or whether it adapts itself depending on the grayvalue deviation of the
background pixels.
If AdaptMode is set to ’off’, the parameter MinDiff denotes a fixed threshold. The parameters StatNum,
ConfidenceC and TimeC are meaningless in this case.
If AdaptMode is set to ’on’, then MinDiff is interpreted as a base threshold. For each pixel an offset is added
to this threshold depending on the statistical evaluation of the pixel value over time. StatNum holds the number
of data sets (past frames) that are used for computing the grayvalue variance (FIR-Filter). ConfidenceC is used
to determine the confidence interval.
The confidence interval determines the values of the background statistics if background pixels are hidden by
a foreground object and thus are detected as foreground. According to the Student t-distribution the confidence
constant is 4.30 (3.25, 2.82, 2.26) for a confidence interval of 99.8% (99.0%, 98.0%, 95.0%). TimeC holds a
time constant for the exp-function that raises the threshold in case of a foreground estimation of the pixel. That
means, the threshold is raised in regions where movement is detected in the foreground. That way larger changes in
illumination are tolerated if the background becomes visible again. The main reason for increasing this tolerance is
the impossibility of predicting illumination changes while the background is hidden. Therefore no adaptation
of the estimated background image is possible.
Attention
If GainMode was set to ’frame’, the run-time can be extremely long for large values of Gain1 or Gain2, because
the values for the gains’ table are determined by a simple binary search.
Parameter

. InitializeImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real


initialization image.
. Syspar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
1. system matrix parameter.
Default Value : 0.7
Suggested values : Syspar1 ∈ {0.65, 0.7, 0.75}
Typical range of values : 0.05 ≤ Syspar1 ≤ 1.0
Recommended Increment : 0.05
. Syspar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
2. system matrix parameter.
Default Value : 0.7
Suggested values : Syspar2 ∈ {0.65, 0.7, 0.75}
Typical range of values : 0.05 ≤ Syspar2 ≤ 1.0
Recommended Increment : 0.05
. GainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Gain type.
Default Value : "fixed"
List of values : GainMode ∈ {"fixed", "frame"}
. Gain1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Kalman gain / foreground adaptation time.
Default Value : 0.002
Suggested values : Gain1 ∈ {10.0, 20.0, 50.0, 0.1, 0.05, 0.01, 0.005, 0.001}
Restriction : 0.0 ≤ Gain1
. Gain2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Kalman gain / background adaptation time.
Default Value : 0.02
Suggested values : Gain2 ∈ {2.0, 4.0, 8.0, 0.5, 0.1, 0.05, 0.01}
Restriction : 0.0 ≤ Gain2
. AdaptMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Threshold adaptation.
Default Value : "on"
List of values : AdaptMode ∈ {"on", "off"}


. MinDiff (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


Foreground/background threshold.
Default Value : 7.0
Suggested values : MinDiff ∈ {3.0, 5.0, 7.0, 9.0, 11.0}
Recommended Increment : 0.2
. StatNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Number of statistic data sets.
Default Value : 10
Suggested values : StatNum ∈ {5, 10, 20, 30}
Typical range of values : 1 ≤ StatNum
Recommended Increment : 5
. ConfidenceC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Confidence constant.
Default Value : 3.25
Suggested values : ConfidenceC ∈ {4.30, 3.25, 2.82, 2.62}
Recommended Increment : 0.01
Restriction : 0.0 < ConfidenceC
. TimeC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Constant for decay time.
Default Value : 15.0
Suggested values : TimeC ∈ {10.0, 15.0, 20.0}
Recommended Increment : 5.0
Restriction : 0.0 < TimeC
. BgEstiHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong *
ID of the BgEsti data set.
Example

long BgEstiHandle, BgEstiHandle2;


/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize 1. BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7.0,10,3.25,15.0:&BgEstiHandle) ;
/* initialize 2. BgEsti-Dataset with
frame orientated gains and fixed threshold */
create_bg_esti(InitImage,0.7,0.7,"frame",30.0,4.0,
"off",9.0,10,3.25,15.0:&BgEstiHandle2) ;

Result
create_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
create_bg_esti is local and processed completely exclusively without parallelization.
Possible Successors
run_bg_esti
See also
set_bg_esti_params, close_bg_esti
Module
Foundation


get_bg_esti_params ( Hlong BgEstiHandle, double *Syspar1,


double *Syspar2, char *GainMode, double *Gain1, double *Gain2,
char *AdaptMode, double *MinDiff, Hlong *StatNum, double *ConfidenceC,
double *TimeC )

T_get_bg_esti_params ( const Htuple BgEstiHandle, Htuple *Syspar1,


Htuple *Syspar2, Htuple *GainMode, Htuple *Gain1, Htuple *Gain2,
Htuple *AdaptMode, Htuple *MinDiff, Htuple *StatNum,
Htuple *ConfidenceC, Htuple *TimeC )

Return the parameters of the data set.


get_bg_esti_params returns the parameters of the data set. The returned parameters are the same as in
create_bg_esti and set_bg_esti_params (see these for an explanation).
Parameter
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
. Syspar1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
1. system matrix parameter.
. Syspar2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
2. system matrix parameter.
. GainMode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Gain type.
. Gain1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Kalman gain / foreground adaptation time.
. Gain2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Kalman gain / background adaptation time.
. AdaptMode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Threshold adaptation.
. MinDiff (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Foreground / background threshold.
. StatNum (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of statistic data sets.
. ConfidenceC (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Confidence constant.
. TimeC (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Constant for decay time.
Example

/* read Init-Image:*/
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7.0,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;


/* etc. */
/* change only the gain parameter in dataset: */
get_bg_esti_params(BgEstiHandle,&par1,&par2,&par3,&par4,
&par5,&par6,&par7,&par8,&par9,&par10);
set_bg_esti_params(BgEstiHandle,par1,par2,par3,0.004,
0.08,par6,par7,par8,par9,par10) ;
/* read the next image in sequence: */
read_image(&Image3,"Image_3") ;
/* estimate the Background: */
run_bg_esti(Image3,&Region3,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region3,WindowHandle) ;
/* etc. */

Result
get_bg_esti_params returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
get_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
set_bg_esti_params
Module
Foundation

give_bg_esti ( Hobject *BackgroundImage, Hlong BgEstiHandle )


T_give_bg_esti ( Hobject *BackgroundImage, const Htuple BgEstiHandle )

Return the estimated background image.


give_bg_esti returns the estimated background image of the current BgEsti data set. The background image
has the same type and size as the initialization image passed in create_bg_esti.
Parameter
. BackgroundImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / real
Estimated background image of the current data set.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
Example

/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* give the background image from the active dataset: */
give_bg_esti(&BgImage,BgEstiHandle) ;
/* display the background image: */
disp_image(BgImage,WindowHandle) ;


Result
give_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
give_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti, create_bg_esti, update_bg_esti
See also
run_bg_esti, update_bg_esti, create_bg_esti
Module
Foundation

run_bg_esti ( const Hobject PresentImage, Hobject *ForegroundRegion,


Hlong BgEstiHandle )

T_run_bg_esti ( const Hobject PresentImage, Hobject *ForegroundRegion,


const Htuple BgEstiHandle )

Estimate the background and return the foreground region.


run_bg_esti adapts the background image stored in the BgEsti data set using a Kalman filter on each pixel and
returns a region of the foreground (detected moving objects).
For every pixel an estimation of its grayvalue is computed using the values of the current data set and its stored
background image and the current image (PresentImage). By comparison to the threshold (fixed or adaptive,
see create_bg_esti) the pixels are classified as either foreground or background.

The background estimation processes only single-channel images. Therefore the background has to be adapted
separately for every channel.

The background estimation should be used on half- or even quarter-sized images. For this, the input images (and
the initialization image!) have to be reduced using zoom_image_factor. The advantage is a shorter run-time
on the one hand and a low-pass filtering on the other. The filtering eliminates high-frequency noise and results in a
more reliable estimation. As a result the threshold (see create_bg_esti) can be lowered. The foreground
region returned by run_bg_esti then has to be enlarged again for further processing, as sketched below.
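The following sketch (HDevelop syntax) illustrates this approach; the zoom factor of 0.5 and the
create_bg_esti parameter values are only examples:

zoom_image_factor(InitImage, InitImageSmall, 0.5, 0.5, 'constant')
create_bg_esti(InitImageSmall, 0.7, 0.7, 'fixed', 0.002, 0.02, 'on', 7, 10, 3.25, 15.0, BgEstiHandle)
zoom_image_factor(Image, ImageSmall, 0.5, 0.5, 'constant')
run_bg_esti(ImageSmall, ForegroundSmall, BgEstiHandle)
* enlarge the foreground region again for further processing
zoom_region(ForegroundSmall, ForegroundRegion, 2, 2)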
Attention
The passed image (PresentImage) must have the same type and size as the background image of the current
data set (initialized with create_bg_esti).
Parameter
. PresentImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real
Current image.
. ForegroundRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Region of the detected foreground.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
Example

/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */


read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;
/* etc. */

Result
run_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
run_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti, update_bg_esti
Possible Successors
run_bg_esti, give_bg_esti, update_bg_esti
See also
set_bg_esti_params, create_bg_esti, update_bg_esti, give_bg_esti
Module
Foundation

set_bg_esti_params ( Hlong BgEstiHandle, double Syspar1,


double Syspar2, const char *GainMode, double Gain1, double Gain2,
const char *AdaptMode, double MinDiff, Hlong StatNum,
double ConfidenceC, double TimeC )

T_set_bg_esti_params ( const Htuple BgEstiHandle,


const Htuple Syspar1, const Htuple Syspar2, const Htuple GainMode,
const Htuple Gain1, const Htuple Gain2, const Htuple AdaptMode,
const Htuple MinDiff, const Htuple StatNum, const Htuple ConfidenceC,
const Htuple TimeC )

Change the parameters of the data set.


set_bg_esti_params is used to change the parameters of the data set. The parameters passed by
set_bg_esti_params are the same as in create_bg_esti (see there for an explanation).
The image format cannot be changed! To do this, a new data set with an initialization image of the appropriate
format has to be created.
To exchange the background image completely, use update_bg_esti. The current image then has to be passed
for both the input image and the update region.
Attention
If GainMode was set to ’frame’, the run-time can be extremely long for large values of Gain1 or Gain2, because
the values for the gains’ table are determined by a simple binary search.
Parameter

. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong


ID of the BgEsti data set.


. Syspar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double


1. system matrix parameter.
Default Value : 0.7
Suggested values : Syspar1 ∈ {0.65, 0.7, 0.75}
Typical range of values : 0.05 ≤ Syspar1 ≤ 1.0
Recommended Increment : 0.05
. Syspar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
2. system matrix parameter.
Default Value : 0.7
Suggested values : Syspar2 ∈ {0.65, 0.7, 0.75}
Typical range of values : 0.05 ≤ Syspar2 ≤ 1.0
Recommended Increment : 0.05
. GainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Gain type.
Default Value : "fixed"
List of values : GainMode ∈ {"fixed", "frame"}
. Gain1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Kalman gain / foreground adaptation time.
Default Value : 0.002
Suggested values : Gain1 ∈ {10.0, 20.0, 50.0, 0.1, 0.05, 0.01, 0.005, 0.001}
Restriction : 0.0 ≤ Gain1
. Gain2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Kalman gain / background adaptation time.
Default Value : 0.02
Suggested values : Gain2 ∈ {2.0, 4.0, 8.0, 0.5, 0.1, 0.05, 0.01}
Restriction : 0.0 ≤ Gain2
. AdaptMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Threshold adaptation.
Default Value : "on"
List of values : AdaptMode ∈ {"on", "off"}
. MinDiff (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Foreground/background threshold.
Default Value : 7.0
Suggested values : MinDiff ∈ {3.0, 5.0, 7.0, 9.0, 11.0}
Recommended Increment : 0.2
. StatNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Number of statistic data sets.
Default Value : 10
Suggested values : StatNum ∈ {5, 10, 20, 30}
Typical range of values : 1 ≤ StatNum
Recommended Increment : 5
. ConfidenceC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Confidence constant.
Default Value : 3.25
Suggested values : ConfidenceC ∈ {4.30, 3.25, 2.82, 2.62}
Recommended Increment : 0.01
Restriction : 0.0 < ConfidenceC
. TimeC (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Constant for decay time.
Default Value : 15.0
Suggested values : TimeC ∈ {10.0, 15.0, 20.0}
Recommended Increment : 5.0
Restriction : 0.0 < TimeC
Example

/* read Init-Image:*/
read_image(&InitImage,"Init_Image") ;


/* initialize BgEsti-Dataset with


fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7.0,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;
/* etc. */
/* change parameter in dataset: */
set_bg_esti_params(BgEstiHandle,0.7,0.7,"fixed",
0.004,0.08,"on",9.0,10,3.25,20.0) ;
/* read the next image in sequence: */
read_image(&Image3,"Image_3") ;
/* estimate the Background: */
run_bg_esti(Image3,&Region3,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region3,WindowHandle) ;
/* etc. */

Result
set_bg_esti_params returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
set_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
update_bg_esti
Module
Foundation

update_bg_esti ( const Hobject PresentImage,


const Hobject UpDateRegion, Hlong BgEstiHandle )

T_update_bg_esti ( const Hobject PresentImage,


const Hobject UpDateRegion, const Htuple BgEstiHandle )

Change the estimated background image.


update_bg_esti overwrites the image stored in the current BgEsti data set with the grayvalues of
PresentImage within the bounds of UpDateRegion. This can be used for a ’hard’ adaptation: image
regions with a sudden change in the (known) background can be adapted very quickly this way.
Attention
The passed image (PresentImage) must have the same type and size as the background image of the current
data set (initialized with create_bg_esti).


Parameter
. PresentImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real
Current image.
. UpDateRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region describing areas to change.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
Example

/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* use the Region and the information of a knowledge base */
/* to calculate the UpDateRegion */
update_bg_esti(Image1,UpdateRegion,BgEstiHandle) ;
/* then read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* etc. */

Result
update_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
update_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti
See also
run_bg_esti, give_bg_esti
Module
Foundation

15.4 Barcode
clear_all_bar_code_models ( )
T_clear_all_bar_code_models ( )

Delete all bar code models and free the allocated memory.
The operator clear_all_bar_code_models deletes all bar code models that were created by
create_bar_code_model. All memory used by the models is freed. After the operator call, all bar code
handles are invalid.
Attention
clear_all_bar_code_models exists solely for the purpose of implementing the “reset program” function-
ality in HDevelop. clear_all_bar_code_models must not be used in any application.


Result
The operator clear_all_bar_code_models returns the value H_MSG_TRUE if all bar code models were
freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_all_bar_code_models is processed completely exclusively without parallelization.
Alternatives
clear_bar_code_model
See also
create_bar_code_model, find_bar_code
Module
Bar Code

clear_bar_code_model ( Hlong BarCodeHandle )


T_clear_bar_code_model ( const Htuple BarCodeHandle )

Delete a bar code model and free the allocated memory.


The operator clear_bar_code_model deletes a bar code model that was created by
create_bar_code_model. All memory used by the model is freed. The handle of the model is
passed in BarCodeHandle, which is invalid after the operator call.
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; Hlong
Handle of the bar code model.
Result
The operator clear_bar_code_model returns the value H_MSG_TRUE if a valid handle was passed and the
referred bar code model can be freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_bar_code_model is processed completely exclusively without parallelization.
Alternatives
clear_all_bar_code_models
See also
find_bar_code
Module
Bar Code

create_bar_code_model ( const char *GenParamNames,


const char *GenParamValues, Hlong *BarCodeHandle )

T_create_bar_code_model ( const Htuple GenParamNames,


const Htuple GenParamValues, Htuple *BarCodeHandle )

Create a model of a bar code reader.


The operator create_bar_code_model creates a generic model for reading all types of supported bar code
symbols. The result of this operator is a handle to the bar code model (BarCodeHandle), which is used for
all further operations on the bar code, like modifying the model, reading a symbol, or accessing the results of the
symbol search.
In general, bar codes will be found and decoded without any additional adjustment of the parameters. There-
fore, GenParamNames and GenParamValues are empty tuples by default. In the case of poor image quality
or abnormal geometric characteristics of the bar code, which requires special parameter settings for a successful
decoding of the bar code symbols, parameters can be adjusted already while creating the bar code model. Alter-
natively, parameters can be changed later on as well by applying the operator set_bar_code_param. For a
detailed description of the available model parameters see set_bar_code_param.


Parameter

. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *


Names of the generic parameters that can be adjusted for the bar code model.
Default Value : []
List of values : GenParamNames ∈ {"element_size_min", "element_size_max", "check_char",
"meas_thresh", "max_diff_orient", "composite_code"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) const char * / Hlong / double
Values of the generic parameters that can be adjusted for the bar code model.
Default Value : []
Suggested values : GenParamValues ∈ {1.5, 2, 3, 8, "present", "absent", 0.1, "none", "CC-A/B"}
. BarCodeHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong *
Handle for using and accessing the bar code model.
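Example (Syntax: HDevelop)

A minimal sketch; the adjusted parameter value is only an example.

* create a model with default settings
create_bar_code_model([], [], BarCodeHandle)
* create a second model for symbols with rather small elements
create_bar_code_model('element_size_min', 1.5, BarCodeHandle2)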
Result
The operator create_bar_code_model returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
create_bar_code_model is processed completely exclusively without parallelization.
Possible Successors
find_bar_code
See also
clear_bar_code_model, clear_all_bar_code_models
Module
Bar Code

find_bar_code ( const Hobject Image, Hobject *SymbolRegions,


Hlong BarCodeHandle, const char *CodeType, char *DecodedDataStrings )

T_find_bar_code ( const Hobject Image, Hobject *SymbolRegions,


const Htuple BarCodeHandle, const Htuple CodeType,
Htuple *DecodedDataStrings )

Detect and read bar code symbols in an image.


The operator find_bar_code finds and reads bar code symbols in a given image (Image) and returns the
decoded data. In one image an arbitrary number of bar codes of a single type can be read. The type of the desired
bar code symbology is given by CodeType. The decoded strings are returned in DecodedDataStrings
and the corresponding bar code regions in SymbolRegions. For a total of n successfully read bar codes,
the indices from 0 to (n-1) can be used as candidate handle in the operators get_bar_code_object and
get_bar_code_result in order to retrieve the desired data of one specific bar code result.
Before calling find_bar_code a bar code model must be created by calling create_bar_code_model.
This operator returns a bar code model BarCodeHandle, which is input to find_bar_code.
The output value DecodedDataStrings contains the decoded string of the symbol for each bar code
result. The contents of the strings conform to the appropriate standard of the symbology. Typically,
DecodedDataStrings contains only data characters. For bar codes with a mandatory check character the
check character is not included in the string. For bar codes with a facultative check character, like for example
Code 39, Codabar, 25 Industrial or 25 Interleaved, the result depends on the value of the ’check_char’ parame-
ter, which can be set in create_bar_code_model or set_bar_code_param. By default ’check_char’
is ’absent’ and the check character is interpreted as a normal data character and hence included in the decoded
string. When ’check_char’ is set to ’present’, the correctness of the check character is tested first. If the check
character is correct the decoded string contains just the data characters; if the check character is not correct the bar
code is graded as unreadable. Accordingly, the symbol region and the decoded string do not appear in the list of
resulting strings (DecodedDataStrings) and in the list of resulting regions (SymbolRegions).
The underlying decoded reference data, including start/stop and check characters, can be queried by using the
get_bar_code_result operator with the option ’decoded_reference’.


The following bar code symbologies are supported: 2/5 Industrial, 2/5 Interleaved, Codabar, Code 39, Code 93, Code
128, EAN-8, EAN-8 Add-On 2, EAN-8 Add-On 5, EAN-13, EAN-13 Add-On 2, EAN-13 Add-On 5, UPC-A,
UPC-A Add-On 2, UPC-A Add-On 5, UPC-E, UPC-E Add-On 2, UPC-E Add-On 5, PharmaCode, RSS-14, RSS-
14 Truncated, RSS-14 Stacked, RSS-14 Stacked Omnidirectional, RSS Limited, RSS Expanded, RSS Expanded
Stacked.
Note that the PharmaCode can be read in forward and backward direction, both yielding a valid result. Therefore,
both strings are returned, concatenated into a single string in DecodedDataStrings and separated by a comma.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2


Input image.
. SymbolRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions of the successfully decoded bar code symbols.
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong
Handle of the bar code model.
. CodeType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Type of the searched barcode.
Default Value : "EAN-13"
List of values : CodeType ∈ {"2/5 Industrial", "2/5 Interleaved", "Codabar", "Code 39", "Code 93", "Code
128", "EAN-13", "EAN-13 Add-On 2", "EAN-13 Add-On 5", "EAN-8", "EAN-8 Add-On 2", "EAN-8
Add-On 5", "UPC-A", "UPC-A Add-On 2", "UPC-A Add-On 5", "UPC-E", "UPC-E Add-On 2", "UPC-E
Add-On 5", "PharmaCode", "RSS-14", "RSS-14 Truncated", "RSS-14 Stacked", "RSS-14 Stacked Omnidir",
"RSS Limited", "RSS Expanded", "RSS Expanded Stacked"}
. DecodedDataStrings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Data strings of all successfully decoded bar codes.
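Example (Syntax: HDevelop)

A minimal sketch of a complete reading cycle; the image file name is only an example.

read_image(Image, 'barcode_image')
create_bar_code_model([], [], BarCodeHandle)
find_bar_code(Image, SymbolRegions, BarCodeHandle, 'EAN-13', DecodedDataStrings)
clear_bar_code_model(BarCodeHandle)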
Result
The operator find_bar_code returns the value H_MSG_TRUE if the given parameters are correct. Otherwise,
an exception will be raised.
Parallelization Information
find_bar_code is reentrant and processed without parallelization.
Possible Predecessors
create_bar_code_model, set_bar_code_param
Possible Successors
get_bar_code_result, get_bar_code_object, clear_bar_code_model
Module
Bar Code

get_bar_code_object ( Hobject *BarCodeObjects, Hlong BarCodeHandle,


const char *CandidateHandle, const char *ObjectName )

T_get_bar_code_object ( Hobject *BarCodeObjects,


const Htuple BarCodeHandle, const Htuple CandidateHandle,
const Htuple ObjectName )

Access iconic objects that were created during the search or decoding of bar code symbols.
With the operator get_bar_code_object, iconic objects created during the last call of the operator
find_bar_code can be accessed. Besides the name of the object (ObjectName), the bar code model
(BarCodeHandle) must be passed to get_bar_code_object. In addition, in CandidateHandle an in-
dex to a single decoded symbol or a single symbol candidate must be passed. Alternatively, CandidateHandle
can be set to ’all’ and then all objects of the decoded symbols or symbol candidates are returned.
Setting ObjectName to ’symbol_regions’ will return regions of successfully decoded symbols. When choosing
’all’ as CandidateHandle, the regions of all decoded symbols are retrieved. The order of the returned objects
is the same as in find_bar_code. If there is a total of n decoded symbols, CandidateHandle can be chosen
between 0 and (n-1) to get the region of the respective decoded symbol.


Setting ObjectName to ’candidate_regions’ will return regions of potential bar codes. If there is a total of n
decoded symbols out of a total of m candidates then CandidateHandle can be chosen between 0 and (m-1).
With CandidateHandle between 0 and (n-1) the original segmented region of the respective decoded symbol
is retrieved. With CandidateHandle between n and (m-1) the region of the potential or undecodable symbol
is returned. In addition, CandidateHandle can be set to ’all’ to retrieve all candidate regions at the same time.
Setting ObjectName to ’scanlines_all’ or ’scanlines_valid’ will return XLD contours representing the partic-
ular detected bars in the scanlines applied on the candidate regions. ’scanlines_all’ represents all scanlines that
find_bar_code would place in order to decode a bar code. ’scanlines_valid’ represents only those scanlines
that could be successfully decoded. For single row bar codes, there will be at least one ’scanlines_valid’ if the
symbol was successfully decoded. There will be no ’scanlines_valid’ if it was not decoded. For stacked bar codes
(e.g. ’RSS-14 Stacked’ and ’RSS Expanded Stacked’) this rule applies similarly, but on a per-symbol-row basis
rather than per symbol. Note that get_bar_code_object returns all XLD contours merged into a single ar-
ray of XLDs and hence there is no way to identify the contours corresponding to separate scanlines. Furthermore,
if ’all’ is used as CandidateHandle, the output object will contain XLD contours for all symbols and in this
case there is no way to identify the contours corresponding to separate symbols as well. However, the contours
still can be used for visualization purposes.
Parameter
. BarCodeObjects (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Objects that are created as intermediate results during the detection or evaluation of bar codes.
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; Hlong
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; const char * / Hlong
Indicates the bar code results or candidates, respectively, for which the data is required.
Default Value : "all"
Suggested values : CandidateHandle ∈ {0, 1, 2, "all"}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the iconic object to return.
Default Value : "symbol_regions"
List of values : ObjectName ∈ {"symbol_regions", "candidate_regions", "scanlines_all", "scanlines_valid"}
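Example (Syntax: HDevelop)

A sketch that retrieves and displays all candidate regions of the last symbol search; the graphics window is
assumed to be open already.

find_bar_code(Image, SymbolRegions, BarCodeHandle, 'EAN-13', DecodedDataStrings)
get_bar_code_object(CandidateRegions, BarCodeHandle, 'all', 'candidate_regions')
dev_display(CandidateRegions)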
Result
The operator get_bar_code_object returns the value H_MSG_TRUE if the given parameters are correct
and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_object is reentrant and processed without parallelization.
Possible Predecessors
find_bar_code
See also
get_bar_code_result
Module
Bar Code

get_bar_code_param ( Hlong BarCodeHandle, const char *GenParamNames,


Hlong *GenParamValues )

T_get_bar_code_param ( const Htuple BarCodeHandle,


const Htuple GenParamNames, Htuple *GenParamValues )

Get one or several parameters that describe the bar code model.
The operator get_bar_code_param allows to query parameters of a bar code model, which are of relevance
for a successful search and decoding of a respective class of bar codes.
The names of the desired parameters are passed in the generic parameter GenParamNames and the corresponding
values are returned in GenParamValues. All of these parameters can be set and changed at any time with the
operator set_bar_code_param.
The following parameters can be queried – ordered by different categories:


Size of the bar code elements:

’element_size_min’: Minimal size of the bar code elements.

’element_size_max’: Maximal size of the bar code elements.

’element_height_min’: Minimal height of the bar code.

Orientation of bar code elements:

’orientation’: Accepted orientation of the decoded bar codes.

’orientation_tol’: Tolerance of the accepted orientation.

Appearance of the bar code in the image:

’meas_thresh’: Threshold for the detection of edges in the bar code region.

’max_diff_orient’: Maximal difference in the orientation of edges in a bar code region. The difference of the
orientation angles, given in degrees, refers to neighboring pixels.

Bar code specific values:

’check_char’: Presence of a check character.

’composite_code’: Presence and type of a 2D composite code appended to the barcode.

Further details on the above parameters can be found with the description of set_bar_code_param operator.
Parameter

. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong


Handle of the bar code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that are to be queried for the bar code model.
Default Value : "element_size_max"
List of values : GenParamNames ∈ {"element_size_min", "element_size_max", "element_height_min",
"orientation", "orientation_tol", "meas_thresh", "max_diff_orient", "check_char", "composite_code"}
. GenParamValues (output_control) . . . . . . . attribute.value(-array) ; (Htuple .) Hlong * / char * / double *
Values of the generic parameters.
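Example (Syntax: HDevelop)

A sketch that queries two of the parameters listed above.

get_bar_code_param(BarCodeHandle, 'element_size_max', ElementSizeMax)
get_bar_code_param(BarCodeHandle, ['meas_thresh','check_char'], Values)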
Result
The operator get_bar_code_param returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_param is reentrant and processed without parallelization.
Possible Predecessors
create_bar_code_model, set_bar_code_param
Possible Successors
set_bar_code_param
Module
Bar Code


get_bar_code_result ( Hlong BarCodeHandle,


const char *CandidateHandle, const char *ResultName,
char *BarCodeResults )

T_get_bar_code_result ( const Htuple BarCodeHandle,


const Htuple CandidateHandle, const Htuple ResultName,
Htuple *BarCodeResults )

Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
The operator get_bar_code_result allows to access alphanumeric results of the find and decode process.
To access a result, first the handle of the bar code model (BarCodeHandle) and the index of the resulting
symbol (CandidateHandle) must be passed. CandidateHandle refers to the results, in the same order that
is returned by operator find_bar_code. CandidateHandle can take numbers from 0 to (n-1), where n is
the total number of successfully decoded symbols. Alternatively, CandidateHandle can be set to ’all’ if all
results are desired. The option ’all’ can be chosen only in the case where the return value of a single result is single
valued.
When ResultName is set to ’decoded_strings’ the decoded result is returned as a string in a human readable
format. This decoded string can be returned for a single result, i.e., CandidateHandle is for example 0, or for
all results simultaneously, i.e., CandidateHandle is set to ’all’. Note that only data characters are contained
in the decoded string. Start/stop characters are excluded, but can be referred to via ’decoded_reference’. For codes
with a facultative check character it depends on the settings whether the check character is returned or not. When
’check_char’ is set to the default value ’absent’ the decoded string takes the check character as a normal data
character. When ’check_char’ is set to ’present’ and if the check character is correct it will be ignored in the string.
If the check character is wrong the resulting string is an empty string.
When choosing ’decoded_reference’ as ResultName the underlying decoded reference data is returned. It com-
prises all original characters of the symbol, i.e., data characters, potential start or stop characters and check charac-
ters if present. For codes taking only numeric data, like, e.g., the EAN/UPC codes, the RSS-14 and RSS Limited
codes, or the 2/5 codes, the decoded reference data takes the same values as the decoded string data including check
characters. For codes with alphanumeric data, like for example code 39 or code 128 the decoded reference data are
the indices of the respective decoding table. For RSS Expanded and RSS Expanded Stacked the reference values
are the ASCII codes of the decoded data, where the special character FNC1 appears with value 10. Furthermore,
for all codes from the RSS family the first reference value represents a linkage flag with a value of 1 if the flag is set
and 0 otherwise. As the decoded reference is a tuple of integers, it can only be queried for a single result,
meaning that CandidateHandle has to be the handle number of the corresponding decoded symbol.
When ResultName is set to ’composite_strings’ or ’composite_reference’, then the decoded string or the refer-
ence data of a RSS Composite component is returned, respectively. For further details see the description of the
parameter ’composite_code’ of set_bar_code_param.
When ResultName is set to ’orientation’, the orientation for the specified result is returned. The ’orientation’ of
a bar code is defined as the angle between its reading direction and the horizontal image axis. The angle is positive
in counter clockwise direction and is given in degrees. It can be in the range of [-180.0 . . . 180.0] degrees. Note
that the reading direction is perpendicular to the bars of the bar code. A single angle is returned when only one
result is specified, e.g., by entering 0 for CandidateHandle. Otherwise, when CandidateHandle is set to
’all’, a tuple containing the angles of all results is returned.
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) const char * / Hlong
Indicates the bar code results or candidates, respectively, for which the data is required.
Default Value : "all"
Suggested values : CandidateHandle ∈ {0, 1, 2, "all"}
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; (Htuple .) const char *
Names of the resulting data to return.
Default Value : "decoded_strings"
Suggested values : ResultName ∈ {"decoded_strings", "decoded_reference", "orientation",
"composite_strings", "composite_reference"}
. BarCodeResults (output_control) . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
List with the results.
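Example (Syntax: HDevelop)

A sketch that retrieves the decoded strings of all symbols and the orientation of the first symbol.

find_bar_code(Image, SymbolRegions, BarCodeHandle, 'EAN-13', DecodedDataStrings)
get_bar_code_result(BarCodeHandle, 'all', 'decoded_strings', AllStrings)
get_bar_code_result(BarCodeHandle, 0, 'orientation', Orientation)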


Result
The operator get_bar_code_result returns the value H_MSG_TRUE if the given parameters are correct
and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_result is reentrant and processed without parallelization.
Possible Predecessors
find_bar_code
See also
get_bar_code_object
Module
Bar Code

set_bar_code_param ( Hlong BarCodeHandle, const char *GenParamNames,


Hlong GenParamValues )

T_set_bar_code_param ( const Htuple BarCodeHandle,


const Htuple GenParamNames, const Htuple GenParamValues )

Set selected parameters of the bar code model.


The operator set_bar_code_param is used to set or change the different parameters of a bar code model in
order to adapt to special properties of the bar codes or to a particular appearance in the image. All parameters can
also be set while creating the bar code model with create_bar_code_model. The current configuration of
the bar code model can be queried with get_bar_code_param.
The following overview lists the different generic parameters with the respective value ranges and default values:
Size of bar code elements:

’element_size_min’: Minimal size of bar code elements, i.e. the minimal width of bars and spaces. For small bar
codes the value should be reduced to 1.5. In the case of huge bar codes the value should be increased, which
results in a shorter execution time and fewer candidates.
Typical values: [1.5 . . . 10.0]
Default: 2.0
’element_size_max’: Maximal size of bar code elements, i.e., the maximal width of bars and spaces. The value of
’element_size_max’ should be adequately low such that two neighboring bar codes are not fused into a single
one. On the other hand, the value should be sufficiently high in order to find the complete bar code region.
Typical values: [4.0 . . . 60.0]
Default: 8.0
’element_height_min’: Minimal bar code height. The default value of this parameter is -1, meaning that the bar
code reader automatically derives a reasonable height from the other parameters. Only for very flat or very
high bar codes a manual adjustment of this parameter can be necessary. In the case of a bar code with a height
of less than 16 pixels the respective height should be set by the user. Note that the minimal value is 8 pixels.
If the bar code is very high, i.e., 70 pixels or more, manually adjusting this parameter to the respective height
can lead to a speed-up of the subsequent finding and reading operation.
Typical values: [-1, 8 . . . 64]
Default: -1

Orientation of bar code elements:

’orientation’: Expected bar code orientation. A potential (candidate) bar code contains bars with similar ori-
entation. The ’orientation’ and ’orientation_tol’ parameters are used to specify the range [’orientation’-
’orientation_tol’, ’orientation’+’orientation_tol’]. find_bar_code processes a candidate bar code only
when the average orientation of its bars lies in this range. If the bar codes are expected to appear only in
certain orientations in the processed images, one can reduce the orientation range adequately. This enables
an early identification of false candidates and hence shorter execution times. This adjustment can be used for
images with a lot of texture, which includes fragments tending to result in false bar code candidates.
The actual orientation angle of a bar code is explained with get_bar_code_result(...,’orientation’,...)
with the only difference that for the early identification of false candidates the reading direction of the bar




codes is ignored, which results in relevant orientation values only in the range [-90.0 . . . 90.0]. The only ex-
ception to this rule is the bar code symbol PharmaCode, which possesses a forward and a backward
reading direction at the same time: here, ’orientation’ can take values in the range [-180.0 . . . 180.0] and the
decoded result is unique corresponding to just one reading direction.
Typical values: [-90.0 . . . 90.0]
Default: 0.0
’orientation_tol’: Orientation tolerance. See the explanation of the ’orientation’ parameter. As explained there, rel-
evant orientation values are only in the range of [-90.0 . . . 90.0], which means that with ’orientation_tol’ =
90 the whole range is spanned. Therefore, valid values for ’orientation_tol’ are only in the range of [0.0
. . . 90.0]. The default value 90.0 means that no restriction on the bar code candidates is performed.
Typical values: [0.0 . . . 90.0]
Default: 90.0

Appearance of the bar code in the image:

’meas_thresh’: The bar-space-sequence of a bar code is determined with a scanline measuring the position of the
edges. Finding these edges requires a threshold. ’meas_thresh’ defines this threshold which is a relative value
with respect to the dynamic range of the scanline pixels. In the case of disturbances in the bar code region or
a high noise level, the value of ’meas_thresh’ should be increased.
Typical values: [0.05 . . . 0.2]
Default: 0.05
’max_diff_orient’: A potential bar code region contains bars, and hence edges, with a similar orientation. The
value of ’max_diff_orient’ denotes the maximal difference in this orientation between adjacent pixels and is given
in degrees. If a bar code is of bad quality with jagged edges, the parameter ’max_diff_orient’ should be set to
larger values. If the bar code is of good quality, ’max_diff_orient’ can be set to smaller values, thus reducing
Typical values: [2 . . . 20]
Default: 10

Bar code specific values:

’check_char’: For bar codes with a facultative check character, this parameter determines whether the check char-
acter is taken into account or not. If the bar code has a check character, ’check_char’ should be set to ’present’
and thus the check character is tested. In that case, a bar code result is returned only if the check sum is cor-
rect. For ’check_char’ set to ’absent’ no check sum is computed and bar code results are returned as long as
they were successfully decoded. Bar codes with a facultative check character are, e.g., Code 39, Codabar, 25
Industrial and 25 Interleaved.
Values: [’absent’, ’present’]
Default: ’absent’
’composite_code’: EAN.UPC bar codes can have an additional 2D Composite code component appended. If
’composite_code’ is set to ’CC-A/B’ the composite component will be found and decoded. By default, ’com-
posite_code’ is set to ’none’ and thus it is disabled. If the searched bar code symbol has no attached composite
component, just the result of the bar code itself is returned by find_bar_code. Composite codes are sup-
ported only for bar codes of the RSS family.
Values: [’none’, ’CC-A/B’]
Default: ’none’

Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong
Handle of the bar code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that shall be adjusted for finding and decoding bar codes.
Default Value : "element_size_max"
List of values : GenParamNames ∈ {"element_size_min", "element_size_max", "element_height_min",
"orientation", "orientation_tol", "meas_thresh", "max_diff_orient", "check_char", "composite_code"}
. GenParamValues (input_control) . . . . . attribute.name(-array) ; (Htuple .) Hlong / const char * / double
Values of the generic parameters that are adjusted for finding and decoding bar codes.
Default Value : 8
Suggested values : GenParamValues ∈ {0.1, 1.5, 2, 8, 32, 45, "present", "absent", "none", "CC-A/B"}
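Example (Syntax: HDevelop)

A minimal sketch, assuming rather large bar code elements and an approximately known reading direction; the concrete values are only examples and have to be adapted to the application.

* create a model and adapt it to large elements and a restricted orientation
create_bar_code_model ([], [], BarCodeHandle)
set_bar_code_param (BarCodeHandle, ’element_size_min’, 4)
set_bar_code_param (BarCodeHandle, ’element_size_max’, 16)
set_bar_code_param (BarCodeHandle, [’orientation’,’orientation_tol’], [0, 10])
* query the current configuration and find the symbol
get_bar_code_param (BarCodeHandle, ’element_size_max’, ElementSizeMax)
find_bar_code (Image, SymbolRegions, BarCodeHandle, ’Code 39’, DecodedDataStrings)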


Result
The operator set_bar_code_param returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
set_bar_code_param is reentrant and processed without parallelization.
Possible Predecessors
create_bar_code_model
Possible Successors
find_bar_code
Module
Bar Code

15.5 Calibration
T_caltab_points ( const Htuple CalTabDescrFile, Htuple *X, Htuple *Y,
Htuple *Z )

Read the mark center points from the calibration plate description file.
caltab_points reads the mark center points from the calibration plate description file CalTabDescrFile
(see gen_caltab) and returns their coordinates in X, Y, and Z. The mark center points are 3D coordinates in
the calibration plate coordinate system and describe the 3D model of the calibration plate. The calibration plate
coordinate system is located in the middle of the surface of the calibration plate; its z-axis points into the calibration
plate, its x-axis to the right, and its y-axis downwards.
The mark center points are typically used as input parameters for the operator camera_calibration. This
operator projects the model points into the image, minimizes the distance between the projected points and the
observed 2D coordinates in the image (see find_marks_and_pose), and from this computes the exact values
for the interior and exterior camera parameters.
Parameter
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinates of the mark center points in the coordinate system of the calibration plate.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinates of the mark center points in the coordinate system of the calibration plate.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Example (Syntax: HDevelop)

* read_image(Image1, ’calib-01’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose(Image1,Caltab1,’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, RCoord1, CCoord1, StartCamPar,
StartPose1, ’all’, CamParam, FinalPose, Errors)




* visualize calibration result


disp_image(Image1, WindowHandle)
set_color(WindowHandle, ’red’)
disp_caltab(’caltab.descr’, CamParam, FinalPose, 1.0)

Result
caltab_points returns H_MSG_TRUE if all parameter values are correct and the file CalTabDescrFile
has been read successfully. If necessary, an exception handling is raised.
Parallelization Information
caltab_points is reentrant and processed without parallelization.
Possible Successors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
project_3d_point, get_line_of_sight, gen_caltab
Module
Foundation

T_cam_mat_to_cam_par ( const Htuple CameraMatrix, const Htuple Kappa,


const Htuple ImageWidth, const Htuple ImageHeight, Htuple *CamParam )

Compute the interior camera parameters from a camera matrix.


cam_mat_to_cam_par computes interior camera parameters from the camera matrix CameraMatrix, the
radial distortion coefficient Kappa, the image width ImageWidth, and the image height ImageHeight. The
camera parameters are returned in CamParam. The parameters CameraMatrix and Kappa can be determined
with stationary_camera_self_calibration. cam_mat_to_cam_par converts this representation
of the internal camera parameters into the representation used by camera_calibration. The conversion can
only be performed if the skew of the image axes is set to 0 in stationary_camera_self_calibration,
i.e., if the parameter ’skew’ is not being determined.
Parameter
. CameraMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
3 × 3 projective camera matrix that determines the interior camera parameters.
. Kappa (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Kappa.
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the images that correspond to CameraMatrix.
Restriction : ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the images that correspond to CameraMatrix.
Restriction : ImageHeight > 0
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Interior camera parameters.
Number of elements : CamParam = 8
Example (Syntax: HDevelop)

* For the input data to stationary_camera_self_calibration, please


* refer to the example for stationary_camera_self_calibration.
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’,’kappa’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)
cam_mat_to_cam_par (CameraMatrix, Kappa, 640, 480, CamParam)


Result
If the parameters are valid, the operator cam_mat_to_cam_par returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
cam_mat_to_cam_par is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
See also
camera_calibration, cam_par_to_cam_mat
Module
Calibration

T_cam_par_to_cam_mat ( const Htuple CamParam, Htuple *CameraMatrix,


Htuple *ImageWidth, Htuple *ImageHeight )

Compute a camera matrix from interior camera parameters.


cam_par_to_cam_mat computes the camera matrix CameraMatrix as well as the image width
ImageWidth, and the image height ImageHeight from the internal camera parameters CamParam.
The internal camera parameters CamParam can be determined with camera_calibration.
cam_par_to_cam_mat converts this representation of the internal camera parameters into the represen-
tation used by stationary_camera_self_calibration. The conversion can only be performed if the
radial distortion coefficient Kappa is 0. If necessary, change_radial_distortion_cam_par must be
used to achieve this.
Parameter
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : CamParam = 8
. CameraMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
3 × 3 projective camera matrix that corresponds to CamParam.
. ImageWidth (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Width of the images that correspond to CameraMatrix.
Assertion : ImageWidth > 0
. ImageHeight (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Height of the images that correspond to CameraMatrix.
Assertion : ImageHeight > 0
Example (Syntax: HDevelop)

* For the input data to camera_calibration, please refer to the


* example for camera_calibration.
camera_calibration (X, Y, Z, Rows, Cols, StartCamParam, StartPoses,
[’all’,’~kappa’], CamParam, FinalPoses, Errors)
cam_par_to_cam_mat (CamParam, CameraMatrix, ImageWidth, ImageHeight)

* Alternatively, the following calls can be used.


camera_calibration (X, Y, Z, Rows, Cols, StartCamParam, StartPoses,
’all’, CamParam, FinalPoses, Errors)
change_radial_distortion_cam_par (’adaptive’, CamParam, 0, CamParamOut)
cam_par_to_cam_mat (CamParamOut, CameraMatrix, ImageWidth, ImageHeight)

Result
If the parameters are valid, the operator cam_par_to_cam_mat returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
cam_par_to_cam_mat is reentrant and processed without parallelization.




Possible Predecessors
camera_calibration
See also
stationary_camera_self_calibration, cam_mat_to_cam_par
Module
Calibration

T_camera_calibration ( const Htuple NX, const Htuple NY,


const Htuple NZ, const Htuple NRow, const Htuple NCol,
const Htuple StartCamParam, const Htuple NStartPose,
const Htuple EstimateParams, Htuple *CamParam, Htuple *NFinalPose,
Htuple *Errors )

Determine all camera parameters by a simultaneous minimization process.


camera_calibration performs the calibration of a camera. For this, known 3D model points (with coordi-
nates NX, NY, NZ) are projected into the image and the sum of the squared distances between these projections and
the corresponding image points (with coordinates NRow, NCol) is minimized.
If the minimization converges, the exact interior (CamParam) and exterior (NFinalPose) camera parameters
are determined by this minimization algorithm. The parameters StartCamParam and NStartPose are used as
initial values for the minimization process. Since this algorithm simultaneously handles correspondences between
image and model points from different images, it is also called multi-image calibration.
In general, camera calibration means the exact determination of the parameters that model the (optical) projection
of any 3D world point Pw into a (sub-)pixel [r,c] in the image. This is important, if the original 3D pose of an
object has to be computed using an image (e.g., measuring of industrial parts).

Used 3D camera model


The projection consists of multiple steps: First, the point pw is transformed from world into camera coordinates
(points as homogeneous vectors, compare affine_trans_point_3d):

$$\begin{pmatrix} p^c \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} p^w \\ 1 \end{pmatrix}$$

Then, the point is projected into the image plane, i.e., onto the sensor chip.
For the modeling of this projection process that is determined by the used combination of camera, lens, and frame
grabber, HALCON provides the following three 3D camera models:

• Area scan pinhole camera:


The combination of an area scan camera with a lens that effects a perspective projection and that may show
radial distortions.
• Area scan telecentric camera:
The combination of an area scan camera with a telecentric lens that effects a parallel projection and that may
show radial distortions.
• Line scan pinhole camera:
The combination of a line scan camera with a lens that effects a perspective projection and that may show
radial distortions.

For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
 
$$p^c = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

$$u = \text{Focus} \cdot \frac{x}{z} \qquad \text{and} \qquad v = \text{Focus} \cdot \frac{y}{z}$$
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
 
$$p^c = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

$$u = x \qquad \text{and} \qquad v = y$$

The following equations perform the radial distortion of the points:

$$\tilde{u} = \frac{2u}{1 + \sqrt{1 - 4\kappa(u^2 + v^2)}} \qquad \text{and} \qquad \tilde{v} = \frac{2v}{1 + \sqrt{1 - 4\kappa(u^2 + v^2)}}$$

Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:

$$c = \frac{\tilde{u}}{S_x} + C_x \qquad \text{and} \qquad r = \frac{\tilde{v}}{S_y} + C_y$$
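Applied in sequence, the above equations map a point from camera coordinates to pixel coordinates. In HDevelop this projection (including the preceding transformation from world into camera coordinates) is performed by the operator project_3d_point; the following minimal sketch assumes that CamParam and Pose stem from a previous camera calibration and uses arbitrary example coordinates:

* project a 3D world point into the image (CamParam and Pose assumed given)
project_3d_point (0.05, 0.02, 0.0, CamParam, Pose, Row, Column)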

For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions for this motion are made:

1. the camera moves with constant velocity along a straight line


2. the orientation of the camera is constant
3. the motion is equal for all images

The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves
relative to the object, i.e., each image line has been imaged from a different position. This means that there would
be an individual pose for each image line. To make things easier, in HALCON, all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming
 
$$p^c = \begin{pmatrix} x \\ y \\ z \end{pmatrix},$$

the following set of equations must be solved for m, ũ, and t:

$$\begin{aligned} m \cdot D \cdot \tilde{u} &= x - t \cdot V_x \\ -m \cdot D \cdot p_v &= y - t \cdot V_y \\ m \cdot \text{Focus} &= z - t \cdot V_z \end{aligned}$$




with

$$D = \frac{1}{1 + \kappa\,(\tilde{u}^2 + p_v^2)} \qquad\qquad p_v = S_y \cdot C_y$$

This already includes the compensation for radial distortions.


Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:


$$c = \frac{\tilde{u}}{S_x} + C_x \qquad \text{and} \qquad r = t$$

Camera parameters
The total of 14 camera parameters for area scan cameras and 17 camera parameters for line scan cameras, respec-
tively, can be divided into the interior and exterior camera parameters:

Interior camera parameters: These parameters describe the characteristics of the used camera, especially the
dimension of the sensor itself and the projection properties of the used combination of lens, camera, and
frame grabber.
For area scan cameras, the above described camera model contains the following 8 parameters:
Focus: Focal length of the lens. 0 for telecentric lenses.
Kappa (κ): Distortion coefficient to model the pin-cushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor. For pinhole cameras, it corresponds to the horizontal distance between two neighbor-
ing cells on the sensor. For telecentric cameras, it represents the horizontal size of a pixel in world
coordinates. Attention: This value increases, if the image is subsampled!
Sy : Scale factor. For pinhole cameras, it corresponds to the vertical distance between two neighboring
cells on the sensor. For telecentric cameras, it respresents the vertical size of a pixel in world coordi-
nates. Since in most cases the image signal is sampled line-synchronously, this value is determined
by the dimension of the sensor and needn’t be estimated for pinhole cameras by the calibration
process. Attention: This value increases, if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Row coordinate of the image center point (center of the radial distortion).
ImageWidth: Width of the sampled image. Attention: This value decreases, if the image is subsam-
pled!
ImageHeight: Height of the sampled image. Attention: This value decreases, if the image is subsam-
pled!
For line scan cameras, the above described camera model contains the following 11 parameters:
Focus: Focal length of the lens.
Kappa: Distortion coefficient to model the pin-cushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor, corresponds to the horizontal distance between two neighboring cells on the sensor.
Attention: This value increases if the image is subsampled!
Sy : Scale factor. During the calibration, it appears only in the form pv = Sy · Cy . pv describes the
distance of the image center point from the sensor line in [meters]. Attention: This value increases
if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Distance of the image center point (center of the radial distortion) from the sensor line in [scanlines].
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsam-
pled!
Vx : X-component of the motion vector.
Vy : Y-component of the motion vector.
Vz : Z-component of the motion vector.
Note that the term focal length is not quite correct and would be appropriate only for an infinite object
distance. To simplify matters, the term focal length is always used, even if the image distance is meant.


Exterior camera parameters: These 6 parameters describe the 3D pose, i.e., the position and orientation, of the
world coordinate system relative to the camera coordinate system. For line scan cameras, the pose of the
world coordinate system refers to the camera coordinate system of the first image line. Three parameters
describe the translation, three the rotation. See create_pose for more information about 3D poses. Note
that camera_calibration operates with all types of 3D poses for NStartPose.
When using the standard calibration plate, the world coordinate system is defined by the coordinate system
of the calibration plate which is located in the middle of the surface of the calibration plate, its z-axis pointing
into the calibration plate, its x-axis to the right, and its y-axis downwards.

Additional information about the calibration process


The use of camera_calibration leads to some questions, which are dealt with in the following sections:

How to generate an appropriate calibration plate? The simplest method to determine the interior parameters of
a camera is the use of the standard calibration plate as generated by the operator gen_caltab. You can
obtain high-precision calibration plates in various sizes and materials from your local distributor. In case of
small distances between object and lens it may be sufficient to print the calibration pattern by a laser printer
and to mount it on cardboard. Otherwise – especially when using a wide-angle lens – it is possible to print
the PostScript file on a large ink-jet printer and to mount it on an aluminum plate. It is very important that
the mark coordinates in the calibration plate description file correspond to the real ones on the calibration
plate with high accuracy. Thus, the calibration plate description file has to be modified in accordance with
the measurement of the calibration plate!
How to take a set of suitable images? If you use the standard calibration plate, you can proceed in the following
way: With the combination of lens (fixed distance!), camera, and frame grabber to be calibrated a set of
images of the calibration plate has to be taken, see open_framegrabber and grab_image. The
following items have to be considered:
• At least a total of 10 to 20 images should be taken into account.
• The calibration plate has to be completely visible (incl. border!).
• Reflections etc. on the calibration plate should be avoided.
• Within the set of images the calibration plate should appear in different positions and orientations: Once
left in the image, once right, once (left and right) at the bottom, once (left or right) at the top, from
different distances, etc. In doing so, the calibration plate should be rotated around its x- and/or y-axis, so that the
perspective distortions of the calibration pattern are clearly visible. Thus, the exterior camera parameters
(camera pose with regard to the calibration plate) should take on a large variety of different values!
• The calibration plate should fill at least a quarter of the whole image to ensure the robust detection of the
marks.
How to extract the calibration marks in the images? If a standard calibration plate is used, you can use the
operators find_caltab and find_marks_and_pose to determine the coordinates of the calibration
marks in each image and to compute a rough estimate for the exterior camera parameters. The concatenation
of these values can directly be used as initial values for the exterior camera parameters (NStartPose) in
camera_calibration.
Obviously, images in which the segmentation of the calibration plate ( find_caltab) has failed or the
calibration marks haven’t been determined successfully by find_marks_and_pose should not be used.
How to find suitable initial values for the interior camera parameters? The operators
find_marks_and_pose (determination of initial values for the exterior camera parameters) and
camera_calibration require initial values for the interior camera parameters. These parameters can be
provided by an appropriate text file (see read_cam_par) which can be generated by write_cam_par
or can be edited manually.
For area scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the used lens, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells depends on the dimen-
sion of the used chip of the camera (see technical specifications of the camera). Generally, common
chips are either 1/3”-Chips (e.g., SONY XC-73, SONY XC-777), 1/2”-Chips (e.g., SONY XC-999,
Panasonic WV-CD50), or 2/3”-Chips (e.g., SONY DXC-151, SONY XC-77). Notice: The value of
Sx increases if the image is subsampled! Appropriate initial values are:




              Full image (768*576)   Subsampling (384*288)
1/3"-Chip     0.0000055 m            0.0000110 m
1/2"-Chip     0.0000086 m            0.0000172 m
2/3"-Chip     0.0000110 m            0.0000220 m

The value for Sx is calibrated, since the video signal of a camera normally isn’t sampled pixel-
synchronously.
Sy : Since most off-the-shelf cameras have square pixels, the same values for Sy are valid as for Sx .
In contrast to Sx the value for Sy will not be calibrated for pinhole cameras, because the video
signal of a camera normally is sampled line-synchronously. Thus, the initial value is equal to the
final value. Appropriate initial values are:
Full image (768*576) Subsampling (384*288)
1/3"-Chip 0.0000055 m 0.0000110 m
1/2"-Chip 0.0000086 m 0.0000172 m
2/3"-Chip 0.0000110 m 0.0000220 m

Cx and Cy : The initial values for the coordinates of the image center are half the image width and half the
image height. Notice: The values of Cx and Cy decrease if the image is subsampled! Appropriate initial
values are:
Full image (768*576) Subsampling (384*288)
Cx 384.0 192.0
Cy 288.0 144.0

ImageWidth and ImageHeight: These two parameters are determined by the used frame grabber
and therefore are not calibrated. Appropriate initial values are, for example:
Full image (768*576) Subsampling (384*288)
ImageWidth 768 384
ImageHeight 576 288

For line scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the used lens, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells can be taken from the
technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m, and 14e-6 m.
Notice: The value of Sx increases if the image is subsampled!
Sy : The initial value for the size of a cell in the direction perpendicular to the sensor line can also be
taken from the technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m,
and 14e-6 m. Notice: The value of Sy increases if the image is subsampled! In contrast to Sx , the
value for Sy will NOT be calibrated for line scan cameras, because it appears only in the form pv =
Sy · Cy . Therefore, it cannot be determined separately.
Cx : The initial value for the x-coordinate of the image center is half the image width. Notice: The
value of Cx decreases if the image is subsampled! Appropriate initial values are:
Image width: 1024 2048 4096 8192
Cx: 512 1024 2048 4096

Cy : The initial value for the y-coordinate of the image center can normally be set to 0.
ImageWidth and ImageHeight: These two parameters are determined by the used frame grabber and
therefore are not calibrated.
Vx , Vy , Vz : The initial values for the x-, y-, and z-component of the motion vector depend on the image
acquisition setup. Assuming a camera that looks perpendicularly onto a conveyor belt, and that is
rotated around its optical axis such that the sensor line is perpendicular to the conveyor belt, i.e., the
y-axis of the camera coordinate system is parallel to the conveyor belt, the initial values are Vx = Vz =
0. The initial value for Vy can then be determined, e.g., from a line scan image of an object with
known size (e.g., calibration plate, ruler):

Vy = l[m]/l[row]


with:
l[m] = Length of the object in object coordinates [meter]
l[row] = Length of the object in image coordinates [rows]

If, compared to the above setup, the camera is rotated 30 degrees around its optical axis, i.e., around
the z-axis of the camera coordinate system, the above determined initial values must be changed as
follows:

$$V_x^z = \sin(30^\circ) \cdot V_y \qquad V_y^z = \cos(30^\circ) \cdot V_y \qquad V_z^z = V_z = 0$$

If, compared to the first setup, the camera is rotated -20 degrees around the x-axis of the camera
coordinate system, the following initial values result:

$$V_x^x = V_x = 0 \qquad V_y^x = \cos(-20^\circ) \cdot V_y \qquad V_z^x = \sin(-20^\circ) \cdot V_y$$

The quality of the initial values for Vx , Vy , and Vz is crucial for the success of the whole calibration.
If they are not precise enough, the calibration may fail. A short numerical sketch is given below.
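The following minimal numerical sketch illustrates the computation of such initial values; the object length of 0.2 m, the 2000 image rows, and the rotation angle of 30 degrees are only assumptions and have to be replaced by measured values:

* a ruler of 0.2 m length spans 2000 rows in a line scan image
VyInit := 0.2 / 2000
* camera additionally rotated by 30 degrees around its optical axis (z-axis)
VxRot := sin(rad(30)) * VyInit
VyRot := cos(rad(30)) * VyInit
VzRot := 0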
Which camera parameters have to be estimated? The input parameter EstimateParams is used to select
which camera parameters to estimate. Usually this parameter is set to ’all’, i.e., all 6 exterior camera pa-
rameters (translation and rotation) and all interior camera parameters are determined. If the interior camera
parameters already have been determined (e.g., by a previous call to camera_calibration) it is often
desired to only determine the pose of the world coordinate system in camera coordinates (i.e., the exterior
camera parameters). In this case, EstimateParams can be set to ’pose’. This has the same effect as
EstimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, EstimateParams
contains a tuple of strings indicating the combination of parameters to estimate. In addition, parameters can
be excluded from estimation by using the prefix ~. For example, the values [’pose’,’~transx’] have the same
effect as [’alpha’,’beta’,’gamma’,’transy’,’transz’]. For instance, [’all’,’~focus’] determines all internal and
external parameters except the focus. The prefix ~ can be used with all parameter values except
’all’.
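A minimal sketch of a pose-only estimation with already known interior camera parameters; the correspondences NX, NY, NZ, NRow, NCol, the known parameters KnownCamParam, and the initial pose NStartPose are assumed to be available (e.g., from find_marks_and_pose or from manually determined landmarks):

* estimate only the exterior camera parameters, keeping the interior ones fixed
camera_calibration (NX, NY, NZ, NRow, NCol, KnownCamParam, NStartPose, ’pose’,
                    CamParamOut, NFinalPose, Errors)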
What is the order within the individual parameters? The length of the tuple NStartPose corresponds to the
number of calibration images, e.g., using 15 images leads to a length of the tuple NStartPose equal to
15 · 7 = 105 (15 times the 7 exterior camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.
This fixed number of calibration images has to be considered within the tuples with the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples with the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
The 3D model points can be read from a calibration plate description file using the operator
caltab_points. Initial values for the poses of the calibration plate can be determined by applying
find_marks_and_pose for each image. The tuple NStartPose is set by the concatenation of all
these poses.
What is the meaning of the output parameters? If the camera calibration process is finished successfully, i.e.,
the minimization process has converged, the output parameters CamParam and NFinalPose contain the
computed exact values for the interior and exterior camera parameters. The length of the tuple NFinalPose
corresponds to the length of the tuple NStartPose.
The representation types of NFinalPose correspond to the representation type of the first tuple of
NStartPose (see create_pose). You can convert the representation type by convert_pose_type.
The computed average errors (Errors) give an impression of the accuracy of the calibration. The error
values (deviations in x and y coordinates) are measured in pixels.




Must I use a planar calibration object? No. The operator camera_calibration is designed in a way that
the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences, see the above para-
graph explaining the order of the single parameters.
Thus, it makes no difference how the required 3D model marks and the corresponding extracted 2D marks are
determined. On one hand, it is possible to use a 3D calibration pattern; on the other hand, you can also use any
characteristic points (natural landmarks) with known position in the world. By setting EstimateParams
to ’pose’, it is thus possible to compute the pose of an object in camera coordinates! For this, at least three
3D/2D-correspondences are necessary as input. NStartPose can, e.g., be generated directly as shown in
the program example for create_pose.

Attention
The minimization process of the calibration depends on the initial values of the interior (StartCamParam) and
exterior (NStartPose) camera parameters. The computed average errors Errors give an impression of the
accuracy of the calibration. The errors (deviations in x and y coordinates) are measured in pixels.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all x coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all y coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all z coordinates of the calibration marks (in meters).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Initial values for the interior camera parameters.
. NStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Ordered tuple with all initial values for the exterior camera parameters.
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char * / Hlong
Camera parameters to be estimated.
Default Value : "all"
List of values : EstimateParams ∈ {"all", "pose", "alpha", "beta", "gamma", "transx", "transy", "transz",
"focus", "kappa", "cx", "cy", "sx", "sy", "vx", "vy", "vz"}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Interior camera parameters.
. NFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Ordered tuple with all exterior camera parameters.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Average error distances in pixels.
Example (Syntax: HDevelop)

* read calibration images


read_image(Image1, ’calib-01’)
read_image(Image2, ’calib-02’)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’,
[0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576],
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’,


[0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576],


128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’,
[0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576],
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3],
[0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576],
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write interior camera parameters to file
write_cam_par(CamParam, ’campar.dat’)

Result
camera_calibration returns H_MSG_TRUE if all parameter values are correct and the desired camera pa-
rameters have been determined by the minimization algorithm. If necessary, an exception handling is raised.
Parallelization Information
camera_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration

T_change_radial_distortion_cam_par ( const Htuple Mode,


const Htuple CamParIn, const Htuple Kappa, Htuple *CamParOut )

Determine new camera parameters in accordance to the specified radial distortion.


change_radial_distortion_cam_par modifies the interior camera parameters in accordance to the spec-
ified radial distortion Kappa. Via Mode one of the following modes can be selected:

• ’fixed’: Only Kappa is modified, the other interior camera parameters remain unchanged. In general, this
leads to a change of the visible part of the scene.
• ’fullsize’: The scale factors Sx and Sy and the image center point [Cx , Cy ]T are modified in order to preserve
the visible part of the scene. Thus, all points visible in the original image are also visible in the modified
(rectified) image. In general, this leads to undefined pixels in the modified image.
• ’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. Similarly to ’fullsize’, the scale factors and the image center point
are modified.
• ’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in
the modified (rectified) image, i.e., the scale factors Sx and Sy and the image center point [Cx , Cy ]T are
modified. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’
additionally the size of the modified image is increased such that the image resolution does not decrease in
any part of the image.




In all modes the radial distortion coefficient κ in CamParOut is set to Kappa. The transformation of a pixel in
the modified image into the image plane using CamParOut results in the same point as the transformation of a
pixel in the original image via CamParIn.
Parameter
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Mode
Default Value : "adaptive"
Suggested values : Mode ∈ {"fullsize", "adaptive", "fixed", "preserve_resolution"}
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameters (original).
. Kappa (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Desired radial distortion.
Default Value : 0.0
. CamParOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double * / Hlong *
Interior camera parameters (modified).
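Example (Syntax: HDevelop)

A minimal sketch; it assumes that CamParam was determined by a previous call to camera_calibration.

* derive the parameters of a (virtual) distortion-free lens
change_radial_distortion_cam_par (’adaptive’, CamParam, 0, CamParamRect)
* CamParamRect can now be used with change_radial_distortion_image or
* change_radial_distortion_contours_xld to rectify images or contours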
Result
change_radial_distortion_cam_par returns H_MSG_TRUE if all parameter values are correct. If nec-
essary, an exception handling is raised.
Parallelization Information
change_radial_distortion_cam_par is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par
Possible Successors
change_radial_distortion_image, change_radial_distortion_contours_xld,
gen_radial_distortion_map
See also
camera_calibration, read_cam_par, change_radial_distortion_image,
change_radial_distortion_contours_xld
Module
Calibration

T_change_radial_distortion_contours_xld ( const Hobject Contours,


Hobject *ContoursRectified, const Htuple CamParIn,
const Htuple CamParOut )

Change the radial distortion of contours.


change_radial_distortion_contours_xld changes the radial distortion of the input contours
Contours in accordance to the interior camera parameters CamParIn and CamParOut. Each subpixel of
an input contour is transformed into the image plane using CamParIn and subsequently projected into a subpixel
of the corresponding contour in ContoursRectified using CamParOut.
If CamParOut was computed via change_radial_distortion_cam_par, the contours
ContoursRectified are equivalent to Contours obtained with a lens with a modified radial distor-
tion. If κ is 0 the contours are rectified. A subsequent pose estimation (determination of the exterior camera
parameters) is not affected by this operation.
Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Original contours.
. ContoursRectified (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Resulting contours with modified radial distortion.
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameter for Contours.
. CamParOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameter for ContoursRectified.
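Example (Syntax: HDevelop)

A minimal sketch, assuming that CamParam stems from camera_calibration and CamParamRect from change_radial_distortion_cam_par (with Kappa = 0); the edge filter parameters are examples only.

* extract subpixel edges and remove their radial distortion
edges_sub_pix (Image, Edges, ’canny’, 1.5, 20, 40)
change_radial_distortion_contours_xld (Edges, EdgesRectified,
                                       CamParam, CamParamRect)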


Parallelization Information
change_radial_distortion_contours_xld is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, gen_contours_skeleton_xld, edges_sub_pix,
smooth_contours_xld
Possible Successors
gen_polygons_xld, smooth_contours_xld
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_image
Module
Calibration

T_change_radial_distortion_image ( const Hobject Image,


const Hobject Region, Hobject *ImageRectified, const Htuple CamParIn,
const Htuple CamParOut )

Change the radial distortion of an image.


change_radial_distortion_image changes the radial distortion of the input image Image in accordance
to the interior camera parameters CamParIn and CamParOut. Each pixel of the output image that lies within the
region Region is transformed into the image plane using CamParOut and subsequently projected into a subpixel
of Image using CamParIn. The resulting gray value is determined by bilinear interpolation. If the subpixel is
outside of Image, the corresponding pixel in ImageRectified is set to ’black’ and eliminated from the image
domain.
If the gray values of all pixels in the output image shall be calculated, it is sufficient to pass an empty object in
Region (which must be previously generated by, for example, using gen_empty_obj). This is especially
useful if the size of the output image differs from the size of the input image, and hence, it is not possible to simply
pass the region of the input image in Region.
If CamParOut was computed via change_radial_distortion_cam_par, ImageRectified is
equivalent to Image obtained with a lens with a modified radial distortion. If κ is 0 the image is rectified. A
subsequent pose estimation (determination of the exterior camera parameters) is not affected by this operation.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2


Original image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region of interest in ImageRectified.
. ImageRectified (output_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Resulting image with modified radial distortion.
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameter for Image.
. CamParOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameter for ImageRectified.
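Example (Syntax: HDevelop)

A minimal sketch, assuming that CamParam stems from camera_calibration and CamParamRect from change_radial_distortion_cam_par (with Kappa = 0).

* rectify the complete image; the empty region selects all output pixels
gen_empty_obj (EmptyRegion)
change_radial_distortion_image (Image, EmptyRegion, ImageRectified,
                                CamParam, CamParamRect)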
Result
change_radial_distortion_image returns H_MSG_TRUE if all parameter values are correct.
If the input is empty (no input image is available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
change_radial_distortion_image is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, read_image, grab_image
Possible Successors
edges_image, threshold




See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld
Module
Calibration

T_contour_to_world_plane_xld ( const Hobject Contours,


Hobject *ContoursTrans, const Htuple CamParam, const Htuple WorldPose,
const Htuple Scale )

Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator contour_to_world_plane_xld transforms contour points given in Contours into the plane
z=0 in a world coordinate system and returns the 3D contour points in ContoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose. In CamParam
you must pass the interior camera parameters (see write_cam_par for the sequence of the parameters and the
underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour ContoursTrans are obtained.
Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Input XLD contours to be transformed in image coordinates.
. ContoursTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Transformed XLD contours in world coordinates.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or dimension
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
Example (Syntax: HDevelop)

* perform camera calibration (with standard calibration plate)


camera_calibration(NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, ’all’,
FinalCamParam, NFinalPose, Errors)
* world coordinate system is defined by calibration plate in first image
FinalPose1 := NFinalPose[0:6]
* compensate thickness of plate
set_origin_pose(FinalPose1, 0, 0, 0.0006, WorldPose)
* transform contours into world coordinate system (unit mm)
contour_to_world_plane_xld(Contours, ContoursTrans,
FinalCamParam, WorldPose, ’mm’)


Result
contour_to_world_plane_xld returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
contour_to_world_plane_xld is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
image_points_to_world_plane
Module
Calibration

create_caltab ( double Width, const char *CalTabDescrFile,


const char *CalTabFile )

T_create_caltab ( const Htuple Width, const Htuple CalTabDescrFile,


const Htuple CalTabFile )

Generate a calibration plate description file and a corresponding PostScript file. (obsolete)
create_caltab has been replaced with the operator gen_caltab. The operator is contained and described
for compatibility reasons only.
create_caltab generates the description of a standard calibration plate for HALCON. This calibration plate
consists of 49 black circular marks on a white plane which are surrounded by a black frame. The parameter Width
sets the width (equal to the height) of the whole calibration plate in meters. Using a width of 0.8 m, the distance
between two neighboring marks becomes 10 cm, and the mark radius and the frame width are set to 2.5 cm. The
calibration plate coordinate system is located in the middle of the surface of the calibration plate, its z-axis points
into the calibration plate, its x-axis to the right, and its y-axis downwards.
The file CalTabDescrFile contains the calibration plate description, e.g., the number of rows and columns
of the calibration plate, the geometry of the surrounding frame (see find_caltab), and the coordinates and
the radius of all calibration plate marks given in the calibration plate coordinate system. A file generated by
create_caltab looks like the following (comments are marked by a ’#’ at the beginning of a line):

#
# Description of the standard calibration plate
# used for the camera calibration in HALCON
#

# 7 rows X 7 columns
# Distance between mark centers [meter]: 0.1

# Number of marks per row


r 7

# Number of marks per column


c 7

# Quadratic frame (with outer and inner border) around calibration plate
w 0.025
o -0.41 0.41 0.41 -0.41
i -0.4 0.4 0.4 -0.4

# calibration marks: x y radius [Meter]

# calibration marks at y = -0.3 m


-0.3 -0.3 0.025



15.5. CALIBRATION 1111

-0.2 -0.3 0.025


-0.1 -0.3 0.025
0 -0.3 0.025
0.1 -0.3 0.025
0.2 -0.3 0.025
0.3 -0.3 0.025

# calibration marks at y = -0.2 m


-0.3 -0.2 0.025
-0.2 -0.2 0.025
-0.1 -0.2 0.025
0 -0.2 0.025
0.1 -0.2 0.025
0.2 -0.2 0.025
0.3 -0.2 0.025

# calibration marks at y = -0.1 m


-0.3 -0.1 0.025
-0.2 -0.1 0.025
-0.1 -0.1 0.025
0 -0.1 0.025
0.1 -0.1 0.025
0.2 -0.1 0.025
0.3 -0.1 0.025

# calibration marks at y = 0 m
-0.3 0 0.025
-0.2 0 0.025
-0.1 0 0.025
0 0 0.025
0.1 0 0.025
0.2 0 0.025
0.3 0 0.025

# calibration marks at y = 0.1 m


-0.3 0.1 0.025
-0.2 0.1 0.025
-0.1 0.1 0.025
0 0.1 0.025
0.1 0.1 0.025
0.2 0.1 0.025
0.3 0.1 0.025

# calibration marks at y = 0.2 m


-0.3 0.2 0.025
-0.2 0.2 0.025
-0.1 0.2 0.025
0 0.2 0.025
0.1 0.2 0.025
0.2 0.2 0.025
0.3 0.2 0.025

# calibration marks at y = 0.3 m


-0.3 0.3 0.025
-0.2 0.3 0.025
-0.1 0.3 0.025
0 0.3 0.025
0.1 0.3 0.025
0.2 0.3 0.025

0.3 0.3 0.025

The file CalTabFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Width of the calibration plate in meters.
Default Value : 0.8
Suggested values : Width ∈ {1.2, 0.8, 0.6, 0.4, 0.2, 0.1}
Recommended Increment : 0.1
Restriction : 0.0 < Width
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CalTabFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name of the PostScript file.
Default Value : "caltab.ps"
Example (Syntax: HDevelop)

* create calibration plate with width = 80 cm


create_caltab(0.8, ’caltab.descr’, ’caltab.ps’)

Result
create_caltab returns H_MSG_TRUE if all parameter values are correct and both files have been written
successfully. If necessary, an exception handling is raised.
Parallelization Information
create_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
gen_caltab, find_caltab, find_marks_and_pose, camera_calibration, disp_caltab,
sim_caltab
Module
Foundation

T_disp_caltab ( const Htuple WindowHandle,
                const Htuple CalTabDescrFile, const Htuple CamParam,
                const Htuple CaltabPose, const Htuple ScaleFac )

Project and visualize the 3D model of the calibration plate in the image.
disp_caltab is used to visualize the calibration marks and the connecting lines between the marks of the
used calibration plate (CalTabDescrFile) in the window specified by WindowHandle. Additionally, the
x- and y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of
the calibration plate is projected into the image plane using the interior (CamParam) and exterior camera pa-
rameters (CaltabPose, i.e., the pose of the calibration plate in camera coordinates). The underlying camera
model (pinhole, telecentric, or line scan camera with radial distortion) is described in write_cam_par and
camera_calibration.
Typically, disp_caltab is used to verify the result of the camera calibration (see camera_calibration)
by superimposing it onto the original image. The current line width can be set by set_line_width, the current
color can be set by set_color. Additionally, the font type of the labels of the coordinate axes can be set by
set_font.
The parameter ScaleFac influences the number of supporting points to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points, if the image part in the output window is
displayed with magnification (see set_part).
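For instance, a minimal sketch (in HDevelop syntax; the window handle, image, and parameter values are only assumptions for illustration) of displaying a magnified image part together with a correspondingly increased ScaleFac might look like this:

* display only a magnified part of the image ...
set_part(WindowHandle, 100, 100, 300, 300)
disp_image(Image, WindowHandle)
* ... and increase ScaleFac so that the projected mark contours stay smooth
disp_caltab(WindowHandle, 'caltab.descr', CamParam, FinalPose, 3.0)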
Parameter

. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong


Window in which the calibration plate should be visualized.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CaltabPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Exterior camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements : 7
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Scaling factor for the visualization.
Default Value : 1.0
Suggested values : ScaleFac ∈ {0.5, 1.0, 2.0, 3.0}
Recommended Increment : 0.05
Restriction : 0.0 < ScaleFac
Example (Syntax: HDevelop)

* read calibration image


read_image(Image1, ’calib-01’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, RCoord1, CCoord1, StartCamPar,
StartPose1, 11, CamParam, FinalPose, Errors)
* visualize calibration result
disp_image(Image1, WindowHandle)
set_color(WindowHandle, ’red’)
disp_caltab(WindowHandle, ’caltab.descr’, CamParam, FinalPose, 1.0)

Result
disp_caltab returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
disp_caltab is reentrant, local, and processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par, read_pose
See also
find_marks_and_pose, camera_calibration, sim_caltab, write_cam_par,
read_cam_par, create_pose, write_pose, read_pose, project_3d_point,
get_line_of_sight
Module
Foundation

find_caltab ( const Hobject Image, Hobject *Caltab,
              const char *CalTabDescrFile, Hlong SizeGauss, Hlong MarkThresh,
              Hlong MinDiamMarks )

T_find_caltab ( const Hobject Image, Hobject *Caltab,
                const Htuple CalTabDescrFile, const Htuple SizeGauss,
                const Htuple MarkThresh, const Htuple MinDiamMarks )

Segment the standard calibration plate region in the image.


find_caltab is used to determine the region of a plane calibration plate with circular marks in the input image
Image. First the input image is smoothed (see gauss_image); the size of the used filter mask is given by
SizeGauss. Afterwards, a thresholding operator (see threshold) with minimum gray value MarkThresh
and maximum gray value 255 is applied. Among the extracted connected regions, the most convex region with an
almost correct number of holes (corresponding to the dark marks of the calibration plate) is selected. Holes with
a diameter smaller than the expected size of the marks MinDiamMarks are eliminated to reduce the impact of
noise. The number of marks is read from the calibration plate description file CalTabDescrFile. The complete
explanation of this file can be found within the description of gen_caltab.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2


Input image.
. Caltab (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Output region.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. SizeGauss (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter size of the Gaussian.
Default Value : 3
List of values : SizeGauss ∈ {0, 3, 5, 7, 9, 11}
. MarkThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Threshold value for mark extraction.
Default Value : 112
List of values : MarkThresh ∈ {48, 64, 80, 96, 112, 128, 144, 160}
. MinDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Expected minimal diameter of the marks on the calibration plate.
Default Value : 5
List of values : MinDiamMarks ∈ {3, 5, 9, 15, 30, 50, 70}
Example (Syntax: HDevelop)

* read calibration image


read_image(Image, ’calib-01’)
* find calibration pattern
find_caltab(Image, Caltab, ’caltab.descr’, 3, 112, 5)

Result
find_caltab returns H_MSG_TRUE if all parameter values are correct and an image region is
found. The behavior in case of empty input (no image given) can be set via set_system
(’no_object_result’,<Result>) and the behavior in case of an empty result region via set_system
(’store_empty_region’,<true/false>). If necessary, an exception handling is raised.
Parallelization Information
find_caltab is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
find_marks_and_pose
See also
find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
caltab_points, gen_caltab
Module
Foundation

T_find_marks_and_pose ( const Hobject Image,
                        const Hobject CalTabRegion, const Htuple CalTabDescrFile,
                        const Htuple StartCamParam, const Htuple StartThresh,
                        const Htuple DeltaThresh, const Htuple MinThresh, const Htuple Alpha,
                        const Htuple MinContLength, const Htuple MaxDiamMarks, Htuple *RCoord,
                        Htuple *CCoord, Htuple *StartPose )

Extract the 2D calibration marks from the image and calculate initial values for the exterior camera parameters.
find_marks_and_pose is used to determine the necessary input data for the subsequent camera calibration
(see camera_calibration): First, the 2D center points [RCoord,CCoord] of the calibration marks within
the region CalTabRegion of the input image Image are extracted and ordered. Secondly, a rough estimate for
the exterior camera parameters (StartPose) is computed, i.e., the 3D pose (= position and orientation) of the
calibration plate relative to the camera coordinate system (see create_pose for more information about 3D
poses).
In the input image Image an edge detector is applied (see edges_image, mode ’lanser2’) to the region
CalTabRegion, which can be found by applying the operator find_caltab. The filter parameter for this
edge detection can be tuned via Alpha. In the edge image closed contours are searched for: The number of closed
contours must correspond to the number of calibration marks as described in the calibration plate description file
CalTabDescrFile and the contours have to be elliptically shaped. Contours shorter than MinContLength are
discarded, just as contours enclosing regions with a diameter larger than MaxDiamMarks (e.g., the border of the
calibration plate).
For the detection of contours a threshold operator is applied on the resulting amplitudes of the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to StartThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by DeltaThresh down to a minimum value of
MinThresh.
Each of the found contours is refined with subpixel accuracy (see edges_sub_pix) and subsequently approx-
imated by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two tu-
ples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate description
file CalTabDescrFile, since this fixes the correspondences between extracted image marks and known model
marks (given by caltab_points)! If a triangular orientation mark is defined in a corner of the plate by the
plate description file (see gen_caltab), the mark will be detected and the point order is returned in row-major
order beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Otherwise, if no orientation mark is defined, the order of the center points is in row-major order
beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the exterior camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate StartPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator camera_calibration.


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input image.
. CalTabRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region of the calibration plate.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Initial values for the interior camera parameters.
Number of elements : (StartCamParam = 8) ∨ (StartCamParam = 11)
. StartThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Initial threshold value for contour detection.
Default Value : 128
List of values : StartThresh ∈ {80, 96, 112, 128, 144, 160}
. DeltaThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Loop value for successive reduction of StartThresh.
Default Value : 10
List of values : DeltaThresh ∈ {6, 8, 10, 12, 14, 16, 18, 20, 22}
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Minimum threshold for contour detection.
Default Value : 18
List of values : MinThresh ∈ {8, 10, 12, 14, 16, 18, 20, 22}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Filter parameter for contour detection, see edges_image.
Default Value : 0.9
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 2.0
Restriction : Alpha > 0.0
. MinContLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum length of the contours of the marks.
Default Value : 15.0
Suggested values : MinContLength ∈ {10.0, 15.0, 20.0, 30.0, 40.0, 100.0}
Restriction : MinContLength > 0.0
. MaxDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum expected diameter of the marks.
Default Value : 100.0
Suggested values : MaxDiamMarks ∈ {50.0, 100.0, 150.0, 200.0, 300.0}
Restriction : MaxDiamMarks > 0.0
. RCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tuple with row coordinates of the detected marks.
. CCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tuple with column coordinates of the detected marks.
. StartPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Estimation for the exterior camera parameters.
Number of elements : 7
Example (Syntax: HDevelop)

* read calibration image


read_image(Image, ’calib-01’)
* find calibration pattern
find_caltab(Image, Caltab, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start pose
find_marks_and_pose(Image, Caltab, ’caltab.descr’,
                    [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576],
                    128, 10, 18, 0.9, 15.0, 100.0, RCoord, CCoord, StartPose)

Result
find_marks_and_pose returns H_MSG_TRUE if all parameter values are correct and an estimation for the
exterior camera parameters has been determined successfully. If necessary, an exception handling is raised.
Parallelization Information
find_marks_and_pose is reentrant and processed without parallelization.
Possible Predecessors
find_caltab
Possible Successors
camera_calibration
See also
find_caltab, camera_calibration, disp_caltab, sim_caltab, read_cam_par,
read_pose, create_pose, pose_to_hom_mat3d, caltab_points, gen_caltab,
edges_sub_pix, edges_image
Module
Foundation

gen_caltab ( Hlong XNum, Hlong YNum, double MarkDist,
             double DiameterRatio, const char *CalTabDescrFile,
             const char *CalTabPSFile )

T_gen_caltab ( const Htuple XNum, const Htuple YNum,
               const Htuple MarkDist, const Htuple DiameterRatio,
               const Htuple CalTabDescrFile, const Htuple CalTabPSFile )

Generate a calibration plate description file and a corresponding PostScript file.


gen_caltab generates the description of a standard calibration plate for HALCON. This calibration plate con-
sists of XNum times YNum black circular marks on a white plane which are surrounded by a black frame. The
marks are arranged in a rectangular grid with YNum and XNum equidistant rows and columns. The distance between
these rows and columns is given by the parameter MarkDist in meters. The marks’ diameter can be set by the
parameter DiameterRatio and is defined by the equation Diameter = MarkDist ∗ DiameterRatio.
Using a distance between marks of 0.01 m and a diameter ratio
of 0.5, the width of the dark surrounding frame becomes 8 cm, and the radius of the marks is set to 2.5 mm.
The coordinate system of the calibration plate is located in the barycenter of all marks, its z-axis points into the
calibration plate, its x-axis to the right, and its y-axis downwards.
The file CalTabDescrFile contains the calibration plate description, e.g., the number of rows and columns
of the calibration plate, the geometry of the surrounding frame (see find_caltab), the triangular orientation
mark, an offset of the coordinate system to the plate’s surface in z-direction, and the x,y coordinates and the radius
of all calibration plate marks given in the calibration plate coordinate system. The definition of the orientation and
the offset, indicated by t and z, is optional and can be commented out. A file generated by gen_caltab looks
like the following (comments are marked by a ’#’ at the beginning of a line):

# Plate Description Version 2


# HALCON Version 7.1 -- Fri Jun 24 16:41:00 2005
# Description of the standard calibration plate
# used for the CCD camera calibration in HALCON
# (generated by gen\_caltab)
#
#
# 7 rows x 7 columns
# Width, height of the black frame [meter]: 0.1, 0.1
# Distance between mark centers [meter]: 0.0125


# Number of marks in y-dimension (rows)


r 7

# Number of marks in x-dimension (columns)


c 7

# offset of coordinate system in z-dimension [meter] (optional):


z 0

# Rectangular border (rim and black frame) of calibration plate


# rim of the calibration plate (min x, max y, max x, min y) [meter]:
o -0.05125 0.05125 0.05125 -0.05125
# outer border of the black frame (min x, max y, max x, min y) [meter]:
i -0.05 0.05 0.05 -0.05
# triangular corner mark given by two corner points (x,y, x,y) [meter]
# (optional):
t -0.05 -0.0375 -0.0375 -0.05

# width of the black frame [meter]:


w 0.003125

# calibration marks: x y radius [meter]

# calibration marks at y = -0.0375 m


-0.0375 -0.0375 0.003125
-0.025 -0.0375 0.003125
-0.0125 -0.0375 0.003125
-3.46945e-018 -0.0375 0.003125
0.0125 -0.0375 0.003125
0.025 -0.0375 0.003125
0.0375 -0.0375 0.003125

# calibration marks at y = -0.025 m


-0.0375 -0.025 0.003125
-0.025 -0.025 0.003125
-0.0125 -0.025 0.003125
-3.46945e-018 -0.025 0.003125
0.0125 -0.025 0.003125
0.025 -0.025 0.003125
0.0375 -0.025 0.003125

# calibration marks at y = -0.0125 m


-0.0375 -0.0125 0.003125
-0.025 -0.0125 0.003125
-0.0125 -0.0125 0.003125
-3.46945e-018 -0.0125 0.003125
0.0125 -0.0125 0.003125
0.025 -0.0125 0.003125
0.0375 -0.0125 0.003125

# calibration marks at y = -3.46945e-018 m


-0.0375 -3.46945e-018 0.003125
-0.025 -3.46945e-018 0.003125
-0.0125 -3.46945e-018 0.003125
-3.46945e-018 -3.46945e-018 0.003125
0.0125 -3.46945e-018 0.003125
0.025 -3.46945e-018 0.003125
0.0375 -3.46945e-018 0.003125


# calibration marks at y = 0.0125 m


-0.0375 0.0125 0.003125
-0.025 0.0125 0.003125
-0.0125 0.0125 0.003125
-3.46945e-018 0.0125 0.003125
0.0125 0.0125 0.003125
0.025 0.0125 0.003125
0.0375 0.0125 0.003125

# calibration marks at y = 0.025 m


-0.0375 0.025 0.003125
-0.025 0.025 0.003125
-0.0125 0.025 0.003125
-3.46945e-018 0.025 0.003125
0.0125 0.025 0.003125
0.025 0.025 0.003125
0.0375 0.025 0.003125

# calibration marks at y = 0.0375 m


-0.0375 0.0375 0.003125
-0.025 0.0375 0.003125
-0.0125 0.0375 0.003125
-3.46945e-018 0.0375 0.003125
0.0125 0.0375 0.003125
0.025 0.0375 0.003125
0.0375 0.0375 0.003125

The file CalTabPSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate descripton file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter

. XNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong


Number of marks in x direction.
Default Value : 7
Suggested values : XNum ∈ {5, 7, 9}
Recommended Increment : 1
Restriction : XNum > 1
. YNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of marks in y direction.
Default Value : 7
Suggested values : YNum ∈ {5, 7, 9}
Recommended Increment : 1
Restriction : YNum > 1
. MarkDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Distance of the marks in meters.
Default Value : 0.0125
Suggested values : MarkDist ∈ {0.1, 0.0125, 0.00375, 0.00125}
Restriction : 0.0 < MarkDist
. DiameterRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Ratio of the mark diameter to the mark distance.
Default Value : 0.5
Suggested values : DiameterRatio ∈ {0.5, 0.55, 0.6, 0.65}
Restriction : (0.0 < DiameterRatio) < 1.0


. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; const char *


File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CalTabPSFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name of the PostScript file.
Default Value : "caltab.ps"
Example (Syntax: HDevelop)

* Create calibration plate with width = 80 cm


gen_caltab( 7, 7, 0.1, 0.5, ’caltab.descr’, ’caltab.ps’)

Result
gen_caltab returns H_MSG_TRUE if all parameter values are correct and both files have been written success-
fully. If necessary, an exception handling is raised.
Parallelization Information
gen_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation

T_gen_image_to_world_plane_map ( Hobject *Map,
                                 const Htuple CamParam, const Htuple WorldPose, const Htuple WidthIn,
                                 const Htuple HeightIn, const Htuple WidthMapped,
                                 const Htuple HeightMapped, const Htuple Scale,
                                 const Htuple Interpolation )

Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
gen_image_to_world_plane_map generates a projection map Map, which describes the mapping between
the image plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used
to rectify an image with the operator map_image. The rectified image shows neither radial nor perspective dis-
tortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the plane
of measurements. The world coordinate system is chosen by passing its 3D pose relative to the camera coordinate
system in WorldPose. In CamParam you must pass the interior camera parameters (see write_cam_par for
the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The size of the images to be mapped can be specified by the parameters WidthIn and HeightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters WidthMapped, HeightMapped, and Scale.
WidthMapped and HeightMapped must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image, which will then directly yield metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
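As an illustrative sketch (assuming that the coordinates of the calibration object were given in meters, as is the case for the standard calibration plate), both of the following settings make one pixel in the rectified image correspond to 1 mm in the plane of measurements:

* one pixel in the mapped image corresponds to 1 mm ...
Scale := 'mm'
* ... which is equivalent to specifying the ratio 0.001 m / 1 m directly
Scale := 0.001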
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
The mapping function is stored in the output image Map. Map has the same size as the resulting images after
the mapping. If no interpolation is chosen, Map consists of one image containing one channel, in which for each
pixel of the resulting image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen, Map consists of one image containing
five channels. In the first channel, for each pixel in the resulting image, the linearized coordinate of the pixel in
the input image is stored that is in the upper left position relative to the transformed coordinates. The four other
channels contain the weights of the four neighboring pixels of the transformed coordinates which are used for the
bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to
the transformed coordinates. If several images have to be mapped using the same camera parameters,
gen_image_to_world_plane_map in combination with map_image is much more efficient than the op-
erator image_to_world_plane because the mapping function needs to be computed only once.
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2
Image containing the mapping data.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. WidthIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the images to be transformed.
. HeightIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the images to be transformed.
. WidthMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the resulting mapped images in pixels.
. HeightMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the resulting mapped images in pixels.
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or unit.
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Example (Syntax: HDevelop)

* perform camera calibration (with standard calibration plate)


camera_calibration(NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, ’all’,
FinalCamParam, NFinalPose, Errors)
* world coordinate system is defined by calibration plate in first image
FinalPose1 := NFinalPose[0:6]
* compensate thickness of plate
set_origin_pose(FinalPose1, 0, 0, 0.000073, WorldPose)
* goal: rectify images


* first determine parameters such that the entire image content is visible
* -> transform image boundary into world plane, determine smallest
* rectangle around it
get_image_pointer1(Image, Pointer, Type, Width, Height)
gen_rectangle1 (ImageRect, 0, 0, Height-1, Width-1)
gen_contour_region_xld (ImageRect, ImageBorder, ’border’)
contour_to_world_plane_xld(ImageBorder, ImageBorderWCS, FinalCamParam,
WorldPose, 1)
smallest_rectangle1_xld (ImageBorderWCS, MinY, MinX, MaxY, MaxX)
* -> move the pose to the upper left corner of the surrounding rectangle
set_origin_pose(WorldPose, MinX, MinY, 0, PoseForEntireImage)
* -> determine the scaling factor such that the center pixel has the same
* size in the original and in the rectified image
* method: transform corner points of the pixel into the world
* coordinate system, compute their distances, and use their
* mean as the scaling factor
image_points_to_world_plane(FinalCamParam, PoseForEntireImage,
[Height/2, Height/2, Height/2+1],
[Width/2, Width/2+1, Width/2],
1, WorldPixelX, WorldPixelY)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[1], WorldPixelX[1],
WorldLength1)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[2], WorldPixelX[2],
WorldLength2)
ScaleForSimilarPixelSize := (WorldLength1+WorldLength2)/2
* -> determine output image size such that entire input image fits into it
ExtentX := MaxX-MinX
ExtentY := MaxY-MinY
WidthRectifiedImage := ExtentX/ScaleForSimilarPixelSize
HeightRectifiedImage := ExtentY/ScaleForSimilarPixelSize
* create mapping with the determined parameters
gen_image_to_world_plane_map(Map, FinalCamParam, PoseForEntireImage,
Width, Height,
WidthRectifiedImage, HeightRectifiedImage,
ScaleForSimilarPixelSize, ’bilinear’)
* transform grabbed images with the created map
while(1)
grab_image_async(Image, FGHandle, -1)
map_image(Image, Map, RectifiedImage)
endwhile

Result
gen_image_to_world_plane_map returns H_MSG_TRUE if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
gen_image_to_world_plane_map is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Possible Successors
map_image
Alternatives
image_to_world_plane
See also
map_image, contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration


T_gen_radial_distortion_map ( Hobject *Map, const Htuple CamParIn,
                              const Htuple CamParOut, const Htuple Interpolation )

Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
gen_radial_distortion_map computes the mapping of images corresponding to a changing radial dis-
tortion in accordance to the interior camera parameters CamParIn and CamParOut which can be obtained,
e.g., using the operator camera_calibration. CamParIn and CamParOut contain the old and the new
camera parameters including the old and the new radial distortion, respectively (also see write_cam_par for
the sequence of the parameters and the underlying camera model). Each pixel of the potential output image is
transformed into the image plane using CamParOut and subsequently projected into a subpixel position of the
potential input image using CamParIn.
The mapping function is stored in the output image Map. The size of Map is given by the camera parameters
CamParOut and therefore defines the size of the resulting mapped images using map_image. The size of the
images to be mapped with map_image is determined by the camera parameters CamParIn. If no interpolation
is chosen (Interpolation = ’none’), Map consists of one image containing one channel, in which for each
pixel of the output image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen (Interpolation = ’bilinear’),
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image
the linearized coordinate of the pixel in the input image is stored that is in the upper left position relative to
the transformed coordinates. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
If CamParOut was computed via change_radial_distortion_cam_par, the mapping describes the
effect of a lens with a modified radial distortion. If κ is 0, the mapping corresponds to a rectification.
If several images have to be mapped using the same camera parameters, gen_radial_distortion_map
in combination with map_image is much more efficient than the operator
change_radial_distortion_image because the transformation must be computed only once.
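The following sketch (hypothetical file and image names; the mode and parameter values are only illustrative) shows how gen_radial_distortion_map could be combined with change_radial_distortion_cam_par and map_image to remove the radial distortion (κ = 0) from a sequence of images:

* read the calibrated interior camera parameters
read_cam_par('campar.dat', CamParIn)
* compute camera parameters with the radial distortion eliminated (kappa = 0)
change_radial_distortion_cam_par('fixed', CamParIn, 0, CamParOut)
* compute the mapping once ...
gen_radial_distortion_map(Map, CamParIn, CamParOut, 'bilinear')
* ... and apply it to every image of the sequence
for I := 1 to NumImages by 1
    read_image(Image, 'image-' + I)
    map_image(Image, Map, ImageRectified)
endfor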
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2
Image containing the mapping data.
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Old camera parameters.
Number of elements : 8
. CamParOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
New camera parameters.
Number of elements : 8
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Result
gen_radial_distortion_map returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
gen_radial_distortion_map is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, camera_calibration, hand_eye_calibration
Possible Successors
map_image
Alternatives
change_radial_distortion_image


See also
change_radial_distortion_contours_xld
Module
Calibration

T_get_circle_pose ( const Hobject Contour, const Htuple CamParam,
                    const Htuple Radius, const Htuple OutputType, Htuple *Pose1,
                    Htuple *Pose2 )

Determine the 3D pose of a circle from its perspective 2D projection.


Each ellipse in an image can be interpreted as the perspective projection of a circle into the image. In fact, for
a given radius of the circle, there exist two differently oriented circles in 3D that result in the same projection.
get_circle_pose determines the 3D positions and orientations of these two circles. First, each Contour
is approximated by an ellipse. Then, based on the interior camera parameters (CamParam) and the radius of the
circle in 3D (Radius), the 3D positions and orientations (Pose1,Pose2) are determined in camera coordinates.
Depending on the value of the parameter OutputType, the position and orientation is returned as a 3D pose
(OutputType = ’pose’) or in the form of the center of the 3D circle and the normal vector of the plane in which
the circle lies (OutputType = ’center_normal’). In the former case, the angle for the rotation around the z axis
is set to zero, because it cannot be determined. In the latter case, the first three elements of the output parameters
Pose1 and Pose2 contain the position of the center of the circle. The following three elements contain the normal
vector. The normal vectors are normalized and oriented such that they point away from the optical center which
is the origin of the camera coordinate system. If OutputType is set to ’center_normal’, the output parameters
Pose1 and Pose2 contain only six elements which describe the position and orientation of the circle instead of
the seven elements of the 3D pose that are returned if OutputType is set to ’pose’.
If more than one contour is passed in Contour, Radius must either contain a tuple that contains a value for
each contour or only one value which is then used for all contours. The resulting positions and orientations are
stored one after another in Pose1 and Pose2, i.e., Pose1 and Pose2 contain first the pose or the position and
the normal vector of the first contour, followed by the respective values for the second contour and so on.
Attention
The accuracy of the determined poses depends heavily on the accuracy of the extracted contours. The extraction of
curved edges using relatively large filter masks leads to a slightly shifted edge position. Edge extraction approaches
that are based on the first derivative of the image function (e.g., edges_sub_pix) yield edges that are shifted
towards the center of curvature, i.e., extracted ellipses will be slightly too small. Approaches that are based on the
second derivative of the image function (laplace_of_gauss followed by zero_crossing_sub_pix)
result in edges that are shifted away from the center of curvature, i.e., extracted ellipses will be slightly too large.
These effects increase with the curvature of the edge and with the size of the filter mask that is used for the
edge extraction. Therefore, to achieve high accuracy, the ellipses should appear large in the image and the filter
parameter should be chosen such that small filter masks are used (see info_edges).
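A minimal sketch (hypothetical image and file names; the circle radius of 0.02 m and the edge filter parameters are only assumptions) of determining the two possible poses of a circular mark could look like this:

* read the interior camera parameters and an image showing a circular mark
read_cam_par('campar.dat', CamParam)
read_image(Image, 'circle_image')
* extract the elliptic contour of the projected circle
edges_sub_pix(Image, Contours, 'canny', 1.0, 20, 40)
* determine the two possible 3D poses of a circle with radius 0.02 m
get_circle_pose(Contours, CamParam, 0.02, 'pose', Pose1, Pose2)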
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject
Contours to be examined.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : CamParam = 8
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Radius of the circle in object space.
Number of elements : (Radius = Contour) ∨ (Radius = 1)
Restriction : Radius > 0.0
. OutputType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of output parameters.
Default Value : "pose"
List of values : OutputType ∈ {"pose", "center_normal"}
. Pose1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
3D pose of the first circle.
Number of elements : (Pose1 = (7 · Contour)) ∨ (Pose1 = (6 · Contour))

. Pose2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
3D pose of the second circle.
Number of elements : (Pose2 = (7 · Contour)) ∨ (Pose2 = (6 · Contour))
Result
get_circle_pose returns H_MSG_TRUE if all parameter values are correct and the position of the circle has
been determined successfully. If necessary, an exception handling is raised.
Parallelization Information
get_circle_pose is reentrant and processed without parallelization.
Possible Predecessors
edges_sub_pix
Alternatives
find_marks_and_pose, camera_calibration
See also
get_rectangle_pose, fit_ellipse_contour_xld
Module
3D Metrology

T_get_line_of_sight ( const Htuple Row, const Htuple Column,
                      const Htuple CamParam, Htuple *PX, Htuple *PY, Htuple *PZ, Htuple *QX,
                      Htuple *QY, Htuple *QZ )

Compute the line of sight corresponding to a point in the image.


get_line_of_sight computes the line of sight corresponding to a pixel (Row, Column) in the image. The
line of sight is a (straight) line in the camera coordinate system, which is described by two points (PX,PY,PZ) and
(QX,QY,QZ) on the line. A pinhole or telecentric camera model with radial distortions described by the interior
camera parameters CamParam is used (see write_cam_par). If a pinhole camera is used, the second point
lies on the focal plane, i.e., for frame cameras, the output parameter QZ is equivalent to the focal length of the
camera, whereas for linescan cameras, QZ also depends on the motion of the camera with respect to the object.
The equation of the line of sight is given by

    X = PX + λ · (QX − PX)
    Y = PY + λ · (QY − PY)
    Z = PZ + λ · (QZ − PZ)

The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator affine_trans_point_3d to the two points.
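A short sketch (assuming that the pose of a world coordinate system in camera coordinates, e.g., obtained by a calibration, is available in WorldPose) of transforming the line of sight into that world coordinate system might look like this:

* line of sight for one pixel, given in camera coordinates
get_line_of_sight(Row, Column, CamParam, PX, PY, PZ, QX, QY, QZ)
* WorldPose transforms world coordinates into camera coordinates ...
pose_to_hom_mat3d(WorldPose, HomMat3D)
* ... so the inverse maps the two points into world coordinates
hom_mat3d_invert(HomMat3D, HomMat3DInvert)
affine_trans_point_3d(HomMat3DInvert, PX, PY, PZ, WPX, WPY, WPZ)
affine_trans_point_3d(HomMat3DInvert, QX, QY, QZ, WQX, WQY, WQZ)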
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate of the pixel.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate of the pixel.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. PX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinate of the first point on the line of sight in the camera coordinate system
. PY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinate of the first point on the line of sight in the camera coordinate system
. PZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Z coordinate of the first point on the line of sight in the camera coordinate system
. QX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinate of the second point on the line of sight in the camera coordinate system
. QY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinate of the second point on the line of sight in the camera coordinate system


. QZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *


Z coordinate of the second point on the line of sight in the camera coordinate system
Example (Syntax: HDevelop)

* get interior camera parameters


read_cam_par(’campar.dat’, CamParam)
* inverse projection
get_line_of_sight([50, 100], [100, 200], CamParam, PX, PY, PZ, QX, QY, QZ)

Result
get_line_of_sight returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
get_line_of_sight is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, camera_calibration
Possible Successors
affine_trans_point_3d
See also
camera_calibration, disp_caltab, read_cam_par, project_3d_point,
affine_trans_point_3d
Module
Calibration

T_get_rectangle_pose ( const Hobject Contour, const Htuple CamParam,
                       const Htuple Width, const Htuple Height, const Htuple WeightingMode,
                       const Htuple ClippingFactor, Htuple *Pose, Htuple *CovPose,
                       Htuple *Error )

Determine the 3D pose of a rectangle from its perspective 2D projection.


A rectangle in space is projected as a general quadrangle into the image. get_rectangle_pose determines
the Pose of the rectangle from this projection (Contour).
The algorithm works as follows: First, Contour is segmented into four line segments and their intersections are
considered as corners of the contour. The corners together with the interior camera parameters (CamParam) and
the rectangle size in meters (Width, Height) are used for an initial estimation of the rectangle pose. Then, the
final Pose is refined with a non-linear optimization by minimizing the geometrical distance of the contour points
from the reprojection of the rectangle in the image.
The operator supports only area-scan pinhole (projective) cameras. An error is returned if CamParam specifies a
line-scan or a telecentric camera (see also camera_calibration).
Width and Height specify the size of the rectangle in x and y dimensions, respectively, in its coordinate system.
The origin of this coordinate system is in the center of the rectangle. The z axis points away from the camera.
The arguments WeightingMode and ClippingFactor can be used to damp the impact of outliers on the
algorithm. If WeightingMode is set to ’tukey’ or ’huber’, the contour points are weighted based on the ap-
proach of Tukey or Huber respectively. In such a case a robust error statistics is used to estimate the stan-
dard deviation of the distances of the contour points from the reprojected rectangle excluding outliers. The pa-
rameter ClippingFactor (a scaling factor for the standard deviation) controls the amount of damping out-
liers: The smaller the value chosen for ClippingFactor the more outliers are detected. See a discussion
about the properties of the different weighting modes in fit_line_contour_xld. Note that, unlike by
fit_line_contour_xld, for the rectangle pose estimation the approach of Huber is recommended.

Output
The resulting Pose is of code-0 (see create_pose) and represents the pose of the center of the rectangle. You
can compute the pose of the corners of the rectangle as follows:

set_origin_pose (Pose, Width/2, -Height/2, 0, PoseCorner1)
set_origin_pose (Pose, Width/2, Height/2, 0, PoseCorner2)
set_origin_pose (Pose, -Width/2, Height/2, 0, PoseCorner3)
set_origin_pose (Pose, -Width/2, -Height/2, 0, PoseCorner4)

A rectangle is symmetric with respect to its x, y, and z axis and one and the same contour can represent a rectangle
in 4 different poses. The angles in Pose are normalized to be in the range [−90; 90] degrees and the rest of the 4
possible poses can be computed by combining flips around the corresponding axis:
* NOTE: the following code works ONLY for pose of type Code-0
* as it is returned by get_rectangle_pose

* flip around z-axis
PoseFlippedZ := Pose
PoseFlippedZ[5] := PoseFlippedZ[5]+180
* flip around y-axis
PoseFlippedY := Pose
PoseFlippedY[4] := PoseFlippedY[4]+180
PoseFlippedY[5] := -PoseFlippedY[5]
* flip around x-axis
PoseFlippedX := Pose
PoseFlippedX[3] := PoseFlippedX[3]+180
PoseFlippedX[4] := -PoseFlippedX[4]
PoseFlippedX[5] := -PoseFlippedX[5]

Note that if the rectangle is a square (Width == Height) the number of alternative poses is 8.
If more than one contour is given in Contour, a corresponding tuple of values for both Width and Height
has to be provided as well. Yet, if only one value is provided for each of these arguments, then this value is applied
for each processed contour. A pose is estimated for each processed contour and all poses are concatenated in Pose
(see the example below).

Accuracy of the pose


The accuracy of the estimated pose depends on the following three factors:

• ratio Width/Height
• length of the projected contour
• degree of perspective distortion of the contour

In order to achieve an accurate pose estimation, there are three corresponding criteria that should be considered:
The ratio Width/Height should fulfill

1/3 < Width/Height < 3

For a rectangular object deviating from this criterion, its longer side dominates the determination of its pose. This
causes instability in the estimation of the angle around the rectangle’s longer axis. In the extreme case when one
of the dimensions is 0, the rectangle is in fact a line segment, whose pose cannot be estimated.
Secondly, the lengths of each side of the contour should be at least 20 pixels. An error is returned if a side of the
contour is less than 5 pixels long.
Thirdly, the more the contour appears projectively distorted, the more stable the algorithm works. Therefore, the
pose of a rectangle tilted with respect to the image plane can be estimated accurately, whereas the pose of a
rectangle parallel to the image plane of the camera could be unstable. This is further discussed in the next paragraph.
Additionally, there is a rule of thumb that ensures projective distortion: the rectangle should be placed in space
such that its extent in the x and y dimensions of the camera coordinate system is not less than 1/10th of its
distance from the camera in z direction.
get_rectangle_pose provides two measures for the accuracy of the estimated Pose. Error is the average
pixel error between the contour points and the modeled rectangle reprojected on the image. If Error is exceeding
0.5, this is an indication that the algorithm did not converge properly, and the resulting Pose should not be used.


CovPose contains 36 entries representing the 6 × 6 covariance matrix of the first 6 entries of Pose. The above
mentioned case of instability of the angle around the rectangle’s longer axis can be detected by checking that the absolute
values of the variances and covariances of the rotations around the x and y axis (CovPose[21],CovPose[28],
and CovPose[22] == CovPose[27]) do not exceed 0.05. Further, unusually increased values of any of the
covariances and especially of the variances (the 6 values on the diagonal of CovPose with indices 0, 7, 14, 21, 28
and 35, respectively) indicate a poor quality of Pose.
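A small sketch (using the thresholds recommended above; the clipping factor of 1.0 is the typical value for ’huber’) of checking these quality indicators could look like this:

get_rectangle_pose(Contour, CamParam, Width, Height, 'huber', 1.0, Pose, CovPose, Error)
* reject the result if the optimization did not converge properly
if (Error > 0.5)
    * discard Pose
endif
* check the variances and covariances of the rotations around the x and y axis
if (abs(CovPose[21]) > 0.05 or abs(CovPose[28]) > 0.05 or abs(CovPose[22]) > 0.05)
    * the rotation around the rectangle's longer axis is unreliable
endif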
Parameter

. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld(-array) ; Hobject


Contour(s) to be examined.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : CamParam = 8
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Width of the rectangle in meters.
Restriction : Width > 0
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Height of the rectangle in meters.
Restriction : Height > 0
. WeightingMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Weighting mode for the optimization phase.
Default Value : "nonweighted"
List of values : WeightingMode ∈ {"nonweighted", "huber", "tukey"}
. ClippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Clipping factor for the elimination of outliers (typical: 1.0 for ’huber’ and 3.0 for ’tukey’).
Default Value : 2.0
Suggested values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose of the rectangle.
Number of elements : Pose = (7 · Contour)
. CovPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Covariances of the pose values.
Number of elements : CovPose = (36 · Contour)
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Root-mean-square value of the final residual error.
Number of elements : Error = Contour
Example (Syntax: HDevelop)

* Process an image with several rectangles of the same size appearing


* as light objects
*
*
RectWidth := 0.04
RectHeight := 0.025
read_cam_par (’campar.dat’, CamParam)
read_image (Image, ’tea_boxes’)
* find light objects in the image
mean_image (Image, ImageMean, 201, 201)
dyn_threshold (Image, ImageMean, Region, 5, ’light’)
* fill gaps in the objects
fill_up (Region, RegionFillUp)
* extract rectangular contours
* NOTE: for a real application, this step might require some additional
* pre- or postprocessing
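* (RegionSelected is assumed to be derived from RegionFillUp at this point,
* e.g., via connection and select_shape)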
reduce_domain (Image, RegionSelected, ImageReduced)
edges_sub_pix (ImageReduced, Edges, ’canny’, 0.7, 20, 30)

* get the pose of all contours found
get_rectangle_pose (Edges, CamParam, RectWidth, RectHeight, ’huber’, 2,
Poses, CovPose, Error)
NumPoses := |Poses|/7
for I := 0 to NumPoses-1 by 1
Pose := Poses[I*7:I*7+6]
* use the Pose here
* ...
endfor

Result
get_rectangle_pose returns H_MSG_TRUE if all parameter values are correct and the position of the
rectangle has been determined successfully. If the provided contour(s) cannot be segmented as a quadrangle,
get_rectangle_pose returns H_ERR_FIT_QUADRANGLE. If necessary, an exception handling is raised.
Parallelization Information
get_rectangle_pose is reentrant, local, and processed without parallelization.
Possible Predecessors
edges_sub_pix
See also
get_circle_pose, set_origin_pose, camera_calibration
References
G.Schweighofer and A.Pinz: “Robust Pose Estimation from a Planar Target”; Transactions on Pattern Analysis
and Machine Intelligence (PAMI), 28(12):2024-2030, 2006
Module
3D Metrology

T_hand_eye_calibration ( const Htuple NX, const Htuple NY,
const Htuple NZ, const Htuple NRow, const Htuple NCol,
const Htuple MPointsOfImage, const Htuple MRelPoses,
const Htuple BaseStartPose, const Htuple CamStartPose,
const Htuple CamParam, const Htuple ToEstimate,
const Htuple StopCriterion, const Htuple MaxIterations,
const Htuple MinError, Htuple *BaseFinalPose, Htuple *CamFinalPose,
Htuple *NumErrors )

Perform a hand-eye calibration.


The operator hand_eye_calibration determines the 3D pose of a robot (“hand”) relative to a camera
(“eye”). With this information, the results of image processing can be transformed into the coordinate system
of the robot which can then, e.g., grasp an inspected part. There are two possible configurations of robot-camera
(hand-eye) systems: The camera can be mounted on the robot or be stationary and observe the robot. Note that the
term robot is used here as a placeholder for any mechanism that moves objects. Thus, you can use hand_eye_calibration
to calibrate many different systems, from pan-tilt heads to multi-axis manipulators.
A hand-eye calibration is performed similarly to the calibration of the external camera parameters (see
camera_calibration): You acquire a set of images of a calibration object, determine correspondences be-
tween known model points and their projection in the images and pass them to hand_eye_calibration
via the parameters NX, NY, NZ, NRow, NCol, and MPointsOfImage. If you use the standard cali-
bration plate, the correspondences can be determined very easily with the operators find_caltab and
find_marks_and_pose. Furthermore, you must specify the internal camera parameters in CamParam.
In contrast to the camera calibration, the calibration object is not moved manually. This task is delegated to
the robot which either moves the camera (mounted camera) or the calibration object (stationary camera). The
robot’s movements are assumed to be known and therefore are also used as an input for the calibration (parameter
MRelPoses).


The two hand-eye configurations are discussed in more detail below, followed by general information about the
process of hand-eye calibration.

Moving camera (mounted on a robot)


In this configuration, the calibration object remains stationary and the camera is moved to different positions by the
robot. The main idea behind the hand-eye calibration is that the information extracted from a calibration image,
i.e., the pose of the calibration object relative to the camera (i.e., the external camera parameters), can be seen as a
chain of poses or homogeneous transformation matrices, from the calibration object via the base of the robot to its
tool (end-effector) and finally to the camera:

Moving camera:   cam_H_cal = cam_H_tool · tool_H_base · base_H_cal
(cam_H_tool corresponds to CamStartPose and CamFinalPose, tool_H_base to MRelPoses,
and base_H_cal to BaseStartPose and BaseFinalPose.)

From the set of calibration images, the operator hand_eye_calibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam_H_tool) and the pose of the
calibration object in the robot base coordinate system (base_H_cal). In the input parameters CamStartPose and
BaseStartPose, you must specify suitable starting values for these transformations which are constant over
all calibration images. hand_eye_calibration then returns the calibrated values in CamFinalPose and
BaseFinalPose.
In contrast, the transformation in the middle of the chain, tool_H_base, is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of
the base coordinate system in robot tool coordinates). You must specify the (inverse) robot poses in the calibration
images in the parameter MRelPoses.
Internally, hand_eye_calibration uses a Newton-type algorithm to minimize an error function based on
normal equations. Analogously to the calibration of the camera itself (see camera_calibration), the hand-
eye calibration becomes more robust if you use many calibration images that were acquired with different robot
poses.

Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (i.e.,
the external camera parameters), are equal to a chain of poses or homogeneous transformation matrices, this time
from the calibration object via the robot’s tool to its base and finally to the camera:

Stationary camera:   cam_H_cal = cam_H_base · base_H_tool · tool_H_cal
(cam_H_base corresponds to CamStartPose and CamFinalPose, base_H_tool to MRelPoses,
and tool_H_cal to BaseStartPose and BaseFinalPose.)

Analogously to the configuration with a moving camera, the operator hand_eye_calibration determines
the two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (cam_H_base) and the pose of the calibration object relative to the robot tool (tool_H_cal). In the input parameters
CamStartPose and BaseStartPose, you must specify suitable starting values for these transformations.
hand_eye_calibration then returns the calibrated values in CamFinalPose and BaseFinalPose.
Please note that the names of the parameters BaseStartPose and BaseFinalPose are misleading for this
configuration!
The transformation in the middle of the chain, base_H_tool, describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter MRelPoses.


Additional information about the calibration process


The following sections discuss individual questions arising from the use of hand_eye_calibration. They
are intended to be a guideline for using the operator in an application, as well as to help understanding the operator.

How do I get 3D model points and their projections? 3D model points given in the world coordinate system
(NX, NY, NZ) and their associated projections in the image (NRow, NCol) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need images of the 3D
model points that were obtained for sufficiently many different poses of the manipulator.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient to
use the standard calibration plate, e.g., the one that can be generated with gen_caltab. In this case, you
can use the operators find_caltab and find_marks_and_pose to extract the position of the cali-
bration plate and of the calibration marks and the operator caltab_points to access the 3D coordinates
of the calibration marks (see also the description of camera_calibration).
The parameter MPointsOfImage specifies the number of 3D model points used for each pose of the
manipulator, i.e., for each image. With this, the 3D model points which are stored in a linearized fashion
in NX, NY, NZ, and their corresponding projections (NRow, NCol) can be associated with the corresponding
pose of the manipulator (MRelPoses). Note that in contrast to the operator camera_calibration the
3D coordinates of the model points must be specified for each calibration image, not only once.
How do I acquire a suitable set of images? If a standard calibration plate is used, the following procedure
should be used:
• At least 10 to 20 images from different positions should be taken in which the position of the camera
with respect to the calibration plate is sufficiently different. The position of the calibration plate (moving
camera: relative to the robot’s tool; stationary camera: relative to the robot’s base) must not be changed
between images.
• In each image, the calibration plate must be completely visible (including its border).
• No reflections or other disturbances should be visible on the calibration plate.
• The set of images must show the calibration plate from very different positions of the manipulator.
The calibration plate can and should be visible in different parts of the images. Furthermore, it should
be slightly to moderately rotated around its x- or y-axis, in order to clearly exhibit distortions of the
calibration marks. In other words, the corresponding exterior camera parameters (pose of the calibration
plate in camera coordinates) should take on many different values.
• In each image, the calibration plate should fill at least one quarter of the entire image, in order to ensure
the robust detection of the calibration marks.
• The interior camera parameters of the camera to be used must have been determined earlier and must be
passed in CamParam (see camera_calibration). Note that changes of the image size, the focal
length, the aperture, or the focus effect a change of the interior camera parameters.
• The camera must not be modified between the acquisition of the individual images, i.e., focal length,
aperture, and focus must not be changed, because all calibration images use the same interior camera
parameters. Please make sure that the focus is sufficient for the expected changes of the distance of the
camera from the calibration plate. Therefore, bright lighting conditions for the calibration plate are
important, because then you can use smaller apertures which result in larger depth of focus.
How do I obtain suitable starting values? Depending on the used hand-eye configuration, you need starting val-
ues for the following poses:
Moving camera
BaseStartPose = pose of the calibration object in robot base coordinates
CamStartPose = pose of the robot tool in camera coordinates
Stationary camera
BaseStartPose = pose of the calibration object in robot tool coordinates
CamStartPose = pose of the robot base in camera coordinates
The camera’s coordinate system is oriented such that its optical axis corresponds to the z-axis, the x-axis
points to the right, and the y-axis downwards. The coordinate system of the standard calibration plate is
located in the middle of the surface of the calibration plate, its z-axis points into the calibration plate, its
x-axis to the right, and its y-axis downwards.
For more information about creating a 3D pose please refer to the description of create_pose which also
contains a short example.
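As an illustration, a rough starting value could be created like this (a minimal sketch; the numeric values are purely hypothetical and must be replaced by estimates for the actual setup):

* hypothetical guess for a moving camera: the robot tool is assumed to lie
* approx. 10 cm below and 5 cm behind the camera origin, without rotation
create_pose (0.0, 0.1, -0.05, 0, 0, 0, ’Rp+T’, ’gba’, ’point’, CamStartPose)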


In fact, you need a starting value only for one of the two poses (BaseStartPose or CamStartPose).
The other can be computed from one of the calibration images. This means that you can pick the pose that is
easier to determine and let HALCON compute the other one for you.
The main idea is to exploit the fact that the two poses for which we need starting values are connected via the
already described chain of transformations, here shown for a configuration with a moving camera:

Moving camera:   cam_H_cal = cam_H_tool · tool_H_base · base_H_cal
(cam_H_tool corresponds to CamStartPose, tool_H_base to MRelPoses, and base_H_cal to BaseStartPose.)

In this configuration, it is typically easy to determine a starting value for cam_H_tool (CamStartPose). Thus,
we solve the equation for base_H_cal (BaseStartPose):

Moving camera:   base_H_cal = (cam_H_tool · tool_H_base)^(-1) · cam_H_cal
                            = base_H_tool · tool_H_cam · cam_H_cal

Thus, to compute BaseStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for CamStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
For a configuration with a stationary camera, the chain of transformations is:

Stationary camera:   cam_H_cal = cam_H_base · base_H_tool · tool_H_cal
(cam_H_base corresponds to CamStartPose, base_H_tool to MRelPoses, and tool_H_cal to BaseStartPose.)

In this configuration, it is typically easier to determine a starting value for tool_H_cal (BaseStartPose).
Thus, we solve the equation for cam_H_base (CamStartPose):

Stationary camera:   cam_H_base = cam_H_cal · (base_H_tool · tool_H_cal)^(-1)
                                = cam_H_cal · cal_H_tool · tool_H_base

Thus, to compute CamStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for BaseStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
How do I obtain the poses of the robot? In the parameter MRelPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend creating the robot
poses in a separate program and saving them in files using write_pose. In the calibration program you can then
read and accumulate them in a tuple as shown in the example program below. In addition, we recommend saving
the pose of the robot tool in robot base coordinates independently of the hand-eye configuration. When
using a moving camera, you then invert the read poses before accumulating them. This is also shown in the
example program.
Via the cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’
or ’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as
input for create_pose.


If the cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert
the matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the
ZYZ representation described above:
* Phi1, Phi2, Phi3 and Tx, Ty, Tz denote the rotation angles (in radians) and
* the translation obtained from the robot controller
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, Phi3, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, Phi2, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, Phi1, ’z’, 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate (HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose (base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the robot poses MRelPoses are specified with high
accuracy!
How can I exclude individual pose parameters from the estimation? hand_eye_calibration estimates
a maximum of 12 pose parameters, i.e., 6 parameters each for the two computed poses CamFinalPose
and BaseFinalPose. However, it is possible to exclude some of these pose parameters from the esti-
mation. This means that the starting values of the poses remain unchanged and are assumed constant for
the estimation of all other pose parameters. The parameter ToEstimate is used to determine which pose
parameters should be estimated. In ToEstimate, a list of keywords for the parameters to be estimated is
passed. The possible values are:
BaseFinalPose:
’baseTx’ = translation along the x-axis
’baseTy’ = translation along the y-axis
’baseTz’ = translation along the z-axis
’baseRa’ = rotation around the x-axis
’baseRb’ = rotation around the y-axis
’baseRg’ = rotation around the z-axis
’base_pose’ = all 6 BaseFinalPose parameters
CamFinalPose:
’camTx’ = translation along the x-axis
’camTy’ = translation along the y-axis
’camTz’ = translation along the z-axis
’camRa’ = rotation around the x-axis
’camRb’ = rotation around the y-axis
’camRg’ = rotation around the z-axis
’cam_pose’ = all 6 CamFinalPose parameters
In order to estimate all 12 pose parameters, you can pass the keyword ’all’ (or of course a tuple containing
all 12 keywords listed above).
It is useful to exclude individual parameters from estimation if those pose parameters have already been mea-
sured exactly. To do so, define a string tuple of the parameters that should be estimated, or prefix the strings
of excluded parameters with a ’~’ sign. For example, ToEstimate = [’all’,’~camTx’] estimates all pose
values except the x translation of the camera, whereas ToEstimate = [’base_pose’,’~baseRb’] estimates
the pose of the base apart from the rotation around the y-axis. The latter is equivalent to ToEstimate =
[’baseTx’,’baseTy’,’baseTz’,’baseRa’,’baseRg’].
Which terminating criteria can be used for the error minimization? The error minimization terminates either
after a fixed number of iterations or if the error falls below a given minimum error. The parameter
StopCriterion is used to choose between these two alternatives. If ’CountIterations’ is passed, the
algorithm terminates after MaxIterations iterations.
If StopCriterion is passed as ’MinError’, the algorithm runs until the error falls below the error threshold
given in MinError. If, however, the number of iterations reaches the number given in MaxIterations,
the algorithm terminates with an error message.
What is the order of the individual parameters? The length of the tuple MPointsOfImage corresponds to
the number of different positions of the manipulator and thus to the number of calibration images. The
parameter MPointsOfImage determines the number of model points used in the individual positions. If
the standard calibration plate is used, this means 49 points per position (image). If for example 15 images
were acquired, MPointsOfImage is a tuple of length 15, where all elements of the tuple have the value 49.


The number of calibration images, which is determined by the length of MPointsOfImage, must also be
taken into account for the tuples for the 3D model points and for the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 model points each, the tuples NX, NY, NZ, NRow, and NCol must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 model points in the first image. The order of the 3D model points and
the extracted 2D model points must be the same in each image.
The length of the tuple MRelPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple MRelPoses is 15 · 7 = 105 (15 times 7 pose
parameters). The first seven parameters thus determine the pose of the manipulator in the first image, and so
on.
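As a small sketch of this ordering (assuming 15 images of the standard calibration plate with 49 marks; gen_tuple_const is only used for illustration):

* number of model points per calibration image
MPointsOfImage := gen_tuple_const(15, 49)
* NX, NY, NZ, NRow, and NCol must then contain 15 * 49 = 735 values each;
* MRelPoses must contain 15 * 7 = 105 values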
What do the output parameters mean? If StopCriterion was set to ’CountIterations’, the output parame-
ters BaseFinalPose and CamFinalPose are returned even if the algorithm didn’t converge. If, how-
ever, StopCriterion was set to ’MinError’, the error must fall below ’MinError’ in order for output
parameters to be returned.
The representation type of BaseFinalPose and CamFinalPose is the same as in the corresponding
starting values. It can be changed with the operator convert_pose_type. The description of the dif-
ferent representation types and of their conversion can be found with the documentation of the operator
create_pose.
The parameter NumErrors contains a list of (numerical) errors from the individual iterations of the algo-
rithm. Based on the evolution of the errors, it can be decided whether the algorithm has converged for the
given starting values. The error values are returned as 3D deviations in meters. Thus, the last entry of the
error list corresponds to an estimate of the accuracy of the returned pose parameters.
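For example, the convergence could be checked roughly as follows (a minimal sketch; the threshold of 0.001 m is an assumption and depends on the application):

* inspect the error of the last iteration (3D deviation in meters)
FinalError := NumErrors[|NumErrors|-1]
if (FinalError > 0.001)
    * the calibration probably did not converge; check the starting values,
    * the robot poses, and the extracted calibration marks
    stop ()
endif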

Attention
The quality of the calibration depends on the accuracy of the input parameters (position of the calibration marks,
robot poses MRelPoses, and the starting positions BaseStartPose, CamStartPose). Based on the returned
error measures NumErrors, it can be decided whether the algorithm has converged. Furthermore, the accuracy
of the returned pose can be estimated. The error measures are 3D differences in meters.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the x coordinates of the calibration points (in the order of the images).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the y coordinates of the calibration points (in the order of the images).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the z coordinates of the calibration points (in the order of the images).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Linear list containing all row coordinates of the calibration points (in the order of the images).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Linear list containing all the column coordinates of the calibration points (in the order of the images).
. MPointsOfImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of the calibration points for each image.
. MRelPoses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Measured 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates;
stationary camera: robot tool in robot base coordinates).
. BaseStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Starting value for the 3D pose of the calibration object in robot base coordinates (moving camera) or in robot
tool coordinates (stationary camera), respectively.
. CamStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Starting value for the 3D pose of the robot tool (moving camera) or robot base (stationary camera),
respectively, in camera coordinates.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
. ToEstimate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
Parameters to be estimated (max. 12 degrees of freedom).
Default Value : "all"
List of values : ToEstimate ∈ {"all", "base_pose", "cam_pose", "baseTx", "baseTy", "baseTz", "baseRa",
"baseRb", "baseRg", "camTx", "camTy", "camTz", "camRa", "camRb", "camRg"}


. StopCriterion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of stopping criterion.
Default Value : "CountIterations"
List of values : StopCriterion ∈ {"CountIterations", "MinError"}
. MaxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximum number of iterations to be executed.
Default Value : 15
Suggested values : MaxIterations ∈ {10, 15, 20, 25, 30}
. MinError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum error used as the stopping criterion.
Default Value : 0.0005
Suggested values : MinError ∈ {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1}
. BaseFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Computed 3D pose for the 3D pose of the calibration object in robot base coordinates (moving camera) or in
robot tool coordinates (stationary camera), respectively.
. CamFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Computed 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in
camera coordinates.
. NumErrors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Error measures for each iteration.
Example (Syntax: HDevelop)

read_cam_par(’campar.dat’, CamParam)
CalDescr := ’caltab.descr’
caltab_points(CalDescr, X, Y, Z)
* NumImages and IsMovingCameraConfig are assumed to be set beforehand
* initialize the accumulation tuples
RCoord := []
CCoord := []
XCoord := []
YCoord := []
ZCoord := []
NumMarker := []
MRelPoses := []
* process all calibration images
for i := 0 to NumImages-1 by 1
read_image(Image, ’calib_’+i$’02d’)
* find marks on the calibration plate in every image
find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
find_marks_and_pose(Image, CalPlate, CalDescr, CamParam, 128, 10,
RCoordTmp, CCoordTmp, StartPose)
* accumulate 2D and 3D coordinates of the marks
RCoord := [RCoord, RCoordTmp]
CCoord := [CCoord, CCoordTmp]
XCoord := [XCoord, X]
YCoord := [YCoord, Y]
ZCoord := [ZCoord, Z]
NumMarker := [NumMarker, |RCoordTmp|]
* read pose of the robot tool in robot base coordinates
read_pose(’robpose_’+i$’02d’+’.dat’, RobPose)
* moving camera? invert pose
if (IsMovingCameraConfig=’true’)
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_invert(base_H_tool, tool_H_base)
hom_mat3d_to_pose(tool_H_base, RobPose)
endif
* accumulate robot poses
MRelPoses := [MRelPoses, RobPose]
* store the pose of the calibration plate in the first image and the
* corresponding pose of the robot for later use
if (i=0)
cam_P_cal := StartPose
RelPose0 := RobPose
endif
endfor
* obtain starting values: read one, compute the other
if (IsMovingCameraConfig=’true’)

* mov. camera: read pose of robot tool in camera coordinates
* compute pose of calibration plate in robot base coordinates
read_pose(’cam_P_tool.dat’, CamStartPose)
* BaseStartPose = inverse(CamStartPose * RelPose0) * cam_P_cal
pose_to_hom_mat3d(CamStartPose, cam_H_tool)
pose_to_hom_mat3d(RelPose0, tool_H_base)
pose_to_hom_mat3d(cam_P_cal, cam_H_cal)
hom_mat3d_compose(cam_H_tool, tool_H_base, cam_H_base)
hom_mat3d_invert(cam_H_base, base_H_cam)
hom_mat3d_compose(base_H_cam, cam_H_cal, base_H_cal)
hom_mat3d_to_pose(base_H_cal, BaseStartPose)
else
* stat. camera: read pose of calibration plate in robot tool coordinates
* compute pose of robot base in camera coordinates
read_pose(’tool_P_cal.dat’, BaseStartPose)
* CamStartPose = cam_P_cal * inverse(RelPose0 * BaseStartPose)
pose_to_hom_mat3d(BaseStartPose, tool_H_cal)
pose_to_hom_mat3d(RelPose0, base_H_tool)
pose_to_hom_mat3d(cam_P_cal, cam_H_cal)
hom_mat3d_compose(base_H_tool, tool_H_cal, base_H_cal)
hom_mat3d_invert(base_H_cal, cal_H_base)
hom_mat3d_compose(cam_H_cal, cal_H_base, cam_H_base)
hom_mat3d_to_pose(cam_H_base, CamStartPose)
endif
*
* perform hand-eye calibration
*
hand_eye_calibration(XCoord, YCoord, ZCoord, RCoord, CCoord, NumMarker,
MRelPoses, BaseStartPose, CamStartPose, CamParam,
’all’, ’CountIterations’, 20, 0.000670,
BaseFinalPose, CamFinalPose, NumErrors)
*
* measure some point P in camera coordinates (cam_px, cam_py, cam_pz)
*
* transform point into robot base coordinates: base_p = base_H_cam * cam_p
if (IsMovingCameraConfig=’true’)
* mov. camera: base_H_cam = base_H_tool * tool_H_cam
* base_P_cam = RobPose * inverse(CamFinalPose)
pose_to_hom_mat3d(CamFinalPose, cam_H_tool)
hom_mat3d_invert(cam_H_tool, tool_H_cam)
* obtain current robot pose RobPose from robot
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_compose(base_H_tool, tool_H_cam, base_H_cam)
else
* stat. camera: base_P_cam = inverse(CamFinalPose)
pose_to_hom_mat3d(CamFinalPose, cam_H_base)
hom_mat3d_invert(cam_H_base, base_H_cam)
endif
affine_trans_point_3d(base_H_cam, cam_px, cam_py, cam_pz,
base_px, base_py, base_pz)

Result
hand_eye_calibration returns H_MSG_TRUE if all parameter values are correct and the method converges
with an error less than the specified minimum error (if StopCriterion = ’MinError’). If necessary, an excep-
tion handling is raised.
Parallelization Information
hand_eye_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose


Possible Successors
write_pose, convert_pose_type, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration

T_image_points_to_world_plane ( const Htuple CamParam,
const Htuple WorldPose, const Htuple Rows, const Htuple Cols,
const Htuple Scale, Htuple *X, Htuple *Y )

Transform image points into the plane z=0 of a world coordinate system.
The operator image_points_to_world_plane transforms image points which are given in Rows and
Cols into the plane z=0 in a world coordinate system and returns their 3D coordinates in X and Y. The world
coordinate system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose.
In CamParam you must pass the interior camera parameters (see write_cam_par for the sequence of the
parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed
into the world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates X and Y are obtained.
Parameter

. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double / Hlong
Row coordinates of the points to be transformed.
Default Value : 100.0
. Cols (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double / Hlong
Column coordinates of the points to be transformed.
Default Value : 100.0
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or dimension
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . double *
X coordinates of the points in the world coordinate system.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . double *
Y coordinates of the points in the world coordinate system.
Example (Syntax: HDevelop)

* perform camera calibration (with standard calibration plate)
camera_calibration(NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, ’all’,
FinalCamParam, NFinalPose, Errors)
* world coordinate system is defined by calibration plate in first image
FinalPose1 := NFinalPose[0:6]
* compensate thickness of plate
set_origin_pose(FinalPose1, 0, 0, 0.0006, WorldPose)
* transform image points into world coordinate system (unit mm)
image_points_to_world_plane(FinalCamParam, WorldPose, PointRows, PointCols,
’mm’, PointXCoord, PointYCoord)

Result
image_points_to_world_plane returns H_MSG_TRUE if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
image_points_to_world_plane is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
contour_to_world_plane_xld
Module
Calibration

T_image_to_world_plane ( const Hobject Image, Hobject *ImageWorld,
const Htuple CamParam, const Htuple WorldPose, const Htuple Width,
const Htuple Height, const Htuple Scale, const Htuple Interpolation )

Rectify an image by transforming it into the plane z=0 of a world coordinate system.
image_to_world_plane rectifies an image Image by transforming it into the plane z=0 (plane of mea-
surements) in a world coordinate system. The resulting rectified image ImageWorld shows neither radial nor
perspective distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly
onto the plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the
camera coordinate system in WorldPose. In CamParam you must pass the interior camera parameters (see
write_cam_par for the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The pixel position of the upper left corner of the output image ImageWorld is determined by the origin of the
world coordinate system. The size of the output image ImageWorld can be chosen by the parameters Width,
Height, and Scale. Width and Height must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.


If several images have to be rectified using the same parameters, gen_image_to_world_plane_map in
combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.
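A minimal sketch of this map-based alternative (WidthImage, HeightImage, WidthMapped, HeightMapped, and Scale are assumed to hold the same values that would otherwise be passed to image_to_world_plane):

* compute the mapping once ...
gen_image_to_world_plane_map (Map, CamParam, WorldPose, WidthImage, HeightImage,
                              WidthMapped, HeightMapped, Scale, ’bilinear’)
* ... and apply it to each image of the sequence
map_image (Image, Map, ImageWorld)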
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageWorld (output_object) . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Transformed image.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the resulting image in pixels.
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the resulting image in pixels.
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or unit
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Example (Syntax: HDevelop)

* perform camera calibration (with standard calibration plate)


camera_calibration(NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, ’all’,
FinalCamParam, NFinalPose, Errors)
* world coordinate system is defined by calibration plate in first image
FinalPose1 := NFinalPose[0:6]
* compensate thickness of plate
set_origin_pose(FinalPose1, 0, 0, 0.0006, WorldPose)
* goal: rectify image
* first determine parameters such that the entire image content is visible
* and that objects have a similar size before and after the rectification
* -> transform image boundary into world plane, determine smallest
* rectangle around it
get_image_pointer1(Image, Pointer, Type, Width, Height)
gen_rectangle1 (ImageRect, 0, 0, Height-1, Width-1)
gen_contour_region_xld (ImageRect, ImageBorder, ’border’)
contour_to_world_plane_xld(ImageBorder, ImageBorderWCS, FinalCamParam,
WorldPose, 1)
smallest_rectangle1_xld (ImageBorderWCS, MinY, MinX, MaxY, MaxX)
* -> move the pose to the upper left corner of the surrounding rectangle
set_origin_pose(WorldPose, MinX, MinY, 0, PoseForEntireImage)
* -> determine the scaling factor such that the center pixel has the same
* size in the original and in the rectified image
* method: transform corner points of the pixel into the world
* coordinate system, compute their distances, and use their
* mean as the scaling factor
image_points_to_world_plane(FinalCamParam, PoseForEntireImage,
[Height/2, Height/2, Height/2+1],
[Width/2, Width/2+1, Width/2],


1, WorldPixelX, WorldPixelY)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[1], WorldPixelX[1],
WorldLength1)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[2], WorldPixelX[2],
WorldLength2)
ScaleForSimilarPixelSize := (WorldLength1+WorldLength2)/2
* -> determine output image size such that entire input image fits into it
ExtentX := MaxX-MinX
ExtentY := MaxY-MinY
WidthRectifiedImage := ExtentX/ScaleForSimilarPixelSize
HeightRectifiedImage := ExtentY/ScaleForSimilarPixelSize
* transform the image with the determined parameters
image_to_world_plane(Image, RectifiedImage, FinalCamParam,
PoseForEntireImage, WidthRectifiedImage,
HeightRectifiedImage, ScaleForSimilarPixelSize,
’bilinear’)

Result
image_to_world_plane returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
image_to_world_plane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Alternatives
gen_image_to_world_plane_map, map_image
See also
contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration

T_project_3d_point ( const Htuple X, const Htuple Y, const Htuple Z,
const Htuple CamParam, Htuple *Row, Htuple *Column )

Project 3D points into (sub-)pixel image coordinates.


project_3d_point projects one or more 3D points (with coordinates X, Y, and Z) into the image plane (in
pixels) and returns the result in Row and Column. The coordinates X, Y, and Z are given in the camera coordinate
system, i.e., they describe the position of the points relative to the camera.
The interior camera parameters CamParam describe the projection characteristics of the camera (see
write_cam_par).
Parameter
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
X coordinates of the 3D points to be projected in the camera coordinate system.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Y coordinates of the 3D points to be projected in the camera coordinate system.
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Z coordinates of the 3D points to be projected in the camera coordinate system.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Row coordinates of the projected points (in pixels).
Default Value : "ProjectedRow"


. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Column coordinates of the projected points (in pixels).
Default Value : "ProjectedCol"
Example (Syntax: HDevelop)

* read pose of the world coordinate system in camera coordinates
read_pose(’worldpose.dat’, WorldPose)
* convert pose into transformation matrix
pose_to_hom_mat3d(WorldPose, HomMat3D)
* transform 3D points from world into the camera coordinate system
affine_trans_point_3d([3.0, 3.2], [4.5, 4.5], [5.8, 6.2], HomMat3D, X, Y, Z)
* read interior camera parameters
read_cam_par(’campar.dat’, CamParam)
* project 3D points into image
project_3d_point(X, Y, Z, CamParam, Row, Column)

Result
project_3d_point returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
project_3d_point is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, affine_trans_point_3d
Possible Successors
gen_region_points, gen_region_polygon, disp_polygon
See also
camera_calibration, disp_caltab, read_cam_par, get_line_of_sight,
affine_trans_point_3d
Module
Calibration

T_radiometric_self_calibration ( const Hobject Images,
const Htuple ExposureRatios, const Htuple Features,
const Htuple FunctionType, const Htuple Smoothness,
const Htuple PolynomialDegree, Htuple *InverseResponse )

Perform a radiometric self-calibration of a camera.


radiometric_self_calibration performs a radiometric self-calibration of a camera. For this, at least two
images that show the same image contents (scene) must be passed in Images. All images passed in Images must
be acquired with different exposures. Typically, the different exposures are obtained by changing the shutter times
at the camera. It is not recommended to change the exposure by changing the aperture of the lens since in this case
the exposures cannot be determined accurately enough. The ratio of the exposures of consecutive images is passed
in ExposureRatios. For example, a value of 0.5 specifies that the second image of an image pair has been
acquired with half the exposure of the first image of the pair. The exposure ratio can easily be determined from the
shutter times since the exposure is proportional to the shutter time. The exposure ratio must be greater than 0 and
smaller than 1. This means that the images must be sorted according to descending exposure. ExposureRatios
must contain one element less than the number of images passed in Images. If all exposure ratios are identical,
as a simplification a single value can be passed in ExposureRatios.
As described above, the images passed in Images must show identical image contents. Hence, it is typically nec-
essary that neither the camera nor the objects in the scene move. If the camera has rotated around the optical center,
the images should be aligned to a reference image (one of the images) using proj_match_points_ransac
and projective_trans_image. If the features used for the radiometric calibration are determined from the
2D gray value histogram of consecutive image pairs (Features = ’2d_histogram’), it is essential that the images
are aligned and that the objects in the scene do not move. For Features = ’1d_histograms’, the features used
for the radiometric calibration are determined from the 1D gray value histograms of the image pairs. In this mode,


the calibration can theoretically be performed if the 1D histograms of the images are not changed by the movement
of the objects in the images. This can, for example, be the case if an object moves in front of a uniformly textured
background. However, it is preferable to use Features = ’2d_histogram’ because this mode is more accurate.
The mode Features = ’1d_histograms’ should only be used if it is impossible to construct the camera set-up
such that neither the camera nor the objects in the scene move.
Furthermore, care should be taken to cover the range of gray values without gaps by choosing appropriate image
contents. Whether there are gaps in the range of gray values can easily be checked based on the 1D gray value
histograms of the images or the 2D gray value histograms of consecutive images. In the 1D gray value histograms
(see gray_histo_abs), there should be no areas between the minimum and maximum gray value that have a
frequency of 0 or a very small frequency. In the 2D gray value histograms (see histo_2dim), a single connected
region having the shape of a “strip” should result from a threshold operation with a lower threshold of 1. If more
than one connected component results, a more suitable image content should be chosen. If the image content can
be chosen such that the gray value range of the image (e.g., 0-255 for byte images) can be covered with two images
with different exposures, and if there are no gaps in the histograms, the two images suffice for the calibration. This,
however, is typically not the case, and hence multiple images must be used to cover the entire gray value range.
As described above, for this multiple images with different exposures must be taken to cover the entire gray value
range as well as possible. For this, normally the first image should be exposed such that the maximum gray value
is slightly below the saturation limit of the camera, or such that the image is significantly overexposed. If the first
image is overexposed, a significant overexposure is necessary to enable radiometric_self_calibration
to detect the overexposed areas reliably. If the camera exhibits an unusual saturation behavior (e.g., a saturation
limit that lies significantly below the maximum gray value) the overexposed areas should be masked out by hand
with reduce_domain in the overexposed image.
radiometric_self_calibration returns the inverse gray value response function of the camera in
InverseResponse. The inverse response function can be used to create an image with a linear response by
using InverseResponse as the LUT in lut_trans. The parameter FunctionType determines which
function model is used to model the response function. For FunctionType = ’discrete’, the response func-
tion is described by a discrete function with the relevant number of gray values (256 for byte images). For
FunctionType = ’polynomial’, the response is described by a polynomial of degree PolynomialDegree.
The computation of the response function is slower for FunctionType = ’discrete’. However, since a poly-
nomial tends to oscillate in the areas in which no gray value information can be derived, even if smoothness
constraints are imposed as described below, the discrete model should usually be preferred over the polynomial
model.
The parameter Smoothness defines (in addition to the constraints on the response function that can be de-
rived from the images) constraints on the smoothness of the response function. If, as described above, the gray
value range can be covered completely and without gaps, the default value of 1 should not be changed. Other-
wise, values > 1 can be used to obtain a stronger smoothing of the response function, while values < 1 lead
to a weaker smoothing. The smoothing is particularly important in areas for which no gray value information
can be derived from the images, i.e., in gaps in the histograms and for gray values smaller than the minimum
gray value of all images or larger than the maximum gray value of all images. In these areas, the smoothness
constraints lead to an interpolation or extrapolation of the response function. Because of the nature of the inter-
nally derived constraints, FunctionType = ’discrete’ favors an exponential function in the undefined areas,
whereas FunctionType = ’polynomial’ favors a straight line. Please note that the interpolation and extrapo-
lation is always less reliable than to cover the gray value range completely and without gaps. Therefore, in any
case it should be attempted first to acquire the images optimally, before the smoothness constraints are used to
fill in the remaining gaps. In all cases, the response function should be checked for plausibility after the call to
radiometric_self_calibration. In particular, it should be checked whether InverseResponse is
monotonic. If this is not the case, a more suitable scene should be used to avoid interpolation, or Smoothness
should be set to a larger value. For FunctionType = ’polynomial’, it may also be necessary to change
PolynomialDegree. If, despite these changes, an implausible response is returned, the saturation behavior
of the camera should be checked, e.g., based on the 2D gray value histogram, and the saturated areas should be
masked out by hand, as described above.
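A simple plausibility (monotonicity) check on the returned response might look like this (a minimal sketch):

* differences of consecutive entries of the inverse response function
Diffs := InverseResponse[1:|InverseResponse|-1] - InverseResponse[0:|InverseResponse|-2]
if (min(Diffs) < 0)
    * the inverse response is not monotonic: use a more suitable scene,
    * increase Smoothness, or adapt PolynomialDegree
    stop ()
endif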
When the inverse gray value response function of the camera is determined, the absolute energy falling on the
camera cannot be determined. This means that InverseResponse can only be determined up to a scale factor.
Therefore, an additional constraint is used to fix the unknown scale factor: the maximum gray value that can occur
should occur for the maximum input gray value, e.g., InverseResponse[255] = 255 for byte images. This
constraint usually leads to the most intuitive results. If, however, a multichannel image (typically an RGB image)
should be radiometrically calibrated (for this, each channel must be calibrated separately), the above constraint
may lead to the result that a different scaling factor is determined for each channel. This may lead to the result that


gray tones no longer appear gray after the correction. In this case, a manual white balancing step must be carried
out by identifying a homogeneous gray area in the original image, and by deriving appropriate scaling factors from
the corrected gray values for two of the three response curves (or, in general, for n − 1 of the n channels). Here,
the response curve that remains invariant should be chosen such that all scaling factors are < 1. With the scaling
factors thus determined, new response functions should be calculated by multiplying each value of a response
function with the scaling factor corresponding to that response function.
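For a multichannel (RGB) image sequence, the per-channel calibration described above might be sketched as follows (the manual white-balance scaling step is omitted; all variable names are assumptions):

* split the RGB calibration sequence into its three channels
decompose3 (Images, ImagesR, ImagesG, ImagesB)
* calibrate each channel separately
radiometric_self_calibration (ImagesR, ExposureRatios, ’2d_histogram’,
                              ’discrete’, 1, 5, InverseResponseR)
radiometric_self_calibration (ImagesG, ExposureRatios, ’2d_histogram’,
                              ’discrete’, 1, 5, InverseResponseG)
radiometric_self_calibration (ImagesB, ExposureRatios, ’2d_histogram’,
                              ’discrete’, 1, 5, InverseResponseB)
* correct a newly acquired RGB image channel by channel
decompose3 (Image, ImageR, ImageG, ImageB)
lut_trans (ImageR, ImageRLin, InverseResponseR)
lut_trans (ImageG, ImageGLin, InverseResponseG)
lut_trans (ImageB, ImageBLin, InverseResponseB)
compose3 (ImageRLin, ImageGLin, ImageBLin, ImageLinear)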
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image-array ; Hobject : byte / uint2
Input images.
. ExposureRatios (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Ratio of the exposure energies of successive image pairs.
Default Value : 0.5
Suggested values : ExposureRatios ∈ {0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}
Restriction : (ExposureRatios > 0) ∧ (ExposureRatios < 1)
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Features that are used to compute the inverse response function of the camera.
Default Value : "2d_histogram"
List of values : Features ∈ {"2d_histogram", "1d_histograms"}
. FunctionType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of the inverse response function of the camera.
Default Value : "discrete"
List of values : FunctionType ∈ {"discrete", "polynomial"}
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Smoothness of the inverse response function of the camera.
Default Value : 1.0
Suggested values : Smoothness ∈ {0.3, 0.5, 0.7, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Smoothness > 0
. PolynomialDegree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Degree of the polynomial if FunctionType = ’polynomial’.
Default Value : 5
Suggested values : PolynomialDegree ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : (PolynomialDegree ≥ 1) ∧ (PolynomialDegree ≤ 20)
. InverseResponse (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Inverse response function of the camera.
Example (Syntax: HDevelop)

open_framegrabber (’FirePackage’, 1, 1, 0, 0, 0, 0, ’default’, -1,
                   ’default’, -1, ’default’, ’default’, ’default’,
                   -1, -1, FGHandle)
* Define appropriate shutter times.
Shutters := [1000,750,500,250,125]
Num := |Shutters|
* Grab and accumulate images with the different exposures. In this
* loop, it must be ensured that the scene remains static.
gen_empty_obj (Images)
for I := 0 to Num-1 by 1
set_framegrabber_param (FGHandle, ’shutter’, Shutters[I])
grab_image (Image, FGHandle)
concat_obj (Images, Image, Images)
endfor
* Compute the exposure ratios from the shutter times.
ExposureRatios := real(Shutters[1:Num-1])/real(Shutters[0:Num-2])
radiometric_self_calibration (Images, ExposureRatios, ’2d_histogram’,
’discrete’, 1, 5, InverseResponse)
* Note that if the frame grabber supports hardware LUTs, we could
* also call set_framegrabber_lut here instead of lut_trans below.
* This would be more efficient.


while (1)
grab_image_async (Image, FGHandle, -1)
lut_trans (Image, ImageLinear, InverseResponse)
* Process radiometrically correct image.
[...]
endwhile
close_framegrabber (FGHandle)

Result
If the parameters are valid, the operator radiometric_self_calibration returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
radiometric_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
read_image, grab_image, grab_image_async, set_framegrabber_param, concat_obj,
proj_match_points_ransac, projective_trans_image
Possible Successors
lut_trans
See also
histo_2dim, gray_histo, gray_histo_abs, reduce_domain
Module
Calibration

T_read_cam_par ( const Htuple CamParFile, Htuple *CamParam )

Read the interior camera parameters from text file.


read_cam_par is used to read the interior camera parameters CamParam from a text file with name
CamParFile. CamParam is a tuple that contains the interior camera parameters in the following sequence
(see write_cam_par for a description of the corresponding camera models):
For area scan cameras:
[Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
For line scan cameras:
[Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz]
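For illustration, the elements of the returned tuple can be accessed by index in HDevelop, e.g., for an area scan camera (the file name is the default value, the variable names are only examples):

read_cam_par (’campar.dat’, CamParam)
* Unpack the area scan parameters according to the sequence given above.
Focus := CamParam[0]
Kappa := CamParam[1]
Sx := CamParam[2]
Sy := CamParam[3]
Cx := CamParam[4]
Cy := CamParam[5]
ImageWidth := CamParam[6]
ImageHeight := CamParam[7]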
The format of the text file is a (HALCON-independent) generic parameter description. It allows arbitrary sets of
parameters to be grouped hierarchically. The description of a single parameter within a parameter group consists of the
following 3 lines:

Name : Shortname : Actual value ;
Type : Lower bound (optional) : Upper bound (optional) ;
Description (optional) ;

Comments are marked by a ’#’ at the beginning of a line.


read_cam_par expects one of the two following parameter groups in the file CamParFile.
The parameter group Camera:Parameter consists of the 8 parameters Focus, Kappa (κ), Sx, Sy, Cx, Cy,
ImageWidth, and ImageHeight. A suitable file can look like the following:

# INTERNAL CAMERA PARAMETERS

ParGroup: Camera: Parameter;
"Internal camera parameters";

Focus:foc: 0.00806039;
DOUBLE:0.0:;
"Focal length of the lens [meter]";

Kappa:kappa: -2253.5;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";

Sx:sx: 1.0629e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";

Sy:sy: 1.1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";

Cx:cx: 378.236;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";

Cy:cy: 297.587;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";

ImageWidth:imgw: 768;
INT:1:32767;
"Width of the used calibration images [pixel]";

ImageHeight:imgh: 576;
INT:1:32767;
"Height of the used calibration images [pixel]";

In addition to the 8 parameters of the parameter group Camera:Parameter, the parameter group LinescanCamera:
Parameter contains 3 parameters that describe the motion of the camera with respect to the object. With this,
the parameter group LinescanCamera:Parameter consists of the 11 parameters Focus, Kappa (κ), Sx, Sy, Cx, Cy,
ImageWidth, ImageHeight, Vx, Vy, and Vz. A suitable file can look like the following:

# INTERNAL CAMERA PARAMETERS

ParGroup: LinescanCamera: Parameter;
"Internal camera parameters";

Focus:foc: 0.061;
DOUBLE:0.0:;
"Focal length of the lens [meter]";

Kappa:kappa: -16.9761;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";

Sx:sx: 1.06903e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";

Sy:sy: 1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";

Cx:cx: 930.625;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";

Cy:cy: 149.962;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";

ImageWidth:imgw: 2048;
INT:1:32767;
"Width of the used calibration images [pixel]";

ImageHeight:imgh: 3840;
INT:1:32767;
"Height of the used calibration images [pixel]";

Vx:vx: 1.41376e-06;
DOUBLE::;
"X-component of the motion vector [meter/scanline]";

Vy:vy: 5.45756e-05;
DOUBLE::;
"Y-component of the motion vector [meter/scanline]";

Vz:vz: 3.45872e-06;
DOUBLE::;
"Z-component of the motion vector [meter/scanline]";

Parameter
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of interior camera parameters.
Default Value : "campar.dat"
List of values : CamParFile ∈ {"campar.dat", "campar.initial", "campar.final"}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
Example (Syntax: HDevelop)

* get interior camera parameters:
read_cam_par(’campar.dat’, CamParam)

Result
read_cam_par returns H_MSG_TRUE if all parameter values are correct and the file has been read successfully.
If necessary an exception handling is raised.
Parallelization Information
read_cam_par is reentrant and processed without parallelization.
Possible Successors
find_marks_and_pose, sim_caltab, gen_caltab, disp_caltab, camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation

T_sim_caltab ( Hobject *SimImage, const Htuple CalTabDescrFile,
               const Htuple CamParam, const Htuple CaltabPose,
               const Htuple GrayBackground, const Htuple GrayCaltab,
               const Htuple GrayMarks, const Htuple ScaleFac )

Simulate an image with calibration plate.

sim_caltab is used to generate a simulated calibration image. The calibration plate description is read from the
file CalTabDescrFile and will be projected into the image plane using the given camera parameters (interior
camera parameters CamParam and exterior camera parameters CaltabPose), see also project_3d_point.
In the simulated image only the calibration plate is shown. The image background is set to the gray value
GrayBackground, the calibration plate background is set to GrayCaltab, and the calibration marks are set
to the gray value GrayMarks. The parameter ScaleFac influences the number of supporting points to approxi-
mate the elliptic contours of the calibration marks, see also disp_caltab. Increasing the number of supporting
points causes a more accurate determination of the mark boundary, but increases the computation time, too. For
each pixel of the simulated image that touches such a subpixel boundary, the gray value is set linearly
between GrayMarks and GrayCaltab depending on the proportion Inside/Outside.
By applying the operator sim_caltab you can generate synthetic calibration images (with known camera pa-
rameters!) to test the quality of the calibration algorithm (see camera_calibration).
Parameter
. SimImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Simulated calibration image.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CaltabPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Exterior camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements : 7
. GrayBackground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of image background.
Default Value : 128
Suggested values : GrayBackground ∈ {0, 32, 64, 96, 128, 160}
Restriction : 0 ≤ GrayBackground ≤ 255
. GrayCaltab (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of calibration plate.
Default Value : 224
Suggested values : GrayCaltab ∈ {144, 160, 176, 192, 208, 224, 240}
Restriction : 0 ≤ GrayCaltab ≤ 255
. GrayMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of calibration marks.
Default Value : 80
Suggested values : GrayMarks ∈ {16, 32, 48, 64, 80, 96, 112}
Restriction : 0 ≤ GrayMarks ≤ 255
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Scaling factor to reduce oversampling.
Default Value : 1.0
Suggested values : ScaleFac ∈ {1.0, 0.5, 0.25, 0.125}
Recommended Increment : 0.05
Restriction : 1.0 ≥ ScaleFac
Example (Syntax: HDevelop)

* read calibration image
read_image(Image1, ’calib-01’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
* find calibration marks and initial pose
StartCamPar := [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
                    128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
                    StartPose1)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, RCoord1, CCoord1, StartCamPar,
StartPose1, 11, CamParam, FinalPose, Errors)
* simulate calibration image
sim_caltab(Image1Sim, ’caltab.descr’, CamParam, FinalPose, 128, 224, 80, 1)

Result
sim_caltab returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
sim_caltab is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, find_marks_and_pose, read_pose, read_cam_par,
hom_mat3d_to_pose
Possible Successors
find_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, create_pose,
hom_mat3d_to_pose, project_3d_point, gen_caltab
Module
Calibration

T_stationary_camera_self_calibration ( const Htuple NumImages,
    const Htuple ImageWidth, const Htuple ImageHeight,
    const Htuple ReferenceImage, const Htuple MappingSource,
    const Htuple MappingDest, const Htuple HomMatrices2D,
    const Htuple Rows1, const Htuple Cols1, const Htuple Rows2,
    const Htuple Cols2, const Htuple NumCorrespondences,
    const Htuple EstimationMethod, const Htuple CameraModel,
    const Htuple FixedCameraParams, Htuple *CameraMatrices, Htuple *Kappa,
    Htuple *RotationMatrices, Htuple *X, Htuple *Y, Htuple *Z,
    Htuple *Error )

Perform a self-calibration of a stationary projective camera.


stationary_camera_self_calibration performs a self-calibration of a stationary projective camera.
Here, stationary means that the camera may only rotate around the optical center and may zoom. Hence, the
optical center may not move. Projective means that the camera model is a pinhole camera that can be described by
a projective 3D-2D transformation. In particular, radial distortions can only be modeled for cameras with constant
parameters. If the lens exhibits significant radial distortions they should be removed, at least approximately, with
change_radial_distortion_image.
The camera model being used can be described as follows:

x = PX .

Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:

P = K(R|t) .

Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in

camera_calibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix), which can be described as
follows:

         / a·f  s·f  u \
    K =  |  0    f   v |
         \  0    0   1 /
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:

x = KRX .

If two images of the same point are taken with a stationary camera, the following equations hold:

x1 = K1 R1 X
x2 = K2 R2 X

and consequently

    x2 = K2 R2 R1^-1 K1^-1 x1 = K2 R12 K1^-1 x1 = H12 x1 .

If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above, the
two images of the same 3D point are related by a projective 2D transformation. This transformation can be deter-
mined with proj_match_points_ransac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs
to be taken into account that proj_match_points_ransac uses a coordinate system in which the origin
of a pixel lies in the upper left corner of the pixel, whereas stationary_camera_self_calibration
uses a coordinate system that corresponds to the definition used in camera_calibration, in which the
origin of a pixel lies in the center of the pixel. For projective 2D transformations that are determined with
proj_match_points_ransac the rows and columns must be exchanged and a translation of (0.5, 0.5) must
be applied. Hence, instead of H12 = K2 R12 K1^-1 the following equations hold in HALCON:

          / 0  1   0.5 \                  / 0  1  −0.5 \
    H12 = | 1  0   0.5 |  K2 R12 K1^-1    | 1  0  −0.5 |
          \ 0  0    1  /                  \ 0  0    1  /

and

                     / 0  1  −0.5 \          / 0  1   0.5 \
    K2 R12 K1^-1  =  | 1  0  −0.5 |   H12    | 1  0   0.5 |
                     \ 0  0    1  /          \ 0  0    1  /

From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let Hij be the projective transformation from image i to image j. Then,

    Kj Kj^T = Hij Ki Ki^T Hij^T

    Kj^-T Kj^-1 = Hij^-T Ki^-T Ki^-1 Hij^-1
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
EstimationMethod = ’linear’. Here, all source images i given by MappingSource and all destination
images j given by MappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation Rij = Kj^-1 Hij Ki and by constructing a chain of transformations from the
reference image ReferenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:

    E = Σ_{(i,j)∈{(s,d)}} ‖ Kj Kj^T − Hij Ki Ki^T Hij^T ‖_F²

Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by MappingSource
and MappingDest. This method is used for EstimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D
point lie close to each other in all images. Therefore, stationary_camera_self_calibration offers
a complete bundle adjustment as a third method (EstimationMethod = ’gold_standard’). Here, the camera
parameters and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization by minimizing the following error:

    E = Σ_{i=1..n} [ Σ_{j=1..m} ‖ xij − Ki Ri Xj ‖² + (1/σ²) (ui² + vi²) ]

In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because the bundle adjustment yields significantly better results,
it should be preferred.
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
CameraModel which contains a tuple of values. CameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If CameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (ImageWidth/2, ImageHeight/2).
If CameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If CameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
Additionally, it is possible to determine the parameter Kappa which models radial lens distortions, if
EstimationMethod = ’gold_standard’ has been selected and the camera parameters are assumed constant.
In this case, ’kappa’ can also be included in the parameter CameraModel.
When using EstimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimates that lie far away from the image center. This can be done by appending a sigma to the parameter value,
e.g., ’principal_point:0.5’. If no sigma is given, the penalty term in the above error equation is omitted.
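For example, the following CameraModel tuple (a sketch of the syntax described above) requests the focal length, the aspect ratio, and a principal point whose distance from the image center is penalized with a sigma of 0.5:

* Sigma appended to ’principal_point’ as described above.
CameraModel := [’focus’,’principal_point:0.5’,’aspect’]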
The parameter FixedCameraParams determines whether the camera parameters can change in each im-
age or whether they should be assumed constant for all images. To calibrate a camera so that it can
later be used for measuring with the calibrated camera, only FixedCameraParams = ’true’ is use-
ful. The mode FixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with
gen_spherical_mosaic if the camera zoomed or if the focus changed significantly when the mosaic images
were taken. If a mosaic with constant camera parameters should be computed, of course FixedCameraParams
= ’true’ should be used. It should be noted that for FixedCameraParams = ’false’ the camera calibration
problem is very poorly determined, especially for long focal lengths. In these cases, often only the focal length can
be determined. Therefore, it may be necessary to use CameraModel = ’focus’ or to constrain the position of the
principal point by using a small Sigma for the penalty term for the principal point.

HALCON/C Reference Manual, 2008-5-13


15.5. CALIBRATION 1151

The number of images that are used for the calibration is passed in NumImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to CameraModel. If FixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
CameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with proj_match_points_ransac. For example, for a 2×2 block of images in
the following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2,
1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are
given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of
proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable
stationary_camera_self_calibration to determine which point pair belongs to which image pair,
NumCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in CameraMatrices as 3 × 3 matrices. For
FixedCameraParams = ’false’, NumImages matrices are returned. Since for FixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in RotationMatrices as 3 × 3 matrices. RotationMatrices always contains NumImages
matrices.
If EstimationMethod = ’gold_standard’ is used, (X, Y, Z) contains the reconstructed directions Xj . In ad-
dition, Error contains the average projection error of the reconstructed directions. This can be used to check
whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into image i, the respective
camera matrix should be multiplied by the corresponding rotation matrix (with hom_mat2d_compose).
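For illustration, a brief HDevelop sketch of this composition, under the assumption that each 3 × 3 matrix is stored as 9 consecutive tuple elements and that FixedCameraParams = ’true’ was used (so CameraMatrices contains a single camera matrix); the index I and the name ProjMatI are only examples:

* Select the rotation matrix of image I and compose it with the camera matrix.
RotMatI := RotationMatrices[I*9:I*9+8]
hom_mat2d_compose (CameraMatrices, RotMatI, ProjMatI)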
Parameter

. NumImages (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of different images that are used for the calibration.
Restriction : NumImages ≥ 2
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the images from which the points were extracted.
Restriction : ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the images from which the points were extracted.
Restriction : ImageHeight > 0
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Index of the reference image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 projective transformation matrices.

. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective source images.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective source images.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective destination images.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective destination images.
. NumCorrespondences (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of point correspondences in the respective image pair.
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm for the calibration.
Default Value : "gold_standard"
List of values : EstimationMethod ∈ {"linear", "nonlinear", "gold_standard"}
. CameraModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
Camera model to be used.
Default Value : ["focus","principal_point"]
List of values : CameraModel ∈ {"focus", "aspect", "skew", "principal_point", "kappa"}
. FixedCameraParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Are the camera parameters identical for all images?
Default Value : "true"
List of values : FixedCameraParams ∈ {"true", "false"}
. CameraMatrices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
(Array of) 3 × 3 projective camera matrices that determine the interior camera parameters.
. Kappa (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Radial distortion of the camera.
. RotationMatrices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x-array ; Htuple . double *
X-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y-array ; Htuple . double *
Y-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z-array ; Htuple . double *
Z-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Average error per reconstructed point if EstimationMethod = ’gold_standard’ is used.
Example (Syntax: HDevelop)

* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)

proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
                          ’ncc’, 10, 0, 0, 480, 640, 0, 0.5,
                          ’gold_standard’, 2, 42, HomMat2D,
                          Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
stationary_camera_self_calibration (4, 640, 480, 1, From, To,
HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches,
’gold_standard’,
[’focus’,’principal_point’],
’true’, CameraMatrix, Kappa,
RotationMatrices, X, Y, Z, Error)

Result
If the parameters are valid, the operator stationary_camera_self_calibration returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
stationary_camera_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_spherical_mosaic
See also
gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Module
Calibration

T_write_cam_par ( const Htuple CamParam, const Htuple CamParFile )

Write the interior camera parameters to text file.


write_cam_par is used to write the interior camera parameters CamParam to a text file with name
CamParFile. CamParam is a tuple that contains the interior camera parameters in one of the two following sequences:
For area scan cameras:
[Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
For line scan cameras:
[Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz]
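A minimal HDevelop sketch with made-up values, assembling the tuple for an area scan camera in the sequence given above and writing it to the default file name:

* [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight] (values are examples only)
CamParam := [0.008,-2000.0,1.0e-5,1.0e-5,320,240,640,480]
write_cam_par (CamParam, ’campar.dat’)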
The interior camera parameters describe the projection process of the used combination of camera, lens, and frame
grabber; they can be determined by calibrating the camera, see camera_calibration.
For the modeling of this projection process which is determined by the used combination of camera, lens, and
frame grabber, HALCON provides the following three 3D camera models:

• Area scan pinhole camera:
  The combination of an area scan camera with a lens that effects a perspective projection and that may show
  radial distortions.
• Area scan telecentric camera:
  The combination of an area scan camera with a telecentric lens that effects a parallel projection and that may
  show radial distortions.
• Line scan pinhole camera:
  The combination of a line scan camera with a lens that effects a perspective projection and that may show
  radial distortions.

For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
 
    pc = (x, y, z)^T

    u = Focus · x / z    and    v = Focus · y / z
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
 
    pc = (x, y, z)^T

    u = x    and    v = y

The following equations compensate for radial distortion:

    ũ = 2u / (1 + √(1 − 4κ(u² + v²)))    and    ṽ = 2v / (1 + √(1 − 4κ(u² + v²)))

Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:

    c = ũ / Sx + Cx    and    r = ṽ / Sy + Cy
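A minimal numeric HDevelop sketch of this projection chain for an area scan pinhole camera, ignoring radial distortion (κ = 0); all values are made up for illustration:

Focus := 0.008
Sx := 1.0e-5
Sy := 1.0e-5
Cx := 320
Cy := 240
* Point in the camera coordinate system [meter].
X := 0.01
Y := 0.02
Z := 0.5
U := Focus * X / Z
V := Focus * Y / Z
* Yields Column = 336 and Row = 272 for these values.
Column := U / Sx + Cx
Row := V / Sy + Cy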

For line scan cameras, also the relative motion between the camera and the object must be modeled. In HALCON,
the following assumptions for this motion are made:

1. the camera moves with constant velocity along a straight line
2. the orientation of the camera is constant
3. the motion is equal for all images

The motion is described by the motion vector V = (Vx, Vy, Vz)^T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, also the camera coordinate system moves
relatively to the object, i.e., each image line has been imaged from a different position. This means, there would
be an individual pose for each image line. To make things easier, in HALCON, all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion

V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming


    pc = (x, y, z)^T ,

the following set of equations must be solved for m, ũ, and t:

m · D · ũ = x − t · Vx
−m · D · pv = y − t · Vy
m · Focus = z − t · Vz

with

    D = 1 / (1 + κ(ũ² + (pv)²))
    pv = Sy · Cy

This already includes the compensation for radial distortions.


Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:

    c = ũ / Sx + Cx    and    r = t

The format of the text file CamParFile is a (HALCON-independent) generic parameter description. It allows
arbitrary sets of parameters to be grouped hierarchically. The description of a single parameter within a parameter group
consists of the following 3 lines:

Name : Shortname : Actual value ;
Type : Lower bound (optional) : Upper bound (optional) ;
Description (optional) ;

Depending on the number of elements of CamParam, the parameter groups Camera:Parameter or LinescanCam-
era:Parameter, respectively, are written into the text file CamParFile (see read_cam_par for an example).
The parameter group Camera:Parameter consists of the 8 interior camera parameters of the area scan camera. The
parameter group LinescanCamera:Parameter consists of the 11 interior camera parameters of the line scan camera.
Parameter
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; Htuple . const char *
File name of interior camera parameters.
Default Value : "campar.dat"
List of values : CamParFile ∈ {"campar.dat", "campar.initial", "campar.final"}
Example (Syntax: HDevelop)

* read calibration images
read_image(Image1, ’calib-01’)
read_image(Image2, ’calib-02’)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3], StartCamPar,
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write interior camera parameters to file
write_cam_par(CamParam, ’campar.dat’)

Result
write_cam_par returns H_MSG_TRUE if all parameter values are correct and the file has been written suc-
cessfully. If necessary an exception handling is raised.
Parallelization Information
write_cam_par is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation

15.6 Datacode
clear_all_data_code_2d_models ( )
T_clear_all_data_code_2d_models ( )

Delete all 2D data code models and free the allocated memory
The operator clear_all_data_code_2d_models deletes all 2D data code models that were created by
create_data_code_2d_model or read_data_code_2d_model. All memory used by the models is
freed. After the operator call all 2D data code handles are invalid.
Attention
clear_all_data_code_2d_models exists solely for the purpose of implementing the “reset program”
functionality in HDevelop. clear_all_data_code_2d_models must not be used in any application.
Result
The operator clear_all_data_code_2d_models returns the value H_MSG_TRUE if all 2D data code
models were freed correctly. Otherwise, an exception will be raised.

Parallelization Information
clear_all_data_code_2d_models is processed completely exclusively without parallelization.
Alternatives
clear_data_code_2d_model
See also
create_data_code_2d_model, read_data_code_2d_model
Module
Data Code

clear_data_code_2d_model ( Hlong DataCodeHandle )


T_clear_data_code_2d_model ( const Htuple DataCodeHandle )

Delete a 2D data code model and free the allocated memory


The operator clear_data_code_2d_model deletes a 2D data code model that was created by
create_data_code_2d_model or read_data_code_2d_model. All memory used by the model is
freed. The handle of the model is passed in DataCodeHandle. After the operator call it is invalid.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Hlong
Handle of the 2D data code model.
Result
The operator clear_data_code_2d_model returns the value H_MSG_TRUE if a valid handle was passed
and the referred 2D data code model can be freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_data_code_2d_model is processed completely exclusively without parallelization.
Alternatives
clear_all_data_code_2d_models
See also
create_data_code_2d_model, read_data_code_2d_model
Module
Data Code

create_data_code_2d_model ( const char *SymbolType,
                            const char *GenParamNames, const char *GenParamValues,
                            Hlong *DataCodeHandle )

T_create_data_code_2d_model ( const Htuple SymbolType,
                              const Htuple GenParamNames, const Htuple GenParamValues,
                              Htuple *DataCodeHandle )

Create a model of a 2D data code class.


The operator create_data_code_2d_model creates a model for a certain class of 2D data codes. In
DataCodeHandle the operator returns a handle to the 2D data code model, which is used for all further op-
erations on the data code, like modifying the model, reading a symbol, or accessing the results of the symbol
search.
Supported symbol types
The parameter SymbolType is used to determine the type of data codes to process. Presently, three types are
supported: ’Data Matrix ECC 200’, ’QR Code’, and ’PDF417’. Data matrix codes of type ECC 000-140 are not
supported. For the QR Code the older Model 1 as well as the new Model 2 can be read. The PDF417 can be read
in its conventional as well as in its compact form (’Compact/Truncated PDF417’).
For all symbol types, the data code reader supports the Extended Channel Interpretation (ECI) protocol. If the
symbol contains an ECI code, all characters with ASCII code 92 (backslash, ’\’) that occur in the normal data

stream are, in compliance with the standard, doubled (’\\’) for the output. This is necessary in order to distinguish
data backslashes from the ECI sequence ’\nnnnnn’.
The information whether the symbol contains ECI codes (and consequently doubled backslashes) or not is stored
in the Symbology Identifier number that can be obtained for every successfully decoded symbol with the help of
the operator get_data_code_2d_results passing the generic parameter ’symbology_ident’. How the code
number encodes additional information about the symbology and the data code reader, like the ECI support, is
defined in the different symbology specifications. For more information see the appropriate standards and the
operator get_data_code_2d_results.
The Symbology Identifier code is not prepended to the output data by the data code reader, even if the symbol
contains an ECI code. If this is needed, e.g., by a subsequent processing unit, the ’symbology_ident’ number
(obtained by the operator get_data_code_2d_results with parameter ’symbology_ident’) can be added to
the data stream manually together with the symbology flag and the symbol code: ’]d’, ’]Q’, or ’]L’ for DataMatrix
codes, QR codes, or PDF417 codes, respectively.
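As an illustrative HDevelop sketch (assuming a successfully decoded ECC 200 symbol whose handle is the first element of ResultHandles), such a prefixed string could be built like this:

* Query the Symbology Identifier number and prepend flag and code manually.
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          ’symbology_ident’, SymbologyIdent)
FullString := ’]d’ + SymbologyIdent + DecodedDataStrings[0]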
Standard default settings of the data code model
The default settings of the model were chosen to read a wide range of common symbols within a reasonable
amount of time. However, for run-time reasons some restrictions apply to the symbol (see the following table).
If the model was modified (as described later), it is at any time possible to reset it to these default settings by
passing the generic parameter ’default_parameters’ together with the value ’standard_recognition’ to the operator
set_data_code_2d_param.

Model parameter                    ’standard_recognition’                ’enhanced_recognition’

Polarity:                          dark symbols on a light background    in addition, light symbols on a
                                                                         dark background
Minimum contrast:                  30                                    10
Module size:
  ECC 200, QR Code:                6 ... 20 pixels                       ≥ 4 pixels (for sharp images ≥ 2)
  PDF417:
    Width:                         3 ... 15 pixels                       ≥ 3 pixels (for sharp images ≥ 2)
    Aspect ratio:                  1 ... 4                               1.0 ... 10
Module shape:                      no or small gap between adjacent      bigger gaps are also possible (up to
                                   modules (< 10% of the module size)    50% of the module size) (only for
                                                                         ECC 200, QR Code)
ECC200: Maximum slant:             10° (0.1745)                          30° (0.5235)
ECC200: Module grid:               fixed                                 any (fixed or variable)
For QR Code: Number of position    3                                     2
  detection patterns that are
  necessary for generating a
  new candidate:

Modify the data code model


If it is known that the symbol does not or may not comply with all of these restrictions (e.g., the symbol is brighter
than the background or the contrast is very low), or if first tests show that some of the symbols cannot be read
with the default settings, it is possible to adapt single model parameters – while others are kept to the default –
or the whole model can be extended in a single step by setting the generic parameter ’default_parameters’ to the
value ’enhanced_recognition’. This will lead to a more general model that covers a wider range of 2D data code
symbols. However, the symbol search with such a general model is more extensive, hence the run-time of the
operator find_data_code_2d may increase significantly. This is true especially in the following cases: no
readable data code is detected, the symbol is printed light on dark, or the modules are very small.
For this reason, the model should always be specified as exactly as possible by setting all known parameters.
The model parameters can be set directly during the creation of the model or later with the help of the oper-
ator set_data_code_2d_param. Both operators provide the generic parameters GenParamNames and
GenParamValues for this purpose. A detailed description of all supported generic parameters can be found
with the operator set_data_code_2d_param.
Another way for adapting the model is to train it based on sample images. Passing the parameter ’train’ to
the operator find_data_code_2d will cause the find operator to look for a symbol, determine its pa-
rameters, and modify the model accordingly. More details can be found with the description of the operator
find_data_code_2d.
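A brief sketch of such a selective adaptation (the parameter values are only examples; see set_data_code_2d_param for the full list of parameters):

* Adapt only the parameters that are known to differ from the defaults.
set_data_code_2d_param (DataCodeHandle, [’polarity’,’contrast_min’],
                        [’light_on_dark’,10])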

It is possible to query the model parameters with the operator get_data_code_2d_param. The
names of all supported parameters for setting or querying the model are returned by the operator
query_data_code_2d_params.
Store the data code model
Furthermore, the operator write_data_code_2d_model allows to write the model into a file that can be
used later to create (e.g., in a different application) an identical copy of the model. Such a model copy is created
directly by read_data_code_2d_model (without calling create_data_code_2d_model).
Free the data code model
Since memory is allocated during create_data_code_2d_model and the following operations, the model
should be freed explicitly by the operator clear_data_code_2d_model if it is no longer used.
Parameter
. SymbolType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of the 2D data code.
Default Value : "Data Matrix ECC 200"
List of values : SymbolType ∈ {"Data Matrix ECC 200", "QR Code", "PDF417"}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the 2D data code model.
Default Value : []
List of values : GenParamNames ∈ {"default_parameters", "strict_model", "persistence", "polarity",
"mirrored", "contrast_min", "model_type", "version", "version_min", "version_max", "symbol_size",
"symbol_size_min", "symbol_size_max", "symbol_cols", "symbol_cols_min", "symbol_cols_max",
"symbol_rows", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size",
"module_size_min", "module_size_max", "module_width", "module_width_min", "module_width_max",
"module_aspect", "module_aspect_min", "module_aspect_max", "module_gap", "module_gap_min",
"module_gap_max", "module_gap_col", "module_gap_col_min", "module_gap_col_max",
"module_gap_row", "module_gap_row_min", "module_gap_row_max", "slant_max", "module_grid",
"position_pattern_min"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) const char * / Hlong / double
Values of the generic parameters that can be adjusted for the 2D data code model.
Default Value : []
Suggested values : GenParamValues ∈ {"standard_recognition", "enhanced_recognition", "yes", "no",
"any", "dark_on_light", "light_on_dark", "square", "rectangle", "small", "big", "fixed", "variable", 0, 1, 2, 3, 4,
5, 6, 7, 8, 10, 30, 50, 70, 90, 12, 14, 16, 18, 20, 22, 24, 26, 32, 36, 40, 44, 48, 52, 64, 72, 80, 88, 96, 104, 120,
132, 144}
. DataCodeHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong *
Handle for using and accessing the 2D data code model.
Example (Syntax: HDevelop)

* Two simple examples that show the use of create_data_code_2d_model
* to detect a Data Matrix ECC 200 code and a QR Code.

* (1) Create a model for reading simple QR Codes
* (only dark symbols on a light background will be read)
create_data_code_2d_model (’QR Code’, [], [], DataCodeHandle)
* Read an image
read_image (Image, ’datacode/qrcode/qr_workpiece_01’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)

* (2) Create a model for reading a wide range of Data matrix ECC 200 codes
* (this model will also read light symbols on dark background)
create_data_code_2d_model (’Data Matrix ECC 200’, ’default_parameters’,
’enhanced_recognition’, DataCodeHandle)

* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)

Result
The operator create_data_code_2d_model returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
create_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
set_data_code_2d_param, find_data_code_2d
Alternatives
read_data_code_2d_model
See also
clear_data_code_2d_model, clear_all_data_code_2d_models
Module
Data Code

find_data_code_2d ( const Hobject Image, Hobject *SymbolXLDs,
                    Hlong DataCodeHandle, const char *GenParamNames, Hlong GenParamValues,
                    Hlong *ResultHandles, char *DecodedDataStrings )

T_find_data_code_2d ( const Hobject Image, Hobject *SymbolXLDs,
                      const Htuple DataCodeHandle, const Htuple GenParamNames,
                      const Htuple GenParamValues, Htuple *ResultHandles,
                      Htuple *DecodedDataStrings )

Detect and read 2D data code symbols in an image or train the 2D data code model.
The operator find_data_code_2d detects 2D data code symbols in the input image (Image) and reads
the data that is coded in the symbol. Before calling find_data_code_2d, a model of a class of 2D data
codes that matches the symbols in the images must be created with create_data_code_2d_model or
read_data_code_2d_model. The handle returned by these operators is passed to find_data_code_2d
in DataCodeHandle. To look for more than one symbol in an image, the generic parameter
’stop_after_result_num’ can be passed to GenParamNames together with the number of requested symbols as
GenParamValues.
As a result the operator returns for every successfully decoded symbol the surrounding XLD contour
(SymbolXLDs), a result handle, which refers to a candidate structure that stores additional information about
the symbol as well as the search and decoding process (ResultHandles), and the string that is encoded in
the symbol (DecodedDataStrings). If the string is longer than 1024 characters, it is shortened to 1020
characters followed by ’...’. In this case, the complete string can only be accessed with the operator
get_data_code_2d_results: passing the candidate handle from ResultHandles together with the
generic parameter ’decoded_data’, get_data_code_2d_results returns a tuple with the ASCII codes of
all characters of the string.
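A brief HDevelop sketch (the result index 0 is only an example) of how the complete data can be fetched in this case:

* Retrieve the complete decoded data of the first result as ASCII codes.
get_data_code_2d_results (DataCodeHandle, ResultHandles[0], ’decoded_data’,
                          DecodedData)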
Adjusting the model
If there is a symbol in the image that cannot be read, it should be verified whether the properties of the symbol
fit the model parameters. Special attention should be paid to the correct polarity (’polarity’, light-on-dark or dark-
on-light), the symbol size (’symbol_size’ for ECC 200, ’version’ for QR Code, ’symbol_rows’ and ’symbol_cols’
for PDF417), the module size (’module_size’ for ECC 200 and QR Code, ’module_width’ and ’module_aspect’
for PDF417), the possibility of a mirroring of the symbol (’mirrored’), and the specified minimum contrast (’con-
trast_min’). Further relevant parameters are the gap between neighboring foreground modules and, for ECC 200,
the maximum slant of the L-shaped finder pattern (’slant_max’). The current settings for these parameters are

returned by the operator get_data_code_2d_param. If necessary, the appropriate model parameters can be
adjusted with set_data_code_2d_param.
It is recommended to adjust the model as well as possible to the symbols in the images also for run-time reasons.
In general, the run-time of find_data_code_2d is higher for a more general model than for a more specific
model. One should take into account that a general model leads to a high run-time especially if no valid data code
can be found.
Train the model
Besides setting the model parameters manually with set_data_code_2d_param, the model can also be
trained with find_data_code_2d based on one or several sample images. For this the generic parameter
’train’ must be passed in GenParamNames. The corresponding value passed in GenParamValues determines
the model parameters that should be learned. The following values are possible:

• All data code types:


’all’: all model parameters that can be trained,
’symbol_size’: symbol size and for ECC 200 also the symbol shape (rectangle or square); for QR Code it is
also possible to pass ’version’.
’module_size’: size of the modules; for PDF417 this includes the module width and the module aspect ratio.
’polarity’: polarity of the symbols: they may appear dark on a light background or light on a dark back-
ground.
’mirrored’: whether the symbols in the image are mirrored or not.
’contrast’: minimum contrast for detecting the symbols.
’image_proc’: adjusting different internal image processing parameters; until now, only the maximum slant
of the L-shaped finder pattern of the ECC 200 symbols is set; more parameters may follow in future.
• ECC 200 and QR Code only:
’module_shape’: shape of the modules, especially whether there is a gap between neighboring foreground
modules or whether they are connected.
• ECC 200 only:
’module_grid’: algorithm for calculating the module positions (fixed or variable grid).
• QR Code only:
’model_type’: whether the QR Code symbols follow the Model 1 or Model 2 specification.

It is possible to train several of these parameters in one call of find_data_code_2d by passing the generic pa-
rameter ’train’ in a tuple more than once in conjunction with the appropriate parameters: e.g., GenParamNames
= [’train’,’train’] and GenParamValues = [’polarity’,’module_size’]. Furthermore, in conjunction with ’train’
= ’all’ it is possible to exclude single parameters from training explicitly again by passing ’train’ more than once.
The names of the parameters to exclude, however, must be prefixed by ’~’: GenParamNames = [’train’,’train’]
and GenParamValues = [’all’,’~contrast’], e.g., trains all parameters except the minimum contrast.
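A brief HDevelop sketch of such a training sequence (the image file names are placeholders):

* Train a newly created model on a few sample images.
create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
TrainImages := [’sample_01’,’sample_02’,’sample_03’]
for I := 0 to |TrainImages|-1 by 1
    read_image (Image, TrainImages[I])
    find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
                       ResultHandles, DecodedDataStrings)
endfor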
For training the model, the following aspects should be considered:

• To use several images for the training, the operator find_data_code_2d must be called with the param-
eter ’train’ once for every sample image.
• It is also possible to train the model with several symbols in one image. Here, the generic parameter
’stop_after_result_num’ must be passed as a tuple to GenParamNames together with ’train’. The num-
ber of symbols in the image is passed in GenParamValues together with the training parameters.
• If the training image contains more symbols than the one that shall be used for the training the domain of the
image should be reduced to the symbol of interest with reduce_domain.
• In an application with very similar images, one image for training may be sufficient if the following assumptions hold: the symbol size (in modules) is the same for all symbols used in the application; foreground and background modules are of the same size and there is no gap between neighboring foreground modules; the background has no distinct texture; and the contrast of all images is almost the same. Otherwise, several images should be used for training.
• In applications where the symbol size (in modules) is not fixed, the smallest as well as the biggest symbols should be used for the training. If this cannot be guaranteed, the limits for the symbol size should be adapted manually after the training, or the symbol size should be excluded entirely from the training.


• During the first call of find_data_code_2d in the training mode, the trained model parameters are
restricted to the properties of the detected symbol. Any successive training call will, where necessary, extend
the parameter range to cover the already trained symbols as well as the new symbols. Resetting the model with
set_data_code_2d_param to one of its default settings (’default_parameters’ = ’standard_recognition’
or ’enhanced_recognition’) will also reset the training state of the model.
• If find_data_code_2d is not able to read the symbol in the training image, no error is reported and no exception is raised. This can simply be detected in the program by checking the results of find_data_code_2d: SymbolXLDs, ResultHandles, and DecodedDataStrings. These tuples will be empty, and the model will not be modified.
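The following minimal training sketch (HDevelop syntax) combines these points; the tuple TrainImageFiles with the file names of the sample images and the region SymbolRegion that covers only the symbol of interest are placeholders that have to be provided by the application:

* Train the model with several sample images: one ’train’ call per image
create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
for i := 0 to |TrainImageFiles| - 1 by 1
    read_image (Image, TrainImageFiles[i])
    * Restrict the training to the symbol of interest
    * (SymbolRegion is assumed to have been determined beforehand)
    reduce_domain (Image, SymbolRegion, ImageReduced)
    find_data_code_2d (ImageReduced, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
                       ResultHandles, DecodedDataStrings)
endfor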
Functionality of the symbol search
Depending on the current settings of the 2D data code model (see set_data_code_2d_param), the operator
find_data_code_2d performs several passes for searching the data code symbols. The search starts at the
highest pyramid level, where – according to the maximum module size defined in the data code model – the
modules can be separated. In addition, in every pyramid level the preprocessing can vary depending on the presets
for the module gap. If the data code model enables dark symbols on a light background as well as light symbols
on a dark background, within the current pyramid level the dark symbols are searched first. Then the passes for
searching light symbols follow. A pass consists of two phases: the search phase, in which the finder patterns are looked for and a symbol candidate is generated for every detected finder pattern, and the evaluation phase, in which all candidates are investigated on a lower pyramid level and – if possible – read.
The operator call is terminated after the pass in which the requested number of 2D data code symbols was successfully decoded. The required number of symbols can be specified with the generic parameter GenParamNames = ’stop_after_result_num’. The appropriate value is passed in GenParamValues; the default is 1.
While searching for more than one symbol in the image, it may happen that not all symbols are detected in the
same pass. In this case find_data_code_2d automatically continues the search until all symbols are found
or until the last pass was performed. Conversely, if the input image contains several symbols but not all of them have to be read, it is possible (especially if the symbols look similar) that more than the requested number of symbols is returned as a result.
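For illustration (HDevelop syntax), a search that stops as soon as two symbols have been decoded might look as follows:

* Stop the search as soon as two symbols have been decoded successfully
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’stop_after_result_num’, 2,
                   ResultHandles, DecodedDataStrings)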
Query results of the symbol search
With the result handles and the operators get_data_code_2d_results and
get_data_code_2d_objects, additional data can be requested about the search process, e.g., the number
of internal search passes or the number of investigated candidates, and – together with the ResultHandles –
about the symbols, like the symbol and module size, the contrast, or the raw data coded in the symbol. In addition,
these operators provide information about all investigated candidates that could not be read. In particular, this
helps to determine if a candidate was actually generated at the symbol’s position during the preprocessing and – by
the value of a status variable – why the search or reading was aborted. Further information about the parameters
can be found with the operators get_data_code_2d_results and get_data_code_2d_objects.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. SymbolXLDs (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
XLD contours that surround the successfully decoded data code symbols.
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of (optional) parameters for controlling the behavior of the operator.
Default Value : []
List of values : GenParamNames ∈ {"train", "stop_after_result_num"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) Hlong / double / const char *
Values of the optional generic parameters.
Default Value : []
Suggested values : GenParamValues ∈ {"all", "model_type", "symbol_size", "version", "module_size",
"module_shape", "polarity", "mirrored", "contrast", "module_grid", "image_proc", 1, 2, 3}
. ResultHandles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Handles of all successfully decoded 2D data code symbols.
. DecodedDataStrings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Decoded data strings of all detected 2D data code symbols in the image.


Example (Syntax: HDevelop)

* Examples showing the use of find_data_code_2d.


* First, the operator is used to train the model, afterwards it is used to
* read the symbol in another image.

* Create a model for reading Data matrix ECC 200 codes


create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
* Read a training image
read_image (Image, ’datacode/ecc200/ecc200_cpu_008’)
* Train the model with the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
ResultHandles, DecodedDataStrings)
*
* End of training / begin of normal application
*

* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)

* Display all symbols, the strings encoded in them, and the module size
* (a graphics window is assumed to be open; query its handle)
dev_get_window (WindowHandle)
dev_set_color (’green’)
for i := 0 to |ResultHandles| - 1 by 1
SymbolXLD := SymbolXLDs[i+1]
dev_display (SymbolXLD)
get_contour_xld (SymbolXLD, Row, Col)
set_tposition (WindowHandle, max(Row), min(Col))
write_string (WindowHandle, DecodedDataStrings[i])
get_data_code_2d_results (DataCodeHandle, ResultHandles[i],
[’module_height’,’module_width’], ModuleSize)
new_line (WindowHandle)
write_string (WindowHandle, ’module size = ’ + ModuleSize[0] + ’x’ +
ModuleSize[1])
endfor

* Clear the model


clear_data_code_2d_model (DataCodeHandle)

Result
The operator find_data_code_2d returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
find_data_code_2d is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model, set_data_code_2d_param
Possible Successors
get_data_code_2d_results, get_data_code_2d_objects, write_data_code_2d_model
See also
create_data_code_2d_model, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code


get_data_code_2d_objects ( Hobject *DataCodeObjects,


Hlong DataCodeHandle, Hlong CandidateHandle, const char *ObjectName )

T_get_data_code_2d_objects ( Hobject *DataCodeObjects,


const Htuple DataCodeHandle, const Htuple CandidateHandle,
const Htuple ObjectName )

Access iconic objects that were created during the search for 2D data code symbols.
The operator get_data_code_2d_objects provides access to iconic objects that were created during the last call of find_data_code_2d while searching and reading the 2D data code symbols. Be-
sides the name of the object (ObjectName), the 2D data code model (DataCodeHandle) must be passed
to get_data_code_2d_objects. In addition, in CandidateHandle a handle of a result or candi-
date structure or a string identifying a group of candidates (see get_data_code_2d_results) must be
passed. These handles are returned by find_data_code_2d for all successfully decoded symbols and by
get_data_code_2d_results for a group of candidates. If these operators return several handles in a tuple,
the individual handles can be accessed by normal tuple operations.
Some objects are not accessible without setting the model parameter ’persistence’ to 1 (see
set_data_code_2d_param). The persistence must be set before calling find_data_code_2d, either
while creating the model with create_data_code_2d_model or with set_data_code_2d_param.
Currently, the following iconic objects can be retrieved:
Regions of the modules

’module_1_rois’: all modules that were classified as foreground (set).


’module_0_rois’: all modules that were classified as background (not set).

These region arrays correspond to the areas that were used for the classification. The returned object is a region
array. Hence it cannot be requested for a group of candidates. Therefore, a single result handle must be passed in
CandidateHandle. The model persistence must be 1 for this object. In addition, requesting the module ROIs
makes sense only for symbols that were detected as valid symbols. For other candidates, whose processing was
aborted earlier, the module ROIs are not available.
XLD contour

’candidate_xld’: an XLD contour that surrounds the candidate or decoded symbol.

This object can be requested for any group of results or for any single candidate or symbol handle. The persistence
setting is of no relevance.
Pyramid images

’search_image’: pyramid image, in which the candidate was found.


’process_image’: pyramid image, in which the candidate was investigated more closely.

The persistence setting is also not relevant here.
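As a short sketch (HDevelop syntax), assuming a preceding successful call of find_data_code_2d:

* Access the pyramid image in which the first result was processed
get_data_code_2d_objects (ProcessImage, DataCodeHandle, ResultHandles[0],
                          ’process_image’)
* The surrounding XLD contour is available for any candidate or result handle
get_data_code_2d_objects (CandidateXLD, DataCodeHandle, ResultHandles[0],
                          ’candidate_xld’)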


Parameter
. DataCodeObjects (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Objects that are created as intermediate results during the detection or evaluation of 2D data codes.
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Hlong
Handle of the 2D data code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Handle of the 2D data code candidate or name of a group of candidates for which the iconic data is requested.
Default Value : "all_candidates"
Suggested values : CandidateHandle ∈ {0, 1, 2, "all_candidates", "all_results", "all_undecoded",
"all_aborted"}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the iconic object to return.
Default Value : "candidate_xld"
List of values : ObjectName ∈ {"module_1_rois", "module_0_rois", "candidate_xld", "search_image",
"process_image"}


Example (Syntax: HDevelop)

* Example demonstrating how to access the iconic objects of the data code
* search.

* Create a model for reading Data matrix ECC 200 codes


* The model persistence is set to 1 so that the module ROIs queried below
* are available (see above)
create_data_code_2d_model (’Data Matrix ECC 200’, ’persistence’, 1, DataCodeHandle)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)

* Get the handles of all candidates that were detected as a symbol but
* could not be read
get_data_code_2d_results (DataCodeHandle, ’all_undecoded’, ’handle’,
HandlesUndecoded)

* For every undecoded symbol, get the contour and the classified
* module regions
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
dev_set_color (’blue’)
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
’candidate_xld’)
* Get the module regions of the foreground modules
dev_set_color (’green’)
get_data_code_2d_objects (ModuleFG, DataCodeHandle, HandlesUndecoded[i],
’module_1_rois’)
* Get the module regions of the background modules
dev_set_color (’red’)
get_data_code_2d_objects (ModuleBG, DataCodeHandle, HandlesUndecoded[i],
’module_0_rois’)
* Stop for inspecting the image
stop ()
endfor

* Clear the model


clear_data_code_2d_model (DataCodeHandle)

Result
The operator get_data_code_2d_objects returns the value H_MSG_TRUE if the given parameters are
correct and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_objects is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
Possible Successors
get_data_code_2d_results
See also
query_data_code_2d_params, get_data_code_2d_results, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code


get_data_code_2d_param ( Hlong DataCodeHandle,


const char *GenParamNames, char *GenParamValues )

T_get_data_code_2d_param ( const Htuple DataCodeHandle,


const Htuple GenParamNames, Htuple *GenParamValues )

Get one or several parameters that describe the 2D data code model.
The operator get_data_code_2d_param allows you to query the parameters that are used to describe the 2D
data code model. The names of the desired parameters are passed in the generic parameter GenParamNames,
the corresponding values are returned in GenParamValues. All these parameters can be set and changed at any
time with the operator set_data_code_2d_param. A list with the names of all parameters that are valid for
the used 2D data code type is returned by the operator query_data_code_2d_params.
The following parameters can be queried – ordered by different categories and data code types:
Size and shape of the symbol:

• Data matrix ECC 200 (including the finder pattern):


’symbol_cols_min’: minimum number of module columns in the symbol.
’symbol_cols_max’: maximum number of module columns in the symbol.
’symbol_rows_min’: minimum number of module rows in the symbol.
’symbol_rows_max’: maximum number of module rows in the symbol.
’symbol_shape’: possible restrictions concerning the symbol shape (rectangle and/or square): ’square’, ’rectangle’, ’any’. Since HALCON 7.1.1, the same search algorithm is used for both shapes.
• QR Code (including the finder pattern):
’model_type’: type of the QR Code model specification: 1, 2, ’any’
’version_min’: minimum symbol version to be read: [1. . . 40] (Model 1: [1. . . 14])
’version_max’: maximum symbol version to be read: [1. . . 40] (Model 1: [1. . . 14])
’symbol_size_min’: minimum symbol size (this value is directly linked to the version ’version_min’):
[21. . . 177] (Model 1: [21. . . 73])
’symbol_size_max’: maximum symbol size (this value is directly linked to the version ’version_max’):
[21. . . 177] (Model 1: [21. . . 73])
• PDF417:
’symbol_cols_min’: minimum number of data columns in the symbol in codewords, i.e., excluding the code-
words of the start/stop pattern and of the two row indicators.
’symbol_cols_max’: maximum number of data columns in the symbol in codewords, i.e., excluding the
codewords of the start/stop pattern and of the two row indicators.
’symbol_rows_min’: minimum number of module rows in the symbol.
’symbol_rows_max’: maximum number of module rows in the symbol.

Appearance of the modules in the image:

• All data code types:


’polarity’: possible restrictions concerning the polarity of the modules, i.e., if they are printed dark on a light
background or vice versa: ’dark_on_light’, ’light_on_dark’, ’any’.
’mirrored’: describes whether the symbol is or may be mirrored (which is equivalent to swapping the rows
and columns of the symbol): ’yes’, ’no’, ’any’.
’contrast_min’: minimum contrast between the foreground and the background of the symbol (this measure
corresponds to the minimum gradient between the symbol’s foreground and the background).
• Data matrix ECC 200 and QR Code:
’module_size_min’: minimum module size in the image in pixels.
’module_size_max’: maximum module size in the image in pixels.
With the following parameters it is possible to specify whether neighboring foreground modules are con-
nected or whether there is or may be a gap between them (possible values are ’no’ (no gap) < ’small’ <
’big’):


’module_gap_col_min’: minimum gap in direction of the symbol columns.


’module_gap_col_max’: maximum gap in direction of the symbol columns.
’module_gap_row_min’: minimum gap in direction of the symbol rows.
’module_gap_row_max’: maximum gap in direction of the symbol rows.
• PDF417:
’module_width_min’: minimum module width in the image in pixels.
’module_width_max’: maximum module width in the image in pixels.
’module_aspect_min’: minimum module aspect ratio (module height to module width).
’module_aspect_max’: maximum module aspect ratio (module height to module width).
• Data matrix ECC 200:
’slant_max’: maximum slant of the L-shaped finder (the angle is returned in radians and corresponds to the
distortion that occurs when the symbol is printed or during the image acquisition).
’module_grid’: describes whether the size of the modules may vary (in a specific range) or not. Depending on this parameter, different algorithms are used for calculating the center positions of the modules. If it is set to ’fixed’, an equidistant grid is used. If a variable module size is allowed (’variable’), the grid is aligned only to the alternating side of the finder pattern. With ’any’, both approaches are tried one after the other.
• QR Code:
’position_pattern_min’: Number of position detection patterns that have to be visible for generating a new
symbol candidate (2 or 3).

General model behavior:

• All data code types:


’persistence’: controls whether certain intermediate results of the symbol search with
find_data_code_2d are stored only temporarily or persistently in the model: 0 (temporary),
1 (persistent).
’strict_model’: controls the behavior of find_data_code_2d while detecting symbols that could be
read but that do not fit the model restrictions concerning the size of the symbols: ’yes’ (strict: such
symbols are rejected), ’no’ (not strict: all readable symbols are returned as a result independent of their
size and the size specified in the model).

It is possible to query the values of several or all parameters with a single operator call by passing a tuple con-
taining the names of all desired parameters to GenParamNames. As a result a tuple of the same length with the
corresponding values is returned in GenParamValues.
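A short query sketch in HDevelop syntax; the chosen parameter names are only an example:

* Query several model parameters with a single call
get_data_code_2d_param (DataCodeHandle, [’polarity’,’contrast_min’,
                        ’module_size_min’,’module_size_max’], ModelParams)
* ModelParams contains one value for every requested parameter name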
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that are to be queried for the 2D data code model.
Default Value : "contrast_min"
List of values : GenParamNames ∈ {"strict_model", "persistence", "polarity", "mirrored", "contrast_min",
"model_type", "version_min", "version_max", "symbol_size_min", "symbol_size_max", "symbol_cols_min",
"symbol_cols_max", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size_min",
"module_size_max", "module_width_min", "module_width_max", "module_aspect_min",
"module_aspect_max", "module_gap_col_min", "module_gap_col_max", "module_gap_row_min",
"module_gap_row_max", "slant_max", "module_grid", "position_pattern_min"}
. GenParamValues (output_control) . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
Values of the generic parameters.
Result
The operator get_data_code_2d_param returns the value H_MSG_TRUE if the given parameters are cor-
rect. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_param is reentrant and processed without parallelization.


Possible Predecessors
query_data_code_2d_params, set_data_code_2d_param, find_data_code_2d
Possible Successors
find_data_code_2d, write_data_code_2d_model
Alternatives
write_data_code_2d_model
See also
query_data_code_2d_params, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects, find_data_code_2d
Module
Data Code

get_data_code_2d_results ( Hlong DataCodeHandle,


const char *CandidateHandle, const char *ResultNames,
char *ResultValues )

T_get_data_code_2d_results ( const Htuple DataCodeHandle,


const Htuple CandidateHandle, const Htuple ResultNames,
Htuple *ResultValues )

Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
The operator get_data_code_2d_results allows you to access several alphanumerical results that were calcu-
lated while searching and reading the 2D data code symbols. These results describe the search process in general
or one of the investigated candidates – independently of whether it could be read or not. The results are in most
cases not related to the symbol with the highest resolution but depend on the pyramid level that was investigated
when the reading process was aborted. To access a result, the name of the parameter (ResultNames) and the 2D
data code model (DataCodeHandle) must be passed. In addition, in CandidateHandle a handle of a result
or candidate structure or a string identifying a group of candidates must be passed. These handles are returned by
find_data_code_2d for all successfully decoded symbols and by get_data_code_2d_results for a
group of candidates. If these operators return several handles in a tuple, the individual handles can be accessed by
normal tuple operations.
Most results consist of one value. Several of these results can be queried for a specific candidate in a single call.
The values returned in ResultValues correspond to the appropriate parameter names in the ResultNames
tuple. As an alternative, these results can also be queried for a group of candidates (see below). In this case, only
one parameter can be requested per call, and ResultValues contains one value for every candidate.
Furthermore, there exists another group of results that consist of more than one value (e.g., ’bin_module_data’),
which are returned as a tuple. These parameters must always be queried exclusively: one result for one specific
candidate.
Apart from the candidate-specific results there are a number of results referring to the search process in general.
This is indicated by passing the string ’general’ in CandidateHandle instead of a candidate handle.
Candidate groups
The following candidate group names are predefined and can be passed as CandidateHandle instead of a
single handle:

’general’: This value is used for results that refer to the last find_data_code_2d call in general but not to a
specific candidate.
’all_candidates’: All candidates (including the successfully decoded symbols) that were investigated during the
last call of find_data_code_2d.
’all_results’: All symbols that were successfully decoded during the last call of find_data_code_2d.
’all_undecoded’: All candidates of the last call of find_data_code_2d that were detected as 2D data code symbols, but could not be decoded. For these candidates, the error correction detected too many errors, or decoding the error-corrected data failed because of inconsistent data.
’all_aborted’: All candidates of the last call of find_data_code_2d that could not be identified as valid 2D
data code symbols and for which the processing was aborted.


Supported results
Currently, the access to the following results, which are returned in ResultValues, is supported:
General results that do not depend on specific candidates (all data code types) – ’general’:

’min_search_level’: lowest pyramid level that is searched for symbols.


’max_search_level’: highest pyramid level that is searched for symbols.
’pass_num’: number of passes that were completed.
’result_num’: number of successfully decoded symbols.
’candidate_num’: number of all investigated candidates.
’undecoded_num’: number of candidates that were identified as symbols but could not be read.
’aborted_num’: number of candidates that could not be identified as valid 2D data code symbols.

Results associated with a specific candidate:


Please consider that some of the following results will not return a useful value if the investigation of the candidate
was aborted.
Results that contain exactly one value and hence can be applied to a tuple of candidates:

• All data code types:


’handle’: handle to the candidate. This parameter is used to receive the handles of all candidates of the
specified group.
’pass’: number of the pass in which the candidate was generated and processed.
’status’: indicates whether the decoding was successful or why the processing was aborted.
’search_level’: pyramid level on which the finder pattern was found.
’process_level’: pyramid level on which the candidate was processed and decoded.
’polarity’: polarity of the symbol. This is the assumption about the polarity that was used for searching the
candidate.
’mirrored’: indicates whether a successfully decoded symbol is mirrored or not. For candidates that could
not be read, the parameter returns the mirroring specification of the model.
’symbol_rows’, ’symbol_cols’: ECC 200 and QR Code: detected size of the symbol in modules: number of
rows and columns including the finder pattern; PDF417: detected number of rows and data columns
(each 17 modules wide) within the symbol (excluding the start/stop patterns and the row indicators).
’module_height’, ’module_width’: height and width of the modules in pixels.
’contrast’: estimation of the symbol’s contrast. This value is based on the gradient of the edge between the
finder pattern and the background.
’decoded_string’: result string that is encoded in the symbol – this query is useful only for successfully
decoded strings. It returns the same string as find_data_code_2d and is subjected to the same re-
strictions concerning the maximum length of 1024 characters. If the result string is longer, the parameter
’decoded_data’ can be used to get a tuple with all ASCII characters of the decoded string.
’decoding_error’: decoding error – for successfully decoded symbols this is the number of errors that were
detected and corrected by the error correction. The number of errors corresponds here to the number of
code words that lead to errors when trying to read them. If the error correction failed, a negative error
code is returned.
’symbology_ident’: The Symbology Identifier is used to indicate that the data code contains the FNC1 and/or ECI characters.
FNC1 (Function 1 Character) is used if the data formatting conforms to specific predefined industry standards.
The ECI protocol (Extended Channel Interpretation) is used to change the default interpretation of the encoded data. A 6-digit code number after the ECI character switches the interpretation of the following characters from the default to a specific code page like an international character set. In the output stream the ECI switch is coded as ’\nnnnnn’. Therefore, all backslashes (’\’, ASCII code 92) that occur in the normal output stream have to be doubled.
The ’symbology_ident’ parameter returns only the actual identifier value m (m ∈ [0, 6] for ECC 200 and QR Code, m ∈ [0, 2] for PDF417) according to the specification of Data matrix, QR Codes, and PDF417, but not the identifier prefixes ’]d’, ’]Q’, and ’]L’ for Data matrix, QR Codes, and PDF417, respectively.
If required, the Symbology Identifier, composed of the prefix and the value m, has to be prepended to the decoded string manually (normally only if m > 1); see the sketch after this list. Symbols that contain ECI codes (and hence doubled backslashes) can be recognized by the following identifier values: ECC 200: 4, 5, and 6; QR Code: 2, 4, and 6; PDF417: 1.

• Data matrix ECC200 and QR Code:


’module_gap’: assumption about the module gap that was used for searching the candidate.

• Data Matrix ECC200:


’slant’: slant of the L-shaped finder pattern in radians. This is the difference between the angle of the ’L’ and
the right angle.
’module_grid’: For symbols that could be decoded, this parameter informs about the algorithm that was used
for calculating the module grid: If a variable grid was used it returns ’variable’, and otherwise ’fixed’.
For symbols that could not be decoded, it returns the method that was used during the last decoding trial
or, if the candidate was rejected before the decoding, the corresponding model setting.

• QR Codes:
’version’: version number that corresponds to the size of the symbol (version 1 = 21 × 21, version 2 = 25 ×
25, . . . , version 40 = 177 × 177).
’symbol_size’: detected size of the symbol in modules.
’model_type’: Type of the QR Code Model. In HALCON the older, original specification for QR Codes
Model 1 as well as the newer, enhanced form Model 2 are supported.
’mask_pattern_ref’, ’error_correction_level’: If a candidate is recognized as a QR Code, the first step is to read the format information encoded in the symbol. This includes a code for the pattern that was used for masking the data modules (0 ≤ ’mask_pattern_ref’ ≤ 7) and the level of the error correction (’error_correction_level’ ∈ [’L’, ’M’, ’Q’, ’H’]).

• PDF417:
’module_aspect’: module aspect ratio; this corresponds to the ratio of ’module_height’ to ’module_width’.
’error_correction_level’: If a candidate is recognized as a PDF417, the first step is to read the format information encoded in the symbol. This includes the error correction level that was used during encoding (’error_correction_level’ ∈ [0, 8]).
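The following sketch (HDevelop syntax) shows how such single-value results can be queried for the first decoded symbol; prepending the identifier prefix, as described for ’symbology_ident’ above, is only an illustration:

* Query the decoded string and the Symbology Identifier value of the first result
get_data_code_2d_results (DataCodeHandle, ResultHandles[0], ’decoded_string’,
                          DecodedString)
get_data_code_2d_results (DataCodeHandle, ResultHandles[0], ’symbology_ident’,
                          SymbologyIdent)
* If required, prepend the ECC 200 identifier prefix ’]d’ manually
PrefixedString := ’]d’ + SymbologyIdent + DecodedString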

Results that return a tuple of values and hence can be requested only separately and only for a single candidate:

• All data code types:


’bin_module_data’: binary symbol data that is read from the modules row by row – a value of 0 means that
the module was classified as background and 100 indicates that the module belongs to the foreground.
Values between 0 and 100 can be interpreted as foreground or background.
’raw_coded_data’: data obtained by mapping the binary data to data words according to the particular coding
scheme. Single bits may still be erroneous, and the words that are used for the error correction are still
included.
’corr_coded_data’: data obtained after applying the error correction: erroneous bits are corrected and all
redundant words are removed, but the words are still encoded according to the coding scheme that is
specific for the data code type
’decoded_data’: tuple with the decoded data words (= characters of the decoded data string) as ASCII code
or – for QR Code – as JIS8 and Shift JIS characters. In contrast to the decoded data string, there is no
restriction concerning the maximum length of 1024 characters.
’quality_isoiec15415’: tuple with the assessment of print quality in compliance with the international standard ISO/IEC 15415 (a query sketch follows after this list). The first element always contains the overall print quality of the symbol; the length of the tuple and the denotation of the remaining elements depend on the specific data code type.
According to the standard the grades are whole numbers from 0 to 4, where 0 is the lowest and 4 the
highest grade. It is important to note that, even though the implementation is strictly based on the stan-
dard, the computation of the print quality grades depends on the preceding decoding algorithm. Thus,
different data code readers (of different vendors) can potentially produce slightly different results in the
print quality assessment.


For the 2D data codes ECC200 and QR Code, the print quality is described in a tuple with eight ele-
ments: (overall quality, contrast, modulation, fixed pattern damage, decode, axial nonuniformity, grid
nonuniformity, unused error correction).
The definition of the respective elements is as follows: The overall quality is the minimum of all indi-
vidual grades. The contrast is the range between the minimal and the maximal pixel intensity in the data
code domain, and a strong contrast results in a good grading. The modulation indicates how strong the
amplitudes of the data code modules are. Big amplitudes make the assignment of the modules to black
or white more certain, resulting in a high modulation grade. Note that the computation of the modulation grade is influenced by the specific level of error correction capacity, meaning that the modulation degrades less for codes with higher error correction capacity. The fixed pattern of both ECC200 and QR Code is of high importance for detecting and decoding the codes. Degradation or damage of the fixed pattern, or of the respective quiet zones, is assessed with the fixed pattern damage quality. The decode quality always takes the grade 4, meaning that the code could be decoded. Naturally, codes that cannot be decoded cannot be assessed concerning print quality either. Ideally, data codes have square modules, i.e., the width and height of the modules are the same. Due to a potentially oblique view of the camera onto the data code or a defective fabrication of the data code itself, the width-to-height ratio can be distorted. This deterioration results in a degraded axial nonuniformity. If, apart from an affine distortion, the data code is also subject to perspective or other distortions, this degrades the grid nonuniformity quality. As data codes are redundant codes, errors in the modules or codewords can be corrected. The amount of error correction capacity that is not already used by the present data code symbol is expressed in the unused error correction quality. In a way, this grade reflects the reliability of the decoding process. Note that even codes with an unused error correction grading of 0, which could possibly indicate a false decoding result, can be decoded by the find_data_code_2d operator in a reliable way, because the implemented decoding functionality is more sophisticated and robust compared to the reference decode algorithm proposed by the standard.
For the 2D stacked code PDF417 the print quality is described in a tuple with seven elements: (overall
quality, start/stop pattern, codeword yield, unused error correction, modulation, decodability, defects).
The definition of the respective elements is as follows: The overall quality is the minimum of all individ-
ual grades. As the PDF417 data code is a stacked code, which can be read by line scan devices as well,
print quality assessment is basically based on techniques for linear bar codes: a set of scan reflectance
profiles is generated across the symbol followed by the evaluation of the respective print qualities within
each scan, which are finally subsumed as overall print qualities. For more details the user is referred
to the standard for linear symbols ISO/IEC 14516. In start/stop pattern the start and stop patterns are
assessed concerning the quality of the reflectance profile and the correctness of the bar and space se-
quence. The grade codeword yield counts and evaluates the relative number of correct decoded words
acquired by the set of scan profiles. For the grade unused error correction the relative number of false
decoded words within the error correction blocks are counted. As for 2D data codes, the modulation
grade indicates how strong the amplitudes, i.e. the extremal intensities, of the bars and spaces are. The
grade decodability measures the deviation of the actual length of bars and spaces with respect to their
reference length. And finally, the grade defects refers to a measurement of how perfect the reflectance
profiles of bars and spaces are.

• Data Matrix ECC200 and QR Code:


’structured_append’: if the symbol is part of a group of symbols (“Structured Append”), this parameter
contains (1) the index of the symbol in the group, (2) the number of symbols that belong to the group,
and (3) a number that serves as a group identifier.

• PDF417:
’macro_exist’: symbols that are part of a group of symbols are called ’Macro PDF417’ symbols. These symbols contain additional information within a control block. For macro symbols, ’macro_exist’ returns the value 1, while for conventional symbols 0 is returned.
’macro_segment_index’: returns the index of the symbol in the group. For macro symbols this information
is obligatory.
’macro_file_id’: returns the group identifier as a string. For macro symbols this information is obligatory.
’macro_segment_count’: returns the number of symbols that belong to the group. For macro symbols this
information is optional.
’macro_time_stamp’: returns the time stamp on the source file expressed as the elapsed time in seconds since
1970:01:01:00:00:00 GMT as a string. For macro symbols this information is optional.


’macro_checksum’: returns the CRC checksum computed over the entire source file using the CCITT-16
polynomial. For macro symbols this information is optional.
’macro_last_symbol’: returns 1 if the symbol is the last one within the group of symbols. Otherwise 0 is
returned. For macro symbols this information is optional.
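A minimal query sketch in HDevelop syntax for the print quality of the first decoded symbol; like all tuple-valued results, it must be queried exclusively and for a single handle:

* Query the ISO/IEC 15415 print quality grades of the first decoded symbol
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          ’quality_isoiec15415’, Quality)
* Quality[0] holds the overall grade (0..4); the remaining elements hold the
* individual grades in the order described above
OverallGrade := Quality[0]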

Status message
The status parameter that can be queried for all candidates reveals why and where in the evaluation phase a candi-
date was discarded. The following list shows the most important status messages in the order of their generation
during the evaluation phase:

• Data matrix ECC 200:


’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted adjusting: ...’ – It is not possible to determine the exact position of the finder pattern in the process-
ing image.
’aborted finder pattern: ...’ – It is not possible to determine the width of one of the two legs of the L-shaped
finder pattern.
’aborted leg widths: widths of the finder pattern legs differ too much’ – The widths of the two legs of the L-
shaped finder pattern differ too much.
’aborted alternating side: ...’ – For one dimension of the candidate, two opposite borders were found during
the symbol search phase. However, it is not possible to determine which is the alternating and which the
solid side of the finder pattern.
’aborted border search: ...’ – For one dimension of the candidate, only the border that belongs to the solid side of the finder pattern was found during the symbol search phase. Searching the opposite (the alternating) side failed.
’aborted symbol: invalid size’ – The number of rows and columns of the symbol that was deduced from the alternating pattern does not yield a valid ECC 200 code.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is a valid ECC
200 size, it is not inside the range predefined by the model.
’aborted symbol: rectangular symbol does not fit strict mirror definition of model’ – The symbol was identified as a rectangular ECC 200 code. In conjunction with the mirroring parameter of the model, however, the symbol’s rows and columns are swapped such that no valid ECC 200 code is obtained. This test is of course not possible for square symbols. There, a wrong mirroring specification will affect the reading of the symbol data and, in general, lead to the following error:
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: special decoding reader requested’ – The decoded data contains a message for program-
ming the data code reader. This feature is not supported.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.

• QR Code:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted adjusting: finder patterns’ – It is not possible to determine the exact position of the finder pattern
in the processing image.
’aborted symbol: different number of rows and columns’ – It is not possible to determine for both dimen-
sions a consistent symbol size by the size and the position of the detected finder pattern. When reading
Model 2 symbols, this error may occur only with small symbols (< version 7 or 45 × 45 modules). For
bigger symbols the size is coded within the symbol in the version information region. The estimated size
is used only as a hint for finding the version information region.
’aborted symbol: invalid size’ – The size determined by the size and the position of the detected finder pat-
tern is too small or (only Model 1) too big.
’decoding of version information failed’ – While processing a Model 2 symbol, the symbol version as deter-
mined by the finder pattern is at least 7 (≥ 45 × 45 modules). However, reading the version from the
appropriate region in the symbol failed.


’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’decoding of format information failed’ – Reading the format information (mask pattern and error correction
level) from the appropriate region in the symbol failed.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.

• PDF417:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: special decoding reader requested’ – The decoded data contains a message for program-
ming the data code reader. This feature is not supported.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.

While processing a candidate, it is possible that internally several iterations for reading the symbol are performed.
If all attempts fail, normally the last abortion state is stored in the candidate structure. E.g., if the QR Code
model enables symbols with Model 1 and Model 2 specification, find_data_code_2d tries first to inter-
pret the symbol as Model 2 type. If this fails, Model 1 interpretation is performed. If this also fails, the sta-
tus variable is set to the latest failure state of the Model 1 interpretation. In order to get the error state of
the Model 2 branch, the ’model_type’ parameter of the data code model must be restricted accordingly (with
set_data_code_2d_param).
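For example, the abort status of the Model 2 interpretation could be obtained by restricting the model accordingly and repeating the search, as sketched below in HDevelop syntax:

* Restrict the QR Code model to Model 2 symbols only
set_data_code_2d_param (DataCodeHandle, ’model_type’, 2)
* Repeat the search; the status of the candidates now refers to the
* Model 2 interpretation only
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
get_data_code_2d_results (DataCodeHandle, ’all_undecoded’, ’status’, AllStatus)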
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) const char * / Hlong
Handle of the 2D data code candidate or name of a group of candidates for which the data is required.
Default Value : "all_candidates"
Suggested values : CandidateHandle ∈ {0, 1, 2, "general", "all_candidates", "all_results",
"all_undecoded", "all_aborted"}
. ResultNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the results of the 2D data code to return.
Default Value : "status"
Suggested values : ResultNames ∈ {"min_search_level", "max_search_level", "pass_num", "result_num",
"candidate_num", "undecoded_num", "aborted_num", "handle", "pass", "status", "search_level",
"process_level", "polarity", "module_gap", "mirrored", "model_type", "symbol_rows", "symbol_cols",
"symbol_size", "version", "module_height", "module_width", "module_aspect", "slant", "contrast",
"module_grid", "decoded_string", "decoding_error", "symbology_ident", "mask_pattern_ref",
"error_correction_level", "bin_module_data", "raw_coded_data", "corr_coded_data", "decoded_data",
"quality_isoiec15415", "structured_append", "macro_exist", "macro_segment_index", "macro_file_id",
"macro_segment_count", "macro_time_stamp", "macro_checksum", "macro_last_symbol"}
. ResultValues (output_control) . . . . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
List with the results.
Example (Syntax: HDevelop)

* Example demonstrating how to access the results of the data code search.

* Create a model for reading Data matrix ECC 200 codes


create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)


* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)

* Get the number of passes


get_data_code_2d_results (DataCodeHandle, ’general’, ’pass_num’, Passes)

* Get a tuple with the status of all candidates


get_data_code_2d_results (DataCodeHandle, ’all_candidates’, ’status’,
AllStatus)
* Get the handles of all candidates that were detected as a symbol but
* could not be read
get_data_code_2d_results (DataCodeHandle, ’all_undecoded’, ’handle’,
HandlesUndecoded)

* For every undecoded symbol, get the contour, the symbol size, and
* the binary module data
dev_set_color (’red’)
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
’candidate_xld’)
* Get the symbol size
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
[’symbol_rows’,’symbol_cols’], SymbolSize)
* Get the binary module data (has to be queried exclusively)
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
’bin_module_data’, BinModuleData)
* Stop for inspecting the data
stop ()
endfor

* Clear the model


clear_data_code_2d_model (DataCodeHandle)

Result
The operator get_data_code_2d_results returns the value H_MSG_TRUE if the given parameters are
correct and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_results is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
Possible Successors
get_data_code_2d_objects
See also
query_data_code_2d_params, get_data_code_2d_objects, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code


T_query_data_code_2d_params ( const Htuple DataCodeHandle,


const Htuple QueryName, Htuple *GenParamNames )

Get for a given 2D data code model the names of the generic parameters or objects that can be used in the other
2D data code operators.
The operator query_data_code_2d_params returns the names of the generic parameters that are sup-
ported by the 2D data code operators set_data_code_2d_param, get_data_code_2d_param,
find_data_code_2d, get_data_code_2d_results, and get_data_code_2d_objects. The
parameter QueryName is used to select the desired parameter group:

’get_model_params’: get_data_code_2d_param – parameters for querying the 2D data code model.


’set_model_params’: set_data_code_2d_param – parameters for adjusting the 2D data code model.
’find_params’: find_data_code_2d – parameters used while searching and reading the 2D data code sym-
bols.
’get_result_params’: get_data_code_2d_results – parameters for querying the alphanumerical results of
the symbol search.
’get_result_objects’: get_data_code_2d_objects – parameters for accessing the iconic objects of the sym-
bol search.

The returned parameter list depends only on the type of the data code and not on the current state of the model or
its results.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Htuple . Hlong
Handle of the 2D data code model.
. QueryName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; Htuple . const char *
Name of the parameter group.
Default Value : "get_result_params"
List of values : QueryName ∈ {"get_model_params", "set_model_params", "find_params",
"get_result_params", "get_result_objects"}
. GenParamNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; Htuple . char *
List containing the names of the supported generic parameters.
Example (Syntax: HDevelop)

* This example demonstrates how the names of all available model parameters
* can be queried. This is used to request first the settings of the
* untrained and then the settings of the trained model.

* Create a model for reading Data matrix ECC 200 codes


create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
* Query all the names of the generic parameters that can be passed to the
* operator get_data_code_2d_param to request the model
query_data_code_2d_params (DataCodeHandle, ’get_model_params’, GenParamNames)
* Request the current settings of the (untrained) model
get_data_code_2d_param(DataCodeHandle, GenParamNames, ModelParams)

* Read a training image


read_image (Image, ’datacode/ecc200/ecc200_cpu_008’)
* train the model with the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
ResultHandles, DecodedDataStrings)
* Request the current settings of the (now trained) model
get_data_code_2d_param(DataCodeHandle, GenParamNames, TrainedModelParams)
* Create a tuple that demonstrates the changes
ModelAdaption := GenParamNames + ’: ’ + ModelParams + ’ -> ’ +
TrainedModelParams


* Clear the model


clear_data_code_2d_model (DataCodeHandle)

Result
The operator query_data_code_2d_params returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
query_data_code_2d_params is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model
Possible Successors
get_data_code_2d_param, get_data_code_2d_results, get_data_code_2d_objects
Module
Data Code

read_data_code_2d_model ( const char *FileName,


Hlong *DataCodeHandle )

T_read_data_code_2d_model ( const Htuple FileName,


Htuple *DataCodeHandle )

Read a 2D data code model from a file and create a new model.
The operator read_data_code_2d_model reads the 2D data code model file FileName and creates a new
model that is an identical copy of the saved model. The parameter DataCodeHandle returns the handle of the
new model. The model file FileName must be created by the operator write_data_code_2d_model.
Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *


Name of the 2D data code model file.
Default Value : "data_code_model.dcm"
. DataCodeHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Hlong *
Handle of the created 2D data code model.
Example (Syntax: HDevelop)

* This example demonstrates how a model that was saved in an earlier


* session can be used again by reading the model file

* Create a model by reading a data code model file


read_data_code_2d_model (’ecc200_trained_model.dcm’, DataCodeHandle)
* Read a symbol image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)

Result
The operator read_data_code_2d_model returns the value H_MSG_TRUE if the named 2D data code file
was found and correctly read. Otherwise, an exception will be raised.
Parallelization Information
read_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
find_data_code_2d


Alternatives
create_data_code_2d_model
See also
write_data_code_2d_model, clear_data_code_2d_model,
clear_all_data_code_2d_models
Module
Data Code

set_data_code_2d_param ( Hlong DataCodeHandle,


const char *GenParamNames, const char *GenParamValues )

T_set_data_code_2d_param ( const Htuple DataCodeHandle,


const Htuple GenParamNames, const Htuple GenParamValues )

Set selected parameters of the 2D data code model.


The operator set_data_code_2d_param is used to set or change the different parameters of a 2D data code
model in order to adapt the model to a particular symbol appearance. All parameters can also be set while creating
a 2D data code model with create_data_code_2d_model. The current configuration of the data code
model can be queried with get_data_code_2d_param. A list with the names of all parameters that can be
set for the given 2D data code type is returned by query_data_code_2d_params.
The following overview lists the different generic parameters with the respective value ranges and default values
in standard mode (’standard_recognition’) and, if differing, in enhanced mode (’enhanced_recognition’):
Basic default settings:

• All data code types:


’default_parameters’: reset all model parameters to one of the two basic default settings standard or
enhanced (see the following summary and create_data_code_2d_model). In addition to
the parameter values, the training state of the model is reset. Values: ’standard_recognition’, ’en-
hanced_recognition’
Default: ’standard_recognition’
Attention: If this parameter is set together with a list of other parameters, this parameter must be at the
first position.

Size and shape of the symbol:

• Data matrix ECC 200 (including the finder pattern):


’symbol_cols_min’: minimum number of module columns in the symbol.
Value range: [10 . . . 144] – even
Default: 10
’symbol_cols_max’: maximum number of module columns in the symbol.
Value range: [10 . . . 144] – even
Default: 144
’symbol_rows_min’: minimum number of module rows in the symbol.
Value range: [8 . . . 144] – even
Default: 8
’symbol_rows_max’: maximum number of module rows in the symbol.
Value range: [8 . . . 144] – even
Default: 144
’symbol_shape’: possible restrictions on the symbol shape (rectangle and/or square). Attention: when setting the symbol shape, all previously made restrictions concerning the symbol size are lost. Since HALCON 7.1.1, the same search algorithm is used for both shapes. Thus, the parameter has no relevance for the symbol search anymore.
Values: ’rectangle’, ’square’, ’any’
Default: ’any’
’symbol_cols’: set ’symbol_cols_min’ and ’symbol_cols_max’ to the same value.


’symbol_rows’: set ’symbol_rows_min’ and ’symbol_rows_max’ to the same value.


’symbol_size_min’: set ’symbol_cols_min’ and ’symbol_rows_min’ to the same value.
’symbol_size_max’: set ’symbol_cols_max’ and ’symbol_rows_max’ to the same value.
’symbol_size’: set ’symbol_cols_min’, ’symbol_cols_max’, ’symbol_rows_min’ and ’symbol_rows_max’ to
the same value and ’symbol_shape’ to ’square’.
• QR-Code (including the finder pattern):
’model_type’: type of the QR Code model. The old QR Code Model 1 and the newer Model 2 are supported.
Values: 1, 2, ’any’
Default: ’any’
’version_min’: minimum symbol version. The symbol version is directly linked to the symbol size. Symbols of version 1 are 21×21 modules in size, version 2 = 25×25 modules, etc., up to version 40 = 177×177 modules. The maximum size of Model 1 symbols is 73×73 = version 14.
Value range: [1 . . . 40] (Model 1: [1 . . . 14])
Default: 1
’version_max’: maximum symbol version.
Value range: [1 . . . 40] (Model 1: [1 . . . 14])
Default: 40
’version’: set ’version_min’ and ’version_max’ to the same value.
’symbol_size_min’: minimum size of the symbol in modules. This parameter can be used as an alternative
to ’version_min’.
Value range: [21 . . . 177] (Model 1: [21 . . . 73])
Default: 21
’symbol_size_max’: maximum size of the symbol in modules. This parameter can be used as an alternative
to ’version_max’:
Value range: [21 . . . 177] (Model 1: [21 . . . 73])
Default: 177
’symbol_size’: set ’symbol_size_min’ and ’symbol_size_max’ to the same value.
• PDF417:
’symbol_cols_min’: minimum number of data columns in the symbol in codewords, i.e., excluding the codewords of the start/stop pattern and of the two row indicators.
Value range: [1 . . . 30]
Default: 1
’symbol_cols_max’: maximum number of data columns in the symbol in codewords, i.e., excluding the codewords of the start/stop pattern and of the two row indicators.
Value range: [1 . . . 30]
Default: 20 (enhanced: 30)
’symbol_rows_min’: minimum number of module rows in the symbol.
Value range: [3 . . . 90]
Default: 5 (enhanced: 3)
’symbol_rows_max’: maximum number of module rows in the symbol.
Value range: [3 . . . 90]
Default: 45 (enhanced: 90)
’symbol_cols’: set ’symbol_cols_min’ and ’symbol_cols_max’ to the same value.
’symbol_rows’: set ’symbol_rows_min’ and ’symbol_rows_max’ to the same value.

Appearance of the modules in the image:

• All data code types:


’polarity’: describes the polarity of the symbol in the image, i.e., the parameter determines if the symbol
appears light on a dark background or dark on a light background.
Values: ’dark_on_light’, ’light_on_dark’, ’any’
Default: ’dark_on_light’ (enhanced: ’any’)
’mirrored’: describes whether the symbol is or may be mirrored (which is equivalent to swapping rows and
columns of the symbol).
Values: ’no’, ’yes’, ’any’
Default: ’any’


’contrast_min’: minimum contrast between the foreground and the background of the symbol (this measure
corresponds with the minimum gradient between the symbol’s foreground and the background).
Values: [1 . . . 100]
Default: 30 (enhanced: 10)
• Data matrix ECC 200 and QR Code:
’module_size_min’: minimum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 6 (enhanced: 2)
’module_size_max’: maximum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 20 (enhanced: 100)
’module_size’: set ’module_size_min’ and ’module_size_max’ to the same value.
It is possible to specify whether neighboring foreground modules are connected or whether there is or may be a gap between them. If the foreground modules are connected and fill the module space completely, the gap parameter can be set to ’no’. The parameter is set to ’small’ if there is a very small gap between two modules; it can be set to ’big’ if the gap is slightly bigger. The last two settings may also be useful if the foreground modules – although being connected – appear thinner than the space they should fill (e.g., as a result of blooming caused by a bright illumination). If the foreground modules appear only as very small dots (smaller than 50% of the module size), an appropriate preprocessing of the image for detecting or enlarging the modules will generally be necessary (e.g., by gray_erosion_shape or gray_dilation_shape):
’module_gap_col_min’: minimum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_col_max’: maximum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_row_min’: minimum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_row_max’: maximum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_col’: set ’module_gap_col_min’ and ’module_gap_col_max’ to the same value.
’module_gap_row’: set ’module_gap_row_min’ and ’module_gap_row_max’ to the same value.
’module_gap_min’: set ’module_gap_col_min’ and ’module_gap_row_min’ to the same value.
’module_gap_max’: set ’module_gap_col_max’ and ’module_gap_row_max’ to the same value.
’module_gap’: set ’module_gap_col_min’, ’module_gap_col_max’, ’module_gap_row_min’, and ’mod-
ule_gap_row_max’ to the same value.
• PDF417:
’module_width_min’: minimum module width in the image in pixels.
Values: [2 . . . 100]
Default: 3 (enhanced: 2)
’module_width_max’: maximum module width in the image in pixels.
Values: [2 . . . 100]
Default: 15 (enhanced: 100)
’module_width’: set ’module_width_min’ and ’module_width_max’ to the same value.
’module_aspect_min’: minimum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 1.0
’module_aspect_max’: maximum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 4.0 (enhanced: 10.0)
’module_aspect’: set ’module_aspect_min’ and ’module_aspect_max’ to the same value.
• Data matrix ECC 200:


’slant_max’: maximum deviation of the angle of the L-shaped finder pattern from the (ideal) right angle (the
angle is specified in radians and corresponds to the distortion that occurs when the symbol is printed or
during the image acquisition).
Value range: [0.0 . . . 0.5235]
Default: 0.1745 = 10° (enhanced: 0.5235 = 30°)
’module_grid’: describes whether the size of the modules may vary (in a specific range) or not. Dependent
on this parameter different algorithms are used for calculating the module’s center positions. If it is set to
’fixed’, an equidistant grid is used. Allowing a variable module size (’variable’), the grid is aligned only
to the alternating side of the finder pattern. With ’any’ both approaches are tested one after the other.
Values: ’fixed’, ’variable’, ’any’
Default: ’fixed’ (enhanced: ’any’)
• QR Code:
’position_pattern_min’: Number of position detection patterns that have to be visible for generating a new
symbol candidate.
Value range: [2, 3]
Default: 3 (enhanced: 2)

General model behavior:

• All data code types:


’persistence’: controls whether certain intermediate results of the symbol search with
find_data_code_2d are stored temporarily or persistently in the model. The memory re-
quirements of find_data_code_2d are significantly smaller if the data is stored temporarily
(default). On the other hand, by using the persistent storage it is possible to access some of the data for
debugging reasons after searching for symbols, e.g., to investigate why a symbol could not be read.
Values: 0 (temporary), 1 (persistent)
Default: 0
’strict_model’: controls the behavior of find_data_code_2d while detecting symbols that could be
read but that do not fit the model restrictions on the size of the symbols. They can be rejected (strict
model, set to ’yes’) or returned as a result independent of their size and the size specified in the model
(lax model, set to ’no’).
Values: ’yes’ (strict), ’no’ (not strict)
Default: ’yes’

When setting the model parameters, particular attention should be paid to the following issues:

• Symbols whose size does not comply with the size restrictions made in the model (with the generic parameters
’symbol_rows*’, ’symbol_cols*’, ’symbol_size*’, or ’version*’) will not be read if ’strict_model’ is set to
’yes’, which is the default. This behavior is useful if symbols of a specific size have to be detected while
other symbols should be ignored. On the other hand, neglecting this parameter can lead to problems, e.g.,
if one symbol of an image sequence is used to adjust the model (including the symbol size), but later in the
application the symbol size varies, which is quite common in practice.
• The run-time of find_data_code_2d depends mostly on the following model parameters, namely in cases where the requested number of symbols cannot be found in the image: ’polarity’, ’module_size_min’ (ECC 200 and QR Code), ’module_width_min’ together with ’module_aspect_min’ (PDF417), and, if the minimum module size is very small, also the parameters ’module_gap_*’ (ECC 200 and QR Code) and, for QR Code, ’position_pattern_min’.

Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that shall be adjusted for the 2D data code.
Default Value : "polarity"
List of values : GenParamNames ∈ {"default_parameters", "strict_model", "persistence", "polarity",
"mirrored", "contrast_min", "model_type", "version", "version_min", "version_max", "symbol_size",
"symbol_size_min", "symbol_size_max", "symbol_cols", "symbol_cols_min", "symbol_cols_max",
"symbol_rows", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size",


"module_size_min", "module_size_max", "module_width_min", "module_width_max",


"module_aspect_min", "module_aspect_max", "module_gap", "module_gap_min", "module_gap_max",
"module_gap_col", "module_gap_col_min", "module_gap_col_max", "module_gap_row",
"module_gap_row_min", "module_gap_row_max", "slant_max", "module_grid", "position_pattern_min"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) const char * / Hlong / double
Values of the generic parameters that are adjusted for the 2D data code.
Default Value : "light_on_dark"
Suggested values : GenParamValues ∈ {"standard_recognition", "enhanced_recognition", "yes", "no",
"any", "dark_on_light", "light_on_dark", "square", "rectangle", "small", "big", "fixed", "variable", 0, 1, 2, 3, 4,
5, 6, 7, 8, 10, 30, 50, 70, 90, 12, 14, 16, 18, 20, 22, 24, 26, 32, 36, 40, 44, 48, 52, 64, 72, 80, 88, 96, 104, 120,
132, 144}
Example (Syntax: HDevelop)

* This example shows how a model can be adapted to a specific symbol if
* the symbol parameters are known

* Create a model for reading Data matrix ECC 200 codes
create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
* Restrict the model by setting the module size
set_data_code_2d_param (DataCodeHandle,
[’module_size_min’,’module_size_max’], [4,7])
* Change the polarity setting of the model from ’dark_on_light’ to
* ’light_on_dark’ and, at the same time, specify a new minimum contrast
set_data_code_2d_param (DataCodeHandle, [’polarity’,’contrast_min’],
[’light_on_dark’,10])

* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)

Result
The operator set_data_code_2d_param returns the value H_MSG_TRUE if the given parameters are cor-
rect. Otherwise, an exception will be raised.
Parallelization Information
set_data_code_2d_param is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model
Possible Successors
get_data_code_2d_param, find_data_code_2d, write_data_code_2d_model
Alternatives
read_data_code_2d_model
See also
query_data_code_2d_params, get_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code


write_data_code_2d_model ( Hlong DataCodeHandle, const char *FileName )

T_write_data_code_2d_model ( const Htuple DataCodeHandle, const Htuple FileName )

Writes a 2D data code model into a file.


The operator write_data_code_2d_model writes a 2D data code model, which was created by
create_data_code_2d_model, into a file with the name FileName. This facilitates creating an identi-
cal copy of the saved model in a later session with the operator read_data_code_2d_model. The handle of
the model to write is passed in DataCodeHandle.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Hlong
Handle of the 2D data code model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the 2D data code model file.
Default Value : "data_code_model.dcm"
Example (Syntax: HDevelop)

* This example demonstrates how a trained model can be saved for
* a future session

* Create a model for reading Data matrix ECC 200 codes
create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
* Read a training image
read_image (Image, ’datacode/ecc200/ecc200_cpu_008’)
* Train the model with the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
ResultHandles, DecodedDataStrings)
* Write the model into a file
write_data_code_2d_model (DataCodeHandle, ’ecc200_trained_model.dcm’)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)

Result
The operator write_data_code_2d_model returns the value H_MSG_TRUE if the passed handle is valid
and if the model can be written into the named file. Otherwise, an exception will be raised.
Parallelization Information
write_data_code_2d_model is reentrant and processed without parallelization.
Possible Predecessors
set_data_code_2d_param, find_data_code_2d
Alternatives
get_data_code_2d_param
See also
create_data_code_2d_model, set_data_code_2d_param, find_data_code_2d
Module
Data Code

15.7 Fourier-Descriptor

T_abs_invar_fourier_coeff ( const Htuple RealInvar, const Htuple ImaginaryInvar,
                            const Htuple CoefP, const Htuple CoefQ, const Htuple AZInvar,
                            Htuple *RealAbsInvar, Htuple *ImaginaryAbsInvar )

Normalizing of the Fourier coefficients with respect to the displacement of the starting point.


The operator abs_invar_fourier_coeff normalizes the Fourier coefficients with regard to displacements of the starting point, which occur when an object is rotated. The contour tracer get_region_contour starts recording the contour in the upper left-hand corner of the region and follows the contour clockwise. If the object is rotated, the starting point of the contour point chain is different, which leads to a phase shift in the frequency space. The following two kinds of normalizing are available:

abs_amount: The phase information will be eliminated; the normalizing does not retain the structure, i.e. if the
AZ-invariants are backtransformed, no similarity with the pattern can be recognized anymore.
az_invar1: AZ-invariants of the 1st order execute the normalizing with respect to displacing the starting point so
that the structure is retained; they are however more prone to local and global disturbances, in particular to
projective distortions.

Parameter

. RealInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double


Real parts of the normalized Fourier coefficients.
. ImaginaryInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts of the normalized Fourier coefficients.
. CoefP (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Normalizing coefficients p.
Default Value : 1
Suggested values : CoefP ∈ {1, 2}
Restriction : CoefP ≥ 1
. CoefQ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Normalizing coefficients q.
Default Value : 1
Suggested values : CoefQ ∈ {1, 2}
Restriction : (CoefQ ≥ 1) ∧ (CoefQ ≠ CoefP)
. AZInvar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Order of the AZ-invariants.
Default Value : "abs_amount"
List of values : AZInvar ∈ {"abs_amount", "az_invar1"}
. RealAbsInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Real parts of the normalized Fourier coefficients.
. ImaginaryAbsInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Imaginary parts of the normalized Fourier coefficients.
Example (Syntax: C++)

get_region_contour(single,&row,&col);
length_of_contour = length_tuple(row);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);

Parallelization Information
abs_invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Possible Successors
fourier_1dim_inv, match_fourier_coeff
Module
Foundation


T_fourier_1dim ( const Htuple Rows, const Htuple Columns, const Htuple ParContour,
                 const Htuple MaxCoef, Htuple *RealCoef, Htuple *ImaginaryCoef )

Calculate the Fourier coefficients of a parameterized contour.


The operator fourier_1dim calculates the Fourier coefficients of a parameterized contour by using a valid parameter scale. This parameter scale may, for instance, be created with the help of the procedure prep_contour_fourier. The function calculates the Fourier coefficients of closed contours, which are treated as complex-valued curves; therefore, the Fourier transform for periodic functions is used. The parameter MaxCoef determines the maximal order of the Fourier coefficients, i.e., if n coefficients are indicated, the procedure will calculate the coefficients ranging from −n to n. The contour is approximated without loss if n equals the number of contour points; n = 100 approximates the contour so well that an error can hardly be noticed, and n ∈ [40, 50] is sufficient for most applications. If the parameter MaxCoef is set to 0, all coefficients will be determined.
Parameter

. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong


Row coordinates of the contour.
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong
Column coordinates of the contour.
. ParContour (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Parameter scale.
. MaxCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Desired number of Fourier coefficients or all of them (0).
Default Value : 50
Suggested values : MaxCoef ∈ {0, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200, 400}
Restriction : MaxCoef ≥ 0
. RealCoef (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Real parts of the Fourier coefficients.
. ImaginaryCoef (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Imaginary parts of the Fourier coefficients.
Example (Syntax: C++)

get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);

Parallelization Information
fourier_1dim is reentrant and processed without parallelization.
Possible Predecessors
prep_contour_fourier
Possible Successors
invar_fourier_coeff, disp_polygon
Module
Foundation

T_fourier_1dim_inv ( const Htuple RealCoef, const Htuple ImaginaryCoef,
                     const Htuple MaxCoef, Htuple *Rows, Htuple *Columns )

One dimensional Fourier synthesis (inverse Fourier transform).


Backtransformation of Fourier coefficients or Fourier descriptors, respectively. The number of values to be backtransformed should not exceed the length of the transformed contour.
Parameter
. RealCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Real parts.
. ImaginaryCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts.
. MaxCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Input of the steps for the backtransformation.
Default Value : 100
Suggested values : MaxCoef ∈ {5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200, 400}
Restriction : MaxCoef ≥ 1
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . double *
Row coordinates.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . double *
Column coordinates.
Example (Syntax: C++)

get_region_contour(single,&row,&col);
length_of_contour = row.Num();
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);

Parallelization Information
fourier_1dim_inv is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff, fourier_1dim
Possible Successors
disp_polygon
Module
Foundation

T_invar_fourier_coeff ( const Htuple RealCoef, const Htuple ImaginaryCoef,
                        const Htuple NormPar, const Htuple InvarType,
                        Htuple *RealInvar, Htuple *ImaginaryInvar )

Normalize the Fourier coefficients.


Elimination of affine information from the Fourier coefficients and determination of affine invariants. The Fourier coefficients are normalized suitably so that all affinely related contours are projected onto one and the same contour. The following levels of affine mappings are available:
1. Translations (InvarType = ’transl_invar’)
2. + Rotations (InvarType = ’congr_invar’)
3. + Scalings (InvarType = ’simil_invar’)
4. + Slanting (InvarType = ’affine_invar’)
The control parameter InvarType indicates up to which level the affine representation shall be normalized.
Please note that indicating a certain level implies that the normalizing is executed with regard to all levels below.
For most applications a subsequent normalizing of the starting point is recommended!


Parameter

. RealCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double


Real parts of the Fourier coefficients.
. ImaginaryCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts of the Fourier coefficients.
. NormPar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Input of the normalizing coefficients.
Default Value : 1
Suggested values : NormPar ∈ {1, 2}
Restriction : NormPar ≥ 1
. InvarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Indicates the level of the affine mappings.
Default Value : "affine_invar"
List of values : InvarType ∈ {"affine_invar", "simil_invar", "congr_invar", "transl_invar"}
. RealInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Real parts of the normalized Fourier coefficients.
. ImaginaryInvar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Imaginary parts of the normalized Fourier coefficients.
Example (Syntax: C++)

prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);

Parallelization Information
invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
fourier_1dim
Possible Successors
invar_fourier_coeff
Module
Foundation

T_match_fourier_coeff ( const Htuple RealCoef1, const Htuple ImaginaryCoef1,
                        const Htuple RealCoef2, const Htuple ImaginaryCoef2,
                        const Htuple MaxCoef, const Htuple Damping, Htuple *Distance )

Similarity of two contours.


The operator match_fourier_coeff calculates the Euclidean distance between two contours which are available as Fourier coefficients. To avoid the higher frequencies becoming too dominant, one of the following attenuations can be applied:

none: No attenuation.
1/index: The absolute values of the Fourier coefficients are divided by their index.
1/(index*index): The absolute values of the Fourier coefficients are divided by their squared index.

The higher the result value, the greater the differences between the pattern and the test contour. If the number of
coefficients is not the same, only the first n coefficients will be compared. The parameter MaxCoef indicates the
number of the coefficients to be compared. If MaxCoef is set to zero, all coefficients will be used.


Parameter

. RealCoef1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double


Real parts of the pattern Fourier coefficients.
. ImaginaryCoef1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts of the pattern Fourier coefficients.
. RealCoef2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Real parts of the Fourier coefficients to be compared.
. ImaginaryCoef2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts of the Fourier coefficients to be compared.
. MaxCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Total number of Fourier coefficients.
Default Value : 50
Suggested values : MaxCoef ∈ {0, 5, 10, 15, 20, 30, 40, 50, 70, 100, 200, 400}
Restriction : MaxCoef ≥ 0
. Damping (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Kind of attenuation.
Default Value : "1/index"
Suggested values : Damping ∈ {"none", "1/index", "1/(index*index)"}
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Similarity of the contours.
Example (Syntax: C++)

prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,
"az_invar1",&absrow,&abscol);
match_fourier_coeff(contur1_row, contur1_col,
contur2_row, contur2_col, 50,
"1/index", &Distance_wert);

Parallelization Information
match_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Module
Foundation

T_move_contour_orig ( const Htuple Rows, const Htuple Columns,
                      Htuple *RowsMoved, Htuple *ColumnsMoved )

Transformation of the origin into the centre of gravity.


The operator move_contour_orig relocates the input contour so that the origin lies in the centre of gravity.
Parameter

. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong


Row coordinates of the contour.
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong
Column coordinates of the contour.
. RowsMoved (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong *
Row coordinates of the displaced contour.
. ColumnsMoved (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong *
Column coordinates of the displaced contour.


Parallelization Information
move_contour_orig is processed completely exclusively without parallelization.
Possible Predecessors
get_region_contour
Possible Successors
prep_contour_fourier
Module
Foundation

T_prep_contour_fourier ( const Htuple Rows, const Htuple Columns,
                         const Htuple TransMode, Htuple *ParContour )

Parameterize the passed contour.


The operator prep_contour_fourier parameterizes the passed contour in order to prepare it for the one-dimensional Fourier transformation. The contour must be available in closed form. Three parameter functions are available for the control parameter TransMode:

arc: Parameterization by the radian.


signed_area: Parameterization by the signed area.
unsigned_area: Parameterization by the absolute area.

Please note that, in contrast to the signed and unsigned area, the radian is not transformed linearly under affine mappings.
Parameter

. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong


Row indices of the contour.
. Columns (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong
Column indices of the contour.
. TransMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Kind of parameterization.
Default Value : "signed_area"
Suggested values : TransMode ∈ {"arc", "unsigned_area", "signed_area"}
. ParContour (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Parameterized contour.
Example (Syntax: C++)

get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);

Parallelization Information
prep_contour_fourier is reentrant and processed without parallelization.
Possible Predecessors
move_contour_orig
Possible Successors
fourier_1dim
Module
Foundation


15.8 Function
T_abs_funct_1d ( const Htuple Function, Htuple *FunctionAbsolute )

Absolute value of the y values.


abs_funct_1d calculates the absolute values of all y values of Function.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. FunctionAbsolute (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Function with the absolute values of the y values.
Parallelization Information
abs_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_compose_funct_1d ( const Htuple Function1, const Htuple Function2,
                     const Htuple Border, Htuple *ComposedFunction )

Compose two functions.


compose_funct_1d composes two functions, i.e., calculates

ComposedFunction(x) = Function2(Function1(x)) .
ComposedFunction has the same domain (x-range) as Function1. If the range (y-value range) of
Function1 is larger than the domain of Function2, the parameter Border determines the border treatment of
Function2. For Border=’zero’ values outside the domain of Function2 are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, and for
Border=’cyclic’ they are continued cyclically. To obtain y-values, Function2 is interpolated linearly.
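For illustration, a minimal example in HDevelop syntax (the function values are arbitrary):

* Function1 takes y values in [0,4]; Function2 is defined for x in [0,2]
create_funct_1d_array ([0.0,2.0,4.0], Function1)
create_funct_1d_array ([1.0,3.0,5.0], Function2)
* values of Function1 that exceed the domain of Function2 are treated
* according to Border; with ’constant’ they are clipped to the border value
compose_funct_1d (Function1, Function2, ’constant’, ComposedFunction)
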
Parameter
. Function1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 1.
. Function2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 2.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for the input functions.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. ComposedFunction (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Composed function.
Parallelization Information
compose_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_create_funct_1d_array ( const Htuple YValues, Htuple *Function )

Create a function from a sequence of y-values.


create_funct_1d_array creates a one-dimensional function from a set of y-values YValues. The resulting
function can then be processed and analyzed with the operators for 1d functions. YValues is interpreted as
follows: the first value of YValues is the function value at zero, the second value is the function value at one, etc.
Thus, the values define a function at equidistant x values (with distance 1), starting at 0.
Alternatively, the operator create_funct_1d_pairs can be used to create a function. create_funct_1d_pairs also allows defining a function with non-equidistant x values by specifying them explicitly. Thus, to get the same definition as with create_funct_1d_array, one would pass a tuple of x values to create_funct_1d_pairs that has the same length as YValues and contains values starting at 0 and increasing by 1 in each position (see the sketch below). Note, however, that create_funct_1d_pairs leads to a different internal representation of the function, which needs more storage (because all (x,y) pairs are stored) and sometimes cannot be processed as efficiently as functions created by create_funct_1d_array.
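A minimal sketch in HDevelop syntax illustrating this equivalence (the y values are arbitrary):

* equidistant definition: function values at x = 0, 1, 2, 3
create_funct_1d_array ([2.0,4.0,8.0,16.0], Function1)
* equivalent definition with explicitly given x values
create_funct_1d_pairs ([0,1,2,3], [2.0,4.0,8.0,16.0], Function2)
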
Parameter

. YValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Y values of the function points.
. Function (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Created function.
Parallelization Information
create_funct_1d_array is reentrant and processed without parallelization.
Possible Successors
write_funct_1d, gnuplot_plot_funct_1d, y_range_funct_1d, get_pair_funct_1d,
transform_funct_1d
Alternatives
create_funct_1d_pairs, read_funct_1d
See also
funct_1d_to_pairs
Module
Foundation

T_create_funct_1d_pairs ( const Htuple XValues, const Htuple YValues,
                          Htuple *Function )

Create a function from a set of (x,y) pairs.


create_funct_1d_pairs creates a one-dimensional function from a set of pairs of (x,y) values. The
XValues of the functions have to be passed in ascending order. The resulting function can then be processed
and analyzed with the operators for 1d functions.
Alternatively, functions can be created with the operator create_funct_1d_array. In contrast to this oper-
ator, x values with arbitrary positions can be specified with create_funct_1d_pairs. Hence, it is the more
general operator. It should be noted, however, that because of this generality the processing of a function created
with create_funct_1d_pairs cannot be carried out as efficiently as for equidistant functions. In particular,
not all operators accept such functions. If necessary, a function can be transformed into an equidistant function
with the operator sample_funct_1d.
Parameter

. XValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong


X value for function points.
. YValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Y-value for function points.
. Function (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Created function.
Parallelization Information
create_funct_1d_pairs is reentrant and processed without parallelization.
Possible Successors
write_funct_1d, gnuplot_plot_funct_1d, y_range_funct_1d, get_pair_funct_1d


Alternatives
create_funct_1d_array, read_funct_1d
See also
funct_1d_to_pairs
Module
Foundation

T_derivate_funct_1d ( const Htuple Function, const Htuple Mode, Htuple *Derivative )

Calculate the derivatives of a function.


derivate_funct_1d calculates the derivatives of the function Function up to the second degree. It uses a finite difference approximation of order O(h²). The derivative is also a function with the same sampling points as Function. With the parameter Mode, the ’first’ or ’second’ derivative can be selected.
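For illustration, a minimal example in HDevelop syntax (the sampled parabola is an arbitrary example):

* y = x^2 sampled at x = 0, 1, 2, 3, 4
create_funct_1d_array ([0.0,1.0,4.0,9.0,16.0], Function)
* the first derivative is approximately 2*x at the sampling points
derivate_funct_1d (Function, ’first’, Derivative)
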
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of derivative
Default Value : "first"
List of values : Mode ∈ {"first", "second"}
. Derivative (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Derivative of the input function
Parallelization Information
derivate_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, smooth_funct_1d_gauss,
smooth_funct_1d_mean
Module
Foundation

T_distance_funct_1d ( const Htuple Function1, const Htuple Function2,
                      const Htuple Mode, const Htuple Sigma, Htuple *Distance )

Compute the distance of two functions.


distance_funct_1d calculates the distance of two functions. The two functions may differ in length.
Parameter
. Function1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 1.
. Function2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 2.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Modes of invariants.
Default Value : "length"
List of values : Mode ∈ {"length", "mean"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
Variance of the optional smoothing with a Gaussian filter.
Default Value : 0.0
Suggested values : Sigma ∈ {0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0, 15.0, 20.0, 25.0, 30.0, 40.0, 50.0}
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double * / Hlong *
Distance of the functions.


Parallelization Information
distance_funct_1d is reentrant and processed without parallelization.
Module
Foundation

T_funct_1d_to_pairs ( const Htuple Function, Htuple *XValues, Htuple *YValues )

Access to the x/y values of a function.


funct_1d_to_pairs splits the input function Function into tuples for the x and y values.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. XValues (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
X values of the function.
. YValues (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Y values of the function.
Parallelization Information
funct_1d_to_pairs is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_get_pair_funct_1d ( const Htuple Function, const Htuple Index, Htuple *X, Htuple *Y )

Access a function value using the index of the control points.


get_pair_funct_1d accesses a function value of Function. This is done by specifying the index of one or
more control points of the function.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong
Index of the control points.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
X value at the given control points.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Y value at the given control points.
Parallelization Information
get_pair_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_get_y_value_funct_1d ( const Htuple Function, const Htuple X,
                         const Htuple Border, Htuple *Y )

Return the value of a function at an arbitrary position.


get_y_value_funct_1d returns the y value of the function Function at the x coordinates specified by X. To
obtain the y values, the input function is interpolated linearly. The parameter Border determines the values of the
function Function outside of its domain. For Border=’zero’ these values are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, for
Border=’cyclic’ they are continued cyclically, and for Border=’error’ an exception handling is raised.
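For illustration, a minimal example in HDevelop syntax (the function values are arbitrary):

* function values 3, 5, 7 at x = 0, 1, 2
create_funct_1d_array ([3.0,5.0,7.0], Function)
* linear interpolation between the sampling points: Y = 6.0
get_y_value_funct_1d (Function, 1.5, ’constant’, Y)
* outside the domain, ’constant’ returns the border value: YRight = 7.0
get_y_value_funct_1d (Function, 10.0, ’constant’, YRight)
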
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
X coordinate at which the function should be evaluated.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for the input function.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic", "error"}
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Y value at the given x value.
Parallelization Information
get_y_value_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_integrate_funct_1d ( const Htuple Function, Htuple *Positive, Htuple *Negative )

Compute the positive and negative areas of a function.


integrate_funct_1d integrates the function Function (see create_funct_1d_array and
create_funct_1d_pairs) and returns the integral of the positive and negative parts of the function in
Positive and Negative, respectively. Hence, the integral of the function is the difference Positive -
Negative. The integration is done on the interval on which the function is defined. For the integration, the
function is interpolated linearly.
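For illustration, a minimal example in HDevelop syntax (the function values are arbitrary):

* function values -1, 0, 1 at x = 0, 1, 2
create_funct_1d_array ([-1.0,0.0,1.0], Function)
* with linear interpolation, Positive = 0.5 and Negative = 0.5,
* so the total integral Positive - Negative is 0.0
integrate_funct_1d (Function, Positive, Negative)
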
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double
Input function.
. Positive (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *
Area under the positive part of the function.
. Negative (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Area under the negative part of the function.
Parallelization Information
integrate_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
See also
create_funct_1d_array, create_funct_1d_pairs
Module
Foundation

T_invert_funct_1d ( const Htuple Function, Htuple *InverseFunction )

Calculate the inverse of a function.


invert_funct_1d calculates the inverse function of the input function Function and returns it in
InverseFunction. The function Function must be monotonic. If this is not the case an error message
is returned.
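For illustration, a minimal example in HDevelop syntax (the monotonic function is an arbitrary example):

* strictly monotonic function y = 2*x at x = 0, 1, 2, 3
create_funct_1d_array ([0.0,2.0,4.0,6.0], Function)
* the inverse maps the y values back to the x values, i.e., y = x/2
invert_funct_1d (Function, InverseFunction)
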
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function.
. InverseFunction (output_control) . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Inverse of the input function.
Parallelization Information
invert_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_local_min_max_funct_1d ( const Htuple Function, const Htuple Mode,
                           const Htuple Interpolation, Htuple *Min, Htuple *Max )

Calculate the local minimum and maximum points of a function.


local_min_max_funct_1d searches for the local minima Min and maxima Max of the function Function. Since the function values are only known at discrete sampling points, the function can be interpolated by parabolas between these points; setting the parameter Interpolation to ’true’ enables this feature. If Interpolation is ’false’, extrema are always sampling points.
If Mode is set to ’strict_min_max’, extrema are only calculated close to points with a function value that is strictly
smaller or strictly greater than the values of its direct neighbors.
If Mode is set to ’plateaus_center’, areas with a function value that is constant throughout several sampling points
are also considered. If such an area is identified as being a flat extremum, its center coordinate is returned.
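For illustration, a minimal sketch in HDevelop syntax (the function values are arbitrary; the exact positions of the returned extrema also depend on Interpolation):

create_funct_1d_array ([1.0,3.0,1.0,1.0,1.0,4.0,0.5], Function)
* ’strict_min_max’: only the strict maxima near x = 1 and x = 5 are reported
local_min_max_funct_1d (Function, ’strict_min_max’, ’false’, Min1, Max1)
* ’plateaus_center’: the center of the flat minimum (around x = 3) is reported as well
local_min_max_funct_1d (Function, ’plateaus_center’, ’false’, Min2, Max2)
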
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Handling of plateaus
Default Value : "strict_min_max"
List of values : Mode ∈ {"strict_min_max", "plateaus_center"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation of the input function
Default Value : "true"
List of values : Interpolation ∈ {"true", "false"}
. Min (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Minimum points of the input function
. Max (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Maximum points of the input function
Parallelization Information
local_min_max_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, smooth_funct_1d_gauss,
smooth_funct_1d_mean
Module
Foundation


T_match_funct_1d_trans ( const Htuple Function1, const Htuple Function2,
                         const Htuple Border, const Htuple ParamsConst,
                         const Htuple UseParams, Htuple *Params, Htuple *ChiSquare,
                         Htuple *Covar )

Calculate transformation parameters between two functions.


match_funct_1d_trans calculates the transformation parameters between two functions given as the tuples
Function1 and Function2 (see create_funct_1d_array and create_funct_1d_pairs). The
following model is used for the transformation between the two functions:

y_1(x) = a_1 \, y_2(a_3 x + a_4) + a_2 .

The transformation parameters are determined by a least-squares minimization of the following function:

\sum_{i=0}^{n-1} \bigl( y_1(x_i) - (a_1 \, y_2(a_3 x_i + a_4) + a_2) \bigr)^2 .

The values of the function y2 are obtained by linear interpolation. The parameter Border determines the val-
ues of the function Function2 outside of its domain. For Border=’zero’ these values are set to 0, for
Border=’constant’ they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored
at the border, and for Border=’cyclic’ they are continued cyclically. The calculated transformation parameters
are returned as a 4-tuple in Params. If some of the parameter values are known, the respective parameters can
be excluded from the least-squares adjustment by setting the corresponding value in the tuple UseParams to the
value ’false’. In this case, the tuple ParamsConst must contain the known value of the respective parameter. If
a parameter is used for the adjustment (UseParams = ’true’), the corresponding parameter in ParamsConst is
ignored. On output, match_funct_1d_trans additionally returns the sum of the squared errors ChiSquare
of the resulting function, i.e., the function obtained by transforming the input function with the transformation pa-
rameters, as well as the covariance matrix Covar of the transformation parameters Params. These parameters
can be used to decide whether a successful matching of the functions was possible.
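For illustration, a minimal sketch in HDevelop syntax (the functions and the fixed parameters a_3 = 1, a_4 = 0 are arbitrary examples):

* reference function y = x^2 at x = 0..4
create_funct_1d_array ([0.0,1.0,4.0,9.0,16.0], Function2)
* scaled and shifted version: Function1 = 2 * Function2 + 3
scale_y_funct_1d (Function2, 2.0, 3.0, Function1)
* estimate a1 and a2 only; a3 and a4 are kept fixed at 1 and 0
match_funct_1d_trans (Function1, Function2, ’constant’, [1.0,0.0,1.0,0.0],
                      [’true’,’true’,’false’,’false’], Params, ChiSquare, Covar)
* Params should come out close to [2.0, 3.0, 1.0, 0.0]
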
Parameter

. Function1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Function 1.
. Function2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Function 2.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for function 2.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. ParamsConst (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Values of the parameters to remain constant.
Default Value : [1.0,0.0,1.0,0.0]
Number of elements : 4
. UseParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
Specifies for each transformation parameter whether it should be adapted.
Default Value : ["true","true","true","true"]
List of values : UseParams ∈ {"true", "false"}
Number of elements : 4
. Params (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Transformation parameters between the functions.
Number of elements : 4
. ChiSquare (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *
Quadratic error of the output function.
. Covar (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Covariance Matrix of the transformation parameters.
Number of elements : 16


Parallelization Information
match_funct_1d_trans is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_array, create_funct_1d_pairs
See also
gray_projections
Module
Foundation

T_negate_funct_1d ( const Htuple Function, Htuple *FunctionInverted )

Negation of the y values.


negate_funct_1d negates all y values of Function.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. FunctionInverted (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Function with the negated y values.
Parallelization Information
negate_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_num_points_funct_1d ( const Htuple Function, Htuple *Length )

Number of control points of the function.


num_points_funct_1d calculates the number of control points of Function.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. Length (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong *
Number of control points.
Parallelization Information
num_points_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

read_funct_1d ( const char *FileName, double *Function )


T_read_funct_1d ( const Htuple FileName, Htuple *Function )

Read a function from a file.


The operator read_funct_1d reads the contents of FileName and converts it into the function Function. The file must have been generated by write_funct_1d.


Parameter

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *


Name of the file to be read.
. Function (output_control) . . . . . . . . . . . . . . . . . . . . . . . . function_1d(-array) ; (Htuple .) double * / Hlong *
Function from the file.
Result
If the parameters are correct the operator read_funct_1d returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
read_funct_1d is reentrant and processed without parallelization.
Alternatives
fread_string, read_tuple
See also
write_funct_1d, gnuplot_plot_ctrl, write_image, write_region, open_file
Module
Foundation

T_sample_funct_1d ( const Htuple Function, const Htuple XMin, const Htuple XMax,
                    const Htuple XDist, const Htuple Border, Htuple *SampledFunction )

Sample a function equidistantly in an interval.


sample_funct_1d samples the input function Function in the interval [XMin,XMax] at equidistant points
with the distance XDist. The last point lies in the interval if XMax-XMin is not an integer multiple of XDist. To
obtain the samples, the input function is interpolated linearly. The parameter Border determines the values of the
function Function outside of its domain. For Border=’zero’ these values are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, and for
Border=’cyclic’ they are continued cyclically.
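For illustration, a minimal example in HDevelop syntax (the function values are arbitrary):

* non-equidistant function defined by (x,y) pairs
create_funct_1d_pairs ([0.0,1.0,4.0], [0.0,2.0,8.0], Function)
* resample it equidistantly on [0,4] with a spacing of 0.5
sample_funct_1d (Function, 0.0, 4.0, 0.5, ’constant’, SampledFunction)
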
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function.
. XMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Minimum x value of the output function.
. XMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Maximum x value of the output function.
Restriction : XMax > XMin
. XDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Distance of the samples.
Restriction : XDist > 0
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for the input function.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. SampledFunction (output_control) . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Sampled function.
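Example (Syntax: C)

The following sketch is not part of the original operator description. It assumes the tuple-mode binding T_create_funct_1d_pairs(XValues, YValues, &Function) of create_funct_1d_pairs; all values are illustrative.

Htuple  XValues, YValues, Function;
Htuple  XMin, XMax, XDist, Border, SampledFunction;

/* function defined by three (x,y) control points */
create_tuple(&XValues, 3);
create_tuple(&YValues, 3);
set_d(XValues,  0.0, 0);  set_d(YValues, 1.0, 0);
set_d(XValues,  5.0, 1);  set_d(YValues, 4.0, 1);
set_d(XValues, 10.0, 2);  set_d(YValues, 2.0, 2);
T_create_funct_1d_pairs(XValues, YValues, &Function);

/* resample every 0.5 units on [0,10] with constant border treatment */
create_tuple(&XMin, 1);    set_d(XMin, 0.0, 0);
create_tuple(&XMax, 1);    set_d(XMax, 10.0, 0);
create_tuple(&XDist, 1);   set_d(XDist, 0.5, 0);
create_tuple(&Border, 1);  set_s(Border, "constant", 0);
T_sample_funct_1d(Function, XMin, XMax, XDist, Border, &SampledFunction);
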
Parallelization Information
sample_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
transform_funct_1d, create_funct_1d_array, create_funct_1d_pairs
Module
Foundation


T_scale_y_funct_1d ( const Htuple Function, const Htuple Mult,


const Htuple Add, Htuple *FunctionScaled )

Multiplication and addition of the y values.


scale_y_funct_1d multiplies the y values of Function by Mult and then adds Add to the result.
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Factor for scaling of the y values.
Default Value : 2
Suggested values : Mult ∈ {0.1, 0.3, 0.5, 1, 2, 5, 10}
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Constant which is added to the y values.
Default Value : 0
Suggested values : Add ∈ {-10, -5, 1, 0, 5, 10}
. FunctionScaled (output_control) . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Transformed function.
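Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array.

Htuple  YValues, Function, Mult, Add, FunctionScaled;

create_tuple(&YValues, 4);
set_d(YValues, 0.0, 0);
set_d(YValues, 1.0, 1);
set_d(YValues, 2.0, 2);
set_d(YValues, 3.0, 3);
T_create_funct_1d_array(YValues, &Function);

/* new y values: y' = 2 * y + 10 */
create_tuple(&Mult, 1);  set_d(Mult, 2.0, 0);
create_tuple(&Add, 1);   set_d(Add, 10.0, 0);
T_scale_y_funct_1d(Function, Mult, Add, &FunctionScaled);
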
Parallelization Information
scale_y_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_smooth_funct_1d_gauss ( const Htuple Function, const Htuple Sigma,


Htuple *SmoothedFunction )

Smooth an equidistant 1D function with a Gaussian function.


The operator smooth_funct_1d_gauss smooths a one-dimensional function with a Gaussian function. The
function must be equidistant, i.e., created with create_funct_1d_array, sample_funct_1d or similar.
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Function to be smoothed.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of the Gaussian function for the smoothing.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.1 ≤ Sigma ≤ 50.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.2
. SmoothedFunction (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Smoothed function.
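Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array and uses an artificial input signal.

Htuple  YValues, Function, Sigma, SmoothedFunction;
Hlong   i;

/* equidistant function with a ragged profile */
create_tuple(&YValues, 100);
for (i = 0; i < 100; i++)
  set_d(YValues, (double)(i % 7), i);
T_create_funct_1d_array(YValues, &Function);

/* smooth with a Gaussian of sigma 2.0 */
create_tuple(&Sigma, 1);
set_d(Sigma, 2.0, 0);
T_smooth_funct_1d_gauss(Function, Sigma, &SmoothedFunction);
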
Parallelization Information
smooth_funct_1d_gauss is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Possible Successors
match_funct_1d_trans, distance_funct_1d
Module
Foundation


T_smooth_funct_1d_mean ( const Htuple Function,


const Htuple SmoothSize, const Htuple Iterations,
Htuple *SmoothedFunction )

Smooth an equidistant 1D function by averaging its values.


The operator smooth_funct_1d_mean smooths a one-dimensional function by applying an average (mean)
filter multiple times. The function must be equidistant, i.e., created with create_funct_1d_array,
sample_funct_1d or similar.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . Hlong / double
1D function.
. SmoothSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of the averaging mask.
Default Value : 10
Suggested values : SmoothSize ∈ {1, 3, 5, 7, 9, 11, 13, 15, 21, 31, 51}
Typical range of values : 1 ≤ SmoothSize ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : SmoothSize > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of iterations for the smoothing.
Default Value : 3
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9}
Typical range of values : 1 ≤ Iterations ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Iterations ≥ 1
. SmoothedFunction (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Smoothed function.
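Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array and uses an artificial input signal.

Htuple  YValues, Function, SmoothSize, Iterations, SmoothedFunction;
Hlong   i;

create_tuple(&YValues, 100);
for (i = 0; i < 100; i++)
  set_d(YValues, (double)(i % 5), i);
T_create_funct_1d_array(YValues, &Function);

/* apply a 5-point mean filter three times */
create_tuple(&SmoothSize, 1);  set_i(SmoothSize, 5, 0);
create_tuple(&Iterations, 1);  set_i(Iterations, 3, 0);
T_smooth_funct_1d_mean(Function, SmoothSize, Iterations, &SmoothedFunction);
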
Parallelization Information
smooth_funct_1d_mean is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_array
Alternatives
smooth_funct_1d_gauss
Module
Foundation

T_transform_funct_1d ( const Htuple Function, const Htuple Params,


Htuple *TransformedFunction )

Transform a function using given transformation parameters.


transform_funct_1d transforms the input function Function using the transformation parameters
given in Params. The function Function is passed as a tuple (see create_funct_1d_array and
create_funct_1d_pairs). The following model is used for the transformation between the two functions
(see match_funct_1d_trans):

y_t(x) = a1 * y(a3 * x + a4) + a2.

The output function TransformedFunction is obtained by transforming the x and y values of the input function separately with the above formula, i.e., the output function is not sampled again. Therefore, the parameter a3 is restricted to a3 ≠ 0.0. To resample a function, the operator sample_funct_1d can be used.


Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. Params (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Transformation parameters between the functions.
Number of elements : 4
. TransformedFunction (output_control) . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Transformed function.
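Example (Syntax: C)

The following sketch is not part of the original operator description. It assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array and that Params is ordered (a1, a2, a3, a4) as in the formula above; the values are illustrative.

Htuple  YValues, Function, Params, TransformedFunction;
Hlong   i;

create_tuple(&YValues, 10);
for (i = 0; i < 10; i++)
  set_d(YValues, (double)i, i);
T_create_funct_1d_array(YValues, &Function);

/* a1=2, a2=1, a3=1, a4=0:  y_t(x) = 2 * y(x) + 1 */
create_tuple(&Params, 4);
set_d(Params, 2.0, 0);
set_d(Params, 1.0, 1);
set_d(Params, 1.0, 2);
set_d(Params, 0.0, 3);
T_transform_funct_1d(Function, Params, &TransformedFunction);
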
Parallelization Information
transform_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, match_funct_1d_trans
Module
Foundation

T_write_funct_1d ( const Htuple Function, const Htuple FileName )

Write a function to a file.


The operator write_funct_1d writes the contents of Function to a file. The data is written in an ASCII
format. Therefore, the file can be exchanged between different architectures. The data can be read by the operator
read_funct_1d. There is no specific extension for this kind of file.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Function to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; Htuple . const char *
Name of the file to be written.
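Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array, and the file name is purely illustrative.

Htuple  YValues, Function, FileName;

create_tuple(&YValues, 3);
set_d(YValues, 1.0, 0);
set_d(YValues, 4.0, 1);
set_d(YValues, 9.0, 2);
T_create_funct_1d_array(YValues, &Function);

create_tuple(&FileName, 1);
set_s(FileName, "my_function.fd", 0);
T_write_funct_1d(Function, FileName);
/* the file can later be read back with read_funct_1d */
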
Result
If the parameters are correct the operator write_funct_1d returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
write_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Alternatives
write_tuple, fwrite_string
See also
read_funct_1d, write_image, write_region, open_file
Module
Foundation

T_x_range_funct_1d ( const Htuple Function, Htuple *XMin,


Htuple *XMax )

Smallest and largest x value of the function.


x_range_funct_1d calculates the smallest and the largest x value of Function.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. XMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *
Smallest x value.


. XMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *


Largest x value.
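Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_pairs(XValues, YValues, &Function) of create_funct_1d_pairs.

Htuple  XValues, YValues, Function, XMin, XMax;
double  x_min, x_max;

create_tuple(&XValues, 3);
create_tuple(&YValues, 3);
set_d(XValues, -2.0, 0);  set_d(YValues, 0.0, 0);
set_d(XValues,  1.0, 1);  set_d(YValues, 5.0, 1);
set_d(XValues,  7.5, 2);  set_d(YValues, 3.0, 2);
T_create_funct_1d_pairs(XValues, YValues, &Function);

T_x_range_funct_1d(Function, &XMin, &XMax);
x_min = get_d(XMin, 0);   /* -2.0 */
x_max = get_d(XMax, 0);   /*  7.5 */
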
Parallelization Information
x_range_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_y_range_funct_1d ( const Htuple Function, Htuple *YMin,


Htuple *YMax )

Smallest and largest y value of the function.


y_range_funct_1d calculates the smallest and the largest y value of Function.
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function.
. YMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *
Smallest y value.
. YMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double *
Largest y value.
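Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array.

Htuple  YValues, Function, YMin, YMax;
double  y_min, y_max;

create_tuple(&YValues, 4);
set_d(YValues,  3.0, 0);
set_d(YValues, -1.0, 1);
set_d(YValues,  8.0, 2);
set_d(YValues,  2.0, 3);
T_create_funct_1d_array(YValues, &Function);

T_y_range_funct_1d(Function, &YMin, &YMax);
y_min = get_d(YMin, 0);   /* -1.0 */
y_max = get_d(YMax, 0);   /*  8.0 */
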
Parallelization Information
y_range_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation

T_zero_crossings_funct_1d ( const Htuple Function,


Htuple *ZeroCrossings )

Calculate the zero crossings of a function.


zero_crossings_funct_1d calculates the zero crossings ZeroCrossings of the function Function.
A linear interpolation is applied to the function between its sampling points so that the coordinates of the zero
crossing can be calculated exactly. If an entire line segment between two sampling points has a value of 0, only
the end points of its supporting interval are returned.
Parameter

. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong


Input function.
. ZeroCrossings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Zero crossings of the input function.
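Example (Syntax: C)

The following sketch is not part of the original operator description; it assumes the tuple-mode binding T_create_funct_1d_array(YValues, &Function) of create_funct_1d_array.

Htuple  YValues, Function, ZeroCrossings;
double  first_crossing;

/* y values -1, 1, 3 at x = 0, 1, 2  ->  one zero crossing at x = 0.5 */
create_tuple(&YValues, 3);
set_d(YValues, -1.0, 0);
set_d(YValues,  1.0, 1);
set_d(YValues,  3.0, 2);
T_create_funct_1d_array(YValues, &Function);

T_zero_crossings_funct_1d(Function, &ZeroCrossings);
first_crossing = get_d(ZeroCrossings, 0);   /* 0.5 for this input */
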
Parallelization Information
zero_crossings_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, smooth_funct_1d_gauss,
smooth_funct_1d_mean
Module
Foundation


15.9 Geometry

angle_ll ( double RowA1, double ColumnA1, double RowA2, double ColumnA2,


double RowB1, double ColumnB1, double RowB2, double ColumnB2,
double *Angle )

T_angle_ll ( const Htuple RowA1, const Htuple ColumnA1,


const Htuple RowA2, const Htuple ColumnA2, const Htuple RowB1,
const Htuple ColumnB1, const Htuple RowB2, const Htuple ColumnB2,
Htuple *Angle )

Calculate the angle between two lines.


The operator angle_ll calculates the angle between two lines. As input the coordinates of two points on the first
line (RowA1,ColumnA1, RowA2,ColumnA2) and on the second line (RowB1,ColumnB1, RowB2,ColumnB2)
are expected. The calculation is performed as follows: We interpret the lines as vectors with starting points
RowA1,ColumnA1 and RowB1,ColumnB1 and end points RowA2,ColumnA2 and RowB2,ColumnB2, respec-
tively. Rotating the vector A counter clockwise onto the vector B (the center of rotation is the intersection point
of the two lines) yields the angle. The result depends on the order of the points and on the order of the lines. The
parameter Angle returns the angle in radians, ranging from −π ≤ Angle ≤ π.
Parameter
. RowA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the first line.
. ColumnA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the first line.
. RowA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the first line.
. ColumnA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the first line.
. RowB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the second line.
. ColumnB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the second line.
. RowB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the second line.
. ColumnB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the second line.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Angle between the lines [rad].
Example (Syntax: HDevelop)

RowA1 := 255
ColumnA1 := 10
RowA2 := 255
ColumnA2 := 501
disp_line (WindowHandle, RowA1, ColumnA1, RowA2, ColumnA2)
RowB1 := 255
ColumnB1 := 255
for i := 1 to 360 by 1
RowB2 := 255 + sin(rad(i)) * 200
ColumnB2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, RowB1, ColumnB1, RowB2, ColumnB2)
angle_ll (RowA1, ColumnA1, RowA2, ColumnA2,
RowB1, ColumnB1, RowB2, ColumnB2, Angle)
endfor

Result
angle_ll returns H_MSG_TRUE.


Parallelization Information
angle_ll is reentrant and processed without parallelization.
Alternatives
angle_lx
Module
Foundation

angle_lx ( double Row1, double Column1, double Row2, double Column2,


double *Angle )

T_angle_lx ( const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2, Htuple *Angle )

Calculate the angle between one line and the vertical axis.
The operator angle_lx calculates the angle between one line and the abscissa. As input the coordinates of two
points on the line (Row1,Column1, Row2,Column2) are expected. The calculation is performed as follows: We
interpret the line as a vector with starting point Row1,Column1 and end point Row2,Column2. Rotating the
vector counter clockwise onto the abscissa (the center of rotation is the intersection point of the line and the abscissa) yields the
angle. The result depends on the order of the points on the line. The parameter Angle returns the angle in radians,
ranging from −π ≤ Angle ≤ π.
Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Angle between the line and the abscissa [rad].
Example (Syntax: HDevelop)

RowX1 := 255
ColumnX1 := 10
RowX2 := 255
ColumnX2 := 501
disp_line (WindowHandle, RowX1, ColumnX1, RowX2, ColumnX2)
Row1 := 255
Column1 := 255
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
angle_lx (Row1, Column1, Row2, Column2, Angle)
endfor

Result
angle_lx returns H_MSG_TRUE.
Parallelization Information
angle_lx is reentrant and processed without parallelization.
Alternatives
angle_ll
Module
Foundation


distance_cc ( const Hobject Contour1, const Hobject Contour2,


const char *Mode, double *DistanceMin, double *DistanceMax )

T_distance_cc ( const Hobject Contour1, const Hobject Contour2,


const Htuple Mode, Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between two contours.


The operator distance_cc calculates the minimum and maximum distance between the base points of two contours (Contour1 and Contour2). The parameters DistanceMin and DistanceMax contain the resulting distances.
The parameter Mode sets the type of distance computation: ’point_to_point’ only determines the minimum and maximum distance between the base points of the contours. This results in a faster algorithm but may lead to inaccurate minimum distances. In contrast, ’point_to_segment’ determines the actual minimum distance of the contour segments.
In both cases, the search algorithm has a quadratic complexity (n*n). If only the minimum distance is required, the operator distance_cc_min can be used alternatively since it offers algorithms with a complexity of n*log(n).
Parameter
. Contour1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
First input contour.
. Contour2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Second input contour.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Distance calculation mode.
Default Value : "point_to_point"
List of values : Mode ∈ {"point_to_point", "point_to_segment"}
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between both contours.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between both contours.
Example

gen_contour_polygon_rounded_xld(Cont1, [0,100,100,0,0], [0,0,100,100,0],


[50,50,50,50,50], 0.5);
gen_contour_polygon_rounded_xld(Cont2, [41,91,91,41,41], [41,41,91,91,41],
[25,25,25,25,25], 0.5);
distance_cc(Cont1, Cont2, ’point_to_point’, &distance_min, &distance_max);

Result
distance_cc returns H_MSG_TRUE.
Parallelization Information
distance_cc is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc_min
See also
distance_sr, distance_pr
Module
Foundation

distance_cc_min ( const Hobject Contour1, const Hobject Contour2,


const char *Mode, double *DistanceMin )

T_distance_cc_min ( const Hobject Contour1, const Hobject Contour2,


const Htuple Mode, Htuple *DistanceMin )

Calculate the minimum distance between two contours.


distance_cc_min calculates the minimum distance between two contours Contour1 and Contour2. The
minimum distance is returned in DistanceMin.
The parameter Mode sets the type of computing the distance. ’point_to_point’ determines the distance of the
closest contour points, ’fast_point_to_segment’ calculates the distance of the line segments adjacent to these points,
and ’point_to_segment’ determines the actual minimum distance of the contour segments.
While ’point_to_point’ and ’fast_point_to_segment’ are efficient algorithms with a complexity of n*log(n),
’point_to_segment’ has quadratic complexity and thus takes a longer time to execute, especially for contours with
many line segments.
Parameter

. Contour1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject


First input contour.
. Contour2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Second input contour.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Distance calculation mode.
Default Value : "fast_point_to_segment"
List of values : Mode ∈ {"point_to_point", "point_to_segment", "fast_point_to_segment"}
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the two contours.
Example

gen_contour_polygon_rounded_xld(Cont1, [0,100,100,0,0], [0,0,100,100,0],


[50,50,50,50,50], 0.5);
gen_contour_polygon_rounded_xld(Cont2, [41,91,91,41,41], [41,41,91,91,41],
[25,25,25,25,25], 0.5);
distance_cc_min(Cont1, Cont2, "fast_point_to_segment", &distance_min);

Result
distance_cc_min returns H_MSG_TRUE.
Parallelization Information
distance_cc_min is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc
See also
distance_sr, distance_pr
Module
Foundation

distance_lc ( const Hobject Contour, double Row1, double Column1,


double Row2, double Column2, double *DistanceMin,
double *DistanceMax )

T_distance_lc ( const Hobject Contour, const Htuple Row1,


const Htuple Column1, const Htuple Row2, const Htuple Column2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a line and one contour.


The operator distance_lc calculates the orthogonal distance between a line and the segments of one contour.
As input the coordinates of two points on a line (Row1,Column1, Row2,Column2) and one contour (Contour)
are expected. The parameters DistanceMin and DistanceMax return the result of the calculation.


Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line and the contour.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line and the contour.
Result
distance_lc returns H_MSG_TRUE.
Parallelization Information
distance_lc is reentrant and processed without parallelization.
Alternatives
distance_pc, distance_sc, distance_cc, distance_cc_min
See also
distance_lr, distance_pr, distance_sr
Module
Foundation

distance_lr ( const Hobject Region, double Row1, double Column1,


double Row2, double Column2, double *DistanceMin,
double *DistanceMax )

T_distance_lr ( const Hobject Region, const Htuple Row1,


const Htuple Column1, const Htuple Row2, const Htuple Column2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a line and a region.


The operator distance_lr calculates the orthogonal distance between a line and one region. As input the coor-
dinates of two points on a line (Row1,Column1, Row2,Column2) and one region are expected. The parameters
DistanceMin and DistanceMax return the result of the calculation.
Attention
For efficiency reasons, holes in the region are ignored by distance_lr.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Input region.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line and the region


. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *


Maximum distance between the line and the region
Example (Syntax: HDevelop)

dev_close_window ()
read_image (Image, ’fabrik’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
5000, 100000000)
dev_clear_window ()
dev_set_color (’black’)
dev_display (SelectedRegions)
dev_set_color (’red’)
Row1 := 100
Row2 := 400
for Col := 50 to 400 by 4
disp_line (WindowHandle, Row1, Col+100, Row2, Col)
distance_lr (SelectedRegions, Row1, Col+100, Row2, Col,
DistanceMin, DistanceMax)
endfor

Result
distance_lr returns H_MSG_TRUE.
Parallelization Information
distance_lr is reentrant and processed without parallelization.
Alternatives
distance_lc, distance_pr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation

distance_pc ( const Hobject Contour, double Row, double Column,


double *DistanceMin, double *DistanceMax )

T_distance_pc ( const Hobject Contour, const Htuple Row,


const Htuple Column, Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a point and one contour.


The operator distance_pc calculates the distance between points and one contour. As input the coordinates
of the points (Row,Column) and one contour (Contour) are expected. The parameters DistanceMin and
DistanceMax return the result of the calculation.
Parameter

. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject


Input contour.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the point.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the point.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the point and the contour.


. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *


Maximum distance between the point and the contour.
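Example (Syntax: C)

The following sketch is not part of the original operator description. The contour is generated with gen_contour_polygon_xld, whose tuple-mode C binding T_gen_contour_polygon_xld(&Contour, Row, Col) is assumed here; the coordinates are illustrative.

Hobject Contour;
Htuple  Row, Col;
double  distance_min, distance_max;

/* closed square contour with corners (0,0), (0,100), (100,100), (100,0) */
create_tuple(&Row, 5);
create_tuple(&Col, 5);
set_d(Row,   0.0, 0);  set_d(Col,   0.0, 0);
set_d(Row,   0.0, 1);  set_d(Col, 100.0, 1);
set_d(Row, 100.0, 2);  set_d(Col, 100.0, 2);
set_d(Row, 100.0, 3);  set_d(Col,   0.0, 3);
set_d(Row,   0.0, 4);  set_d(Col,   0.0, 4);
T_gen_contour_polygon_xld(&Contour, Row, Col);

/* distance of the point (50,150) to the contour */
distance_pc(Contour, 50.0, 150.0, &distance_min, &distance_max);
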
Result
distance_pc returns H_MSG_TRUE.
Parallelization Information
distance_pc is reentrant and processed without parallelization.
Alternatives
distance_lc, distance_sc, distance_cc, distance_cc_min
See also
distance_pr, distance_lr, distance_sr, hamming_distance, select_xld_point,
test_xld_point
Module
Foundation

distance_pl ( double Row, double Column, double Row1, double Column1,


double Row2, double Column2, double *Distance )

T_distance_pl ( const Htuple Row, const Htuple Column,


const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2, Htuple *Distance )

Calculate the distance between one point and one line.


The operator distance_pl calculates the orthogonal distance between points (Row,Column) and lines, given
by two arbitrary points on the line. The result is passed in Distance.
distance_pl calculates the distances between a set of n points and one line as well as the distances between a
set of n points and n lines.
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the point.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the point.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Distance between the points.
Example

double row,column,row1,column1,row2,column2,distance;

draw_point(WindowHandle,&row,&column);
draw_line(WindowHandle,&row1,&column1,&row2,&column2);
distance_pl(row,column,row1,column1,row2,column2,&distance);

Result
distance_pl returns H_MSG_TRUE.
Parallelization Information
distance_pl is reentrant and processed without parallelization.


Alternatives
distance_ps
See also
distance_pp, distance_pr
Module
Foundation

distance_pp ( double Row1, double Column1, double Row2, double Column2,


double *Distance )

T_distance_pp ( const Htuple Row1, const Htuple Column1,


const Htuple Row2, const Htuple Column2, Htuple *Distance )

Calculate the distance between two points.


The operator distance_pp calculates the distance between pairs of points according to the following formula:
Distance = sqrt((Row1 − Row2)^2 + (Column1 − Column2)^2)

The result is returned in Distance.


Parameter

. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong


Row coordinate of the first point.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Distance between the points.
Example

double row1,column1,row2,column2,distance;

draw_point(WindowHandle,&row1,&column1);
draw_point(WindowHandle,&row2,&column2);
distance_pp(row1,column1,row2,column2,&distance);

Result
distance_pp returns H_MSG_TRUE.
Parallelization Information
distance_pp is reentrant and processed without parallelization.
Alternatives
distance_ps
See also
distance_pl, distance_pr
Module
Foundation


distance_pr ( const Hobject Region, double Row, double Column,


double *DistanceMin, double *DistanceMax )

T_distance_pr ( const Hobject Region, const Htuple Row,


const Htuple Column, Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a point and a region.


The operator distance_pr calculates the distance between a point and one region. As input the coordinates of
the points (Row,Column) and one region are expected. If a point is inside of the region, its minimum distance is
zero. The parameters DistanceMin and DistanceMax return the result of the calculation.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Input region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the point.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the point.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the point and the region.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the point and the region.
Example (Syntax: HDevelop)

dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
Row1 := 255
Column1 := 255
dev_clear_window ()
dev_display (SelectedRegions)
dev_set_color (’red’)
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
distance_pr (SelectedRegions, Row2, Column2,
DistanceMin, DistanceMax)
endfor

Result
distance_pr returns H_MSG_TRUE.
Parallelization Information
distance_pr is reentrant and processed without parallelization.
Alternatives
distance_pc, distance_lr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation


distance_ps ( double Row, double Column, double Row1, double Column1,


double Row2, double Column2, double *DistanceMin,
double *DistanceMax )

T_distance_ps ( const Htuple Row, const Htuple Column,


const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2, Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distances between a point and a line segment.


The operator distance_ps calculates the minimum and maximum distance between a point (Row,Column)
and a line segment which is represented by the start point (Row1,Column1) and the end point (Row2,Column2).
DistanceMax is the maximum distance between the point and the end points of the line segment.
DistanceMin is identical to distance_pl in the case that the point is “between” the two endpoints. Other-
wise, the minimum distance to one of the end points is used.
Parameter

. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong


Row coordinate of the first point.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line segment.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line segment.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line segment.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line segment.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the point and the line segment.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the point and the line segment.
Example

double row = 5.0, column = 20.0;
double row1 = 0.0, column1 = 0.0, row2 = 0.0, column2 = 100.0;
double distance_min, distance_max;

distance_ps(row, column, row1, column1, row2, column2,
            &distance_min, &distance_max);

Result
distance_ps returns H_MSG_TRUE.
Parallelization Information
distance_ps is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_pp, distance_pr
Module
Foundation


distance_rr_min ( const Hobject Regions1, const Hobject Regions2,


double *MinDistance, Hlong *Row1, Hlong *Column1, Hlong *Row2,
Hlong *Column2 )

T_distance_rr_min ( const Hobject Regions1, const Hobject Regions2,


Htuple *MinDistance, Htuple *Row1, Htuple *Column1, Htuple *Row2,
Htuple *Column2 )

Minimum distance between the contour pixels of pairs of regions.


The operator distance_rr_min calculates the minimum distance of pairs of regions. If several regions are
passed in Regions1 and Regions2 the distance between the contour pixels of each i-th element is calculated
and then forms the i-th entry in the output parameter MinDistance. The Euclidean distance is used. The
parameters (Row1, Column1) and (Row2, Column2) indicate the position on the contour of Regions1 and
Regions2, respectively, that have the minimum distance.
The calculation is carried out by comparing all contour pixels (see get_region_contour). This means in
particular that each region must consist of exactly one connected component and that holes in the regions are
ignored. Furthermore, it is not checked whether one region lies completely within the other region. In this case, a
minimum distance > 0 is returned. It is also not checked whether both regions contain a nonempty intersection. In
the latter case, a minimum distance of 0 or > 0 can be returned, depending on whether the contours of the regions
contain a common point or not.
Attention
Both input parameters must contain the same number of regions. The regions must not be empty.
Parameter

. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. MinDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Minimum distance between contours of the regions.
Assertion : 0 ≤ MinDistance
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong *
Line index on contour in Regions1.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong *
Column index on contour in Regions1.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong *
Line index on contour in Regions2.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong *
Column index on contour in Regions2.
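Example (Syntax: C)

The following sketch is not part of the original operator description. The two regions are generated with gen_circle, whose simple-mode C binding gen_circle(&Circle, Row, Column, Radius) is assumed here; the values are illustrative.

Hobject Circle1, Circle2;
double  min_distance;
Hlong   row1, col1, row2, col2;

/* two disjoint circular regions */
gen_circle(&Circle1, 100.0, 100.0, 30.0);
gen_circle(&Circle2, 100.0, 300.0, 50.0);

distance_rr_min(Circle1, Circle2, &min_distance,
                &row1, &col1, &row2, &col2);
/* min_distance is approximately 200 - 30 - 50 = 120 for this input */
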
Complexity
If N 1,N 2 are the lengths of the contours the runtime complexity is O(N 1 ∗ N 2).
Result
The operator distance_rr_min returns the value H_MSG_TRUE if the input is not empty. Otherwise an
exception handling is raised.
Parallelization Information
distance_rr_min is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
distance_rr_min_dil, dilation1, intersection
Module
Foundation


distance_rr_min_dil ( const Hobject Regions1, const Hobject Regions2,


Hlong *MinDistance )

T_distance_rr_min_dil ( const Hobject Regions1,


const Hobject Regions2, Htuple *MinDistance )

Minimum distance between two regions with the help of dilation.


The operator distance_rr_min_dil calculates the minimum distance between pairs of regions. If several
regions are passed in Regions1 and Regions2 the distance between the i-th elements in each case is calculated.
It then forms the i-th entry in the output parameter MinDistance. The calculation is carried out with the help of
dilation with the Golay element ’h’. The result is:

NumberIterations ∗ 2 − 1.

The mask ’h’ has the effect that precisely the maximum metric is computed.
Attention
Both parameters must contain the same number of regions. The regions must not be empty.
Parameter

. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject


Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. MinDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Minimum distances of the regions.
Assertion : -1 ≤ MinDistance
Result
The operator distance_rr_min_dil returns the value H_MSG_TRUE if the input is not empty. Otherwise
an exception handling is raised.
Parallelization Information
distance_rr_min_dil is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
distance_rr_min, dilation1, intersection
Module
Foundation

distance_sc ( const Hobject Contour, double Row1, double Column1,


double Row2, double Column2, double *DistanceMin,
double *DistanceMax )

T_distance_sc ( const Hobject Contour, const Htuple Row1,


const Htuple Column1, const Htuple Row2, const Htuple Column2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a line segment and one contour.


The operator distance_sc calculates the distance between a line segment and the line segments of one contour.
Row1, Column1, Row2, Column2 are the start and end coordinates of a line segment, Contour represents the
input contour. The parameters DistanceMin and DistanceMax contain the resulting distances.


Parameter

. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject


Input contour.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line segment.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line segment.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line segment.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line segment.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line segment and the contour.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line segment and the contour.
Result
distance_sc returns H_MSG_TRUE.
Parallelization Information
distance_sc is reentrant and processed without parallelization.
Alternatives
distance_lc, distance_pc, distance_cc, distance_cc_min
See also
distance_sr, distance_lr, distance_pr, select_xld_point, test_xld_point
Module
Foundation

distance_sl ( double RowA1, double ColumnA1, double RowA2,


double ColumnA2, double RowB1, double ColumnB1, double RowB2,
double ColumnB2, double *DistanceMin, double *DistanceMax )

T_distance_sl ( const Htuple RowA1, const Htuple ColumnA1,


const Htuple RowA2, const Htuple ColumnA2, const Htuple RowB1,
const Htuple ColumnB1, const Htuple RowB2, const Htuple ColumnB2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distances between a line segment and a line.


The operator distance_sl calculates the minimum and maximum orthogonal distance between a line segment
and a line. As input the coordinates of two points on the line segment (RowA1,ColumnA1,RowA2,ColumnA2)
and on the line (RowB1,ColumnB1,RowB2,ColumnB2) are expected. The parameters DistanceMin and
DistanceMax return the result of the calculation. If the line segments are intersecting, DistanceMin returns
zero.
Parameter

. RowA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong


Row coordinate of the first point of the line segment.
. ColumnA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line segment.
. RowA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line segment.
. ColumnA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line segment.
. RowB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.


. ColumnB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong


Column coordinate of the first point of the line.
. RowB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. ColumnB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line segment and the line.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line segment and the line.
Example

create_tuple(&RowA1, 1);
set_i(RowA1, 8, 0);
create_tuple(&ColumnA1, 1);
set_i(ColumnA1, 7, 0);
create_tuple(&RowA2, 1);
set_i(RowA2, 15, 0);
create_tuple(&ColumnA2, 1);
set_i(ColumnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_distance_sl(RowA1,ColumnA1,RowA2,ColumnA2,RowB1,ColumnB1,RowB2,ColumnB2,
&distance_min,&distance_max);
aa_min = get_d(distance_min,0);
aa_max = get_d(distance_max,0);

Result
distance_sl returns H_MSG_TRUE.
Parallelization Information
distance_sl is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_ps, distance_pp
Module
Foundation

distance_sr ( const Hobject Region, double Row1, double Column1,


double Row2, double Column2, double *DistanceMin,
double *DistanceMax )

T_distance_sr ( const Hobject Region, const Htuple Row1,


const Htuple Column1, const Htuple Row2, const Htuple Column2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distance between a line segment and one region.


The operator distance_sr calculates the distance between a line segment and one region. Row1, Column1,
Row2, Column2 are the start and end coordinates of a line segment. The parameters DistanceMin and
DistanceMax contain the resulting distances.


Attention
For efficiency reasons, holes in the region are ignored by distance_sr.
Parameter

. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject


Input region.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line segment.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line segment.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line segment.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line segment.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line segment and the region.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line segment and the region.
Example

Hobject Image, Region;
double  row1 = 100.0, column1 = 50.0, row2 = 400.0, column2 = 200.0;
double  distance_min, distance_max;

read_image(&Image, "fabrik");
threshold(Image, &Region, 0.0, 120.0);
distance_sr(Region, row1, column1, row2, column2,
            &distance_min, &distance_max);

Result
distance_sr returns H_MSG_TRUE.
Parallelization Information
distance_sr is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_lr, distance_pr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation

distance_ss ( double RowA1, double ColumnA1, double RowA2,


double ColumnA2, double RowB1, double ColumnB1, double RowB2,
double ColumnB2, double *DistanceMin, double *DistanceMax )

T_distance_ss ( const Htuple RowA1, const Htuple ColumnA1,


const Htuple RowA2, const Htuple ColumnA2, const Htuple RowB1,
const Htuple ColumnB1, const Htuple RowB2, const Htuple ColumnB2,
Htuple *DistanceMin, Htuple *DistanceMax )

Calculate the distances between two line segments.


The operator distance_ss calculates the minimum and maximum distance between two line seg-
ments. As input the coordinates of the start and end point of the first line segment (RowA1,ColumnA1,
RowA2,ColumnA2) and of the second line segment (RowB1,ColumnB1,RowB2,ColumnB2) are used. The
parameters DistanceMin and DistanceMax return the result of the calculation. If the line segments are
intersecting, DistanceMin returns zero.


Parameter

. RowA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong


Row coordinate of the first point of the line segment.
. ColumnA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line segment.
. RowA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line segment.
. ColumnA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line segment.
. RowB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the second line segment.
. ColumnB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the second line segment.
. RowB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the second line segment.
. ColumnB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the second line segment.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line segments.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line segments.
Example

create_tuple(&RowA1, 1);
set_i(RowA1, 8, 0);
create_tuple(&ColumnA1, 1);
set_i(ColumnA1, 7, 0);
create_tuple(&RowA2, 1);
set_i(RowA2, 15, 0);
create_tuple(&ColumnA2, 1);
set_i(ColumnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_distance_ss(RowA1,ColumnA1,RowA2,ColumnA2,RowB1,ColumnB1,RowB2,ColumnB2,
&distance_min,&distance_max);
aa_min = get_d(distance_min,0);
aa_max = get_d(distance_max,0);

Result
distance_ss returns H_MSG_TRUE.
Parallelization Information
distance_ss is reentrant and processed without parallelization.
Alternatives
distance_pp
See also
distance_pl, distance_ps
Module
Foundation


get_points_ellipse ( double Angle, double Row, double Column,


double Phi, double Radius1, double Radius2, double *RowPoint,
double *ColPoint )

T_get_points_ellipse ( const Htuple Angle, const Htuple Row,


const Htuple Column, const Htuple Phi, const Htuple Radius1,
const Htuple Radius2, Htuple *RowPoint, Htuple *ColPoint )

Calculate a point of an ellipse corresponding to a specific angle.


get_points_ellipse returns the point (RowPoint,ColPoint) on the specified ellipse corresponding to
the angle in Angle, which refers to the main axis of the ellipse. The ellipse itself is characterized by the center
(Row, Column), the orientation of the main axis Phi relative to the horizontal axis, the length of the larger
(Radius1) and the smaller half axis (Radius2). The angles are measured counter clockwise in radians.
Parameter

. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double


Angle corresponding to the resulting point [rad].
Default Value : 0
Restriction : (Angle ≥ 0) ∧ (Angle ≤ 6.283185307)
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y ; (Htuple .) double
Row coordinate of the center of the ellipse.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x ; (Htuple .) double
Column coordinate of the center of the ellipse.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad ; (Htuple .) double
Orientation of the main axis [rad].
Restriction : (Phi ≥ 0) ∧ (Phi ≤ 6.283185307)
. Radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1 ; (Htuple .) double
Length of the larger half axis.
Restriction : Radius1 > 0
. Radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2 ; (Htuple .) double
Length of the smaller half axis.
Restriction : Radius2 ≥ 0
. RowPoint (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinate of the point on the ellipse.
. ColPoint (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinates of the point on the ellipse.
Example (Syntax: HDevelop)

draw_ellipse(WindowHandle,Row,Column,Phi,Radius1,Radius2)
get_points_ellipse([0,3.14],Row,Column,Phi,Radius1,Radius2,RowPoint,ColPoint)

Result
get_points_ellipse returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
get_points_ellipse is reentrant and processed without parallelization.
Possible Predecessors
fit_ellipse_contour_xld, draw_ellipse, gen_ellipse_contour_xld
See also
gen_ellipse_contour_xld
Module
Foundation


intersection_ll ( double RowA1, double ColumnA1, double RowA2,
    double ColumnA2, double RowB1, double ColumnB1, double RowB2,
    double ColumnB2, double *Row, double *Column, Hlong *IsParallel )

T_intersection_ll ( const Htuple RowA1, const Htuple ColumnA1,
    const Htuple RowA2, const Htuple ColumnA2, const Htuple RowB1,
    const Htuple ColumnB1, const Htuple RowB2, const Htuple ColumnB2,
    Htuple *Row, Htuple *Column, Htuple *IsParallel )

Calculate the intersection point of two lines.


The operator intersection_ll calculates the intersection point of two lines. As input the two points on each
line are expected (RowA1,ColumnA1, RowA2,ColumnA2) and (RowB1,ColumnB1, RowB2,ColumnB2). The
parameters Row and Column return the result of the calculation. If the lines are parallel, the values of Row and
Column are undefined and IsParallel is 1. Otherwise, IsParallel is 0.
Attention
If the lines are parallel the values of Row and Column are undefined.
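For illustration, the underlying computation can be sketched in a few lines of plain C. The helper function below is hypothetical and only demonstrates the standard determinant-based intersection formula; it is not HALCON's internal implementation:

#include <math.h>

/* Sketch of the determinant-based intersection of two lines. The points of
   the two lines are given as (row, column) pairs; the return value plays
   the role of IsParallel. */
int intersect_lines (double ra1, double ca1, double ra2, double ca2,
                     double rb1, double cb1, double rb2, double cb2,
                     double *row, double *col)
{
  double dra = ra2 - ra1, dca = ca2 - ca1;   /* direction of the first line  */
  double drb = rb2 - rb1, dcb = cb2 - cb1;   /* direction of the second line */
  double det = dra * dcb - dca * drb;        /* zero <=> lines are parallel  */
  double t;

  if (fabs (det) < 1e-12)
    return 1;                                /* parallel: result undefined   */
  t = ((rb1 - ra1) * dcb - (cb1 - ca1) * drb) / det;
  *row = ra1 + t * dra;                      /* intersection point on line A */
  *col = ca1 + t * dca;
  return 0;
}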
Parameter
. RowA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the first line.
. ColumnA1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the first line.
. RowA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the first line.
. ColumnA2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the first line.
. RowB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the second line.
. ColumnB1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the second line.
. RowB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the second line.
. ColumnB2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the second line.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row coordinate of the intersection point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column coordinate of the intersection point.
. IsParallel (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) Hlong *
Are the two lines parallel?
Example

create_tuple(&rowA1, 1);
set_i(rowA1, 8, 0);
create_tuple(&columnA1, 1);
set_i(columnA1, 7, 0);
create_tuple(&rowA2, 1);
set_i(rowA2, 15, 0);
create_tuple(&columnA2, 1);
set_i(columnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_intersection_ll(rowA1,columnA1,rowA2,columnA2,RowB1,ColumnB1,RowB2,ColumnB2,
&row_i,&column_i,&parallel);
aa_min = get_d(row_i,0);
aa_max = get_d(column_i,0);

Result
intersection_ll returns H_MSG_TRUE.
Parallelization Information
intersection_ll is reentrant and processed without parallelization.
Module
Foundation

projection_pl ( double Row, double Column, double Row1, double Column1,
    double Row2, double Column2, double *RowProj, double *ColProj )

T_projection_pl ( const Htuple Row, const Htuple Column,
    const Htuple Row1, const Htuple Column1, const Htuple Row2,
    const Htuple Column2, Htuple *RowProj, Htuple *ColProj )

Calculate the projection of a point onto a line.


The operator projection_pl calculates the projection of a point (Row,Column) onto a line which is repre-
sented by the two points (Row1,Column1) and (Row2,Column2). The coordinates of the projected point are
returned in RowProj and ColProj.
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the point.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the point.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point on the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point on the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point on the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point on the line.
. RowProj (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Row coordinate of the projected point.
. ColProj (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Column coordinate of the projected point.
Example

projection_pl(row,column,row1,column1,row2,column2,
&row_proj,&col_proj);

Result
projection_pl returns H_MSG_TRUE.
Parallelization Information
projection_pl is reentrant and processed without parallelization.
Module
Foundation


15.10 Grid-Rectification
T_connect_grid_points ( const Hobject Image, Hobject *ConnectingLines,
const Htuple Row, const Htuple Col, const Htuple Sigma,
const Htuple MaxDist )

Establish connections between the grid points of the rectification grid.


connect_grid_points searches for connecting lines between the grid points (Row,Col) of the rectification
grid. The connecting lines are extracted from the input image Image by a combination of an edge detector, a
smoothing filter, and a line detector, each of size σ. The σ to be used is determined as follows: When a single
value is passed in Sigma, this value is used. When a tuple of three values (sigma_min, sigma_max, sigma_step) is
passed, connect_grid_points tests every σ within a range from sigma_min to sigma_max with a step width
of sigma_step and chooses the σ that causes the greatest number of connecting lines. The same happens when a
tuple of only two values sigma_min and sigma_max is passed. However, in this case a fixed step width of 0.05 is
used.
Then, the extracted connecting lines are split at the grid points and those line segments are selected that start as
well as end at a grid point. Note that edge detectors typically don’t work very accurately in the proximity of edge
junctions, and thus in general the connecting lines will not hit the grid points. Therefore, actually those connecting
lines are split and selected that start at, end at, or pass a grid point at a maximum distance of MaxDist. The
connecting lines are modified in order to start and end exactly in the corresponding grid points, and are returned in
ConnectingLines as XLD contours.
Additionally, connect_grid_points calculates for each output XLD contour its type of transition and stores
it in its global attribute ’bright_dark’. The attribute is set to 1.0, if the connecting line forms a bright-dark transition
(left to right, viewed from start point to end point), otherwise it is set to 0.0.
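The following sketch shows how a range of σ values can be passed in C; the bounds 0.7 to 1.5 with step width 0.1 are only example values, and image, row, and col are assumed to come from a preceding reduce_domain and saddle_points_sub_pix:

/* Sketch: pass (sigma_min, sigma_max, sigma_step) instead of a single sigma. */
Htuple  sigma, max_dist;
Hobject connecting_lines;

create_tuple(&sigma, 3);
set_d(sigma, 0.7, 0);            /* sigma_min  */
set_d(sigma, 1.5, 1);            /* sigma_max  */
set_d(sigma, 0.1, 2);            /* sigma_step */
create_tuple(&max_dist, 1);
set_d(max_dist, 5.5, 0);
T_connect_grid_points(image, &connecting_lines, row, col, sigma, max_dist);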
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ConnectingLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject *
Output contours.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Row coordinates of the grid points.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Column coordinates of the grid points.
Restriction : number(Col) = number(Row)
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . Hlong / double
Size of the applied Gaussians.
Default Value : 0.9
Suggested values : Sigma ∈ {0.7, 0.9, 1.1, 1.3, 1.5}
Number of elements : (1 ≤ Sigma) ∧ (Sigma ≤ 3)
Restriction : 0.7 ≤ Sigma
. MaxDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Maximum distance of the connecting lines from the grid points.
Default Value : 5.5
Suggested values : MaxDist ∈ {1.5, 3.5, 5.5, 7.5, 9.5}
Restriction : 0.0 ≤ MaxDist
Result
connect_grid_points returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
connect_grid_points is reentrant and processed without parallelization.
Possible Predecessors
saddle_points_sub_pix
Possible Successors
gen_grid_rectification_map
Module
Calibration


create_rectification_grid ( double Width, Hlong NumSquares,
    const char *GridFile )

T_create_rectification_grid ( const Htuple Width,
    const Htuple NumSquares, const Htuple GridFile )

Generate a PostScript file, which describes the rectification grid.


create_rectification_grid generates a checkered pattern with NumSquares × NumSquares alter-
nating black and white squares. This pattern is Width meters wide (and high). Around the pattern there is an
inner frame of 0.3 times the width of one square, which continues the checkered pattern. The pattern is com-
pleted by a solid white outer frame of 0.7 times the width of one square. In the center of the pattern there are
two circular marks, one black on a white square and one white on a black square. These marks are used by
gen_grid_rectification_map to rotate the detected layout of the grid points into the correct orientation.
It is assumed that the black mark is positioned to the left of the white mark, when oriented correctly. The file
GridFile contains the PostScript description of the rectification grid.
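A minimal call in C, using the default values listed below, could look as follows (the file name is only an example):

/* Sketch: write a 17x17 rectification grid, 0.17 m wide, to a PostScript
   file; the printed grid is then used by the other grid-rectification
   operators of this section. */
create_rectification_grid(0.17, 17, "rectification_grid.ps");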
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Width of the checkered pattern in meters (without the two frames).
Default Value : 0.17
Suggested values : Width ∈ {1.2, 0.8, 0.6, 0.4, 0.2, 0.1}
Recommended Increment : 0.1
Restriction : 0.0 < Width
. NumSquares (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of squares per row and column.
Default Value : 17
Suggested values : NumSquares ∈ {11, 13, 15, 17, 19, 21, 23, 25, 27}
Recommended Increment : 2
Restriction : 2 ≤ NumSquares
. GridFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name of the PostScript file.
Default Value : "rectification_grid.ps"
Result
create_rectification_grid returns H_MSG_TRUE if all parameter values are correct and the file has been
written successfully. If necessary, an exception handling is raised.
Parallelization Information
create_rectification_grid is processed completely exclusively without parallelization.
See also
find_rectification_grid, saddle_points_sub_pix, connect_grid_points,
gen_grid_rectification_map
Module
Foundation

find_rectification_grid ( const Hobject Image, Hobject *GridRegion,
    double MinContrast, double Radius )

T_find_rectification_grid ( const Hobject Image, Hobject *GridRegion,
    const Htuple MinContrast, const Htuple Radius )

Segment the rectification grid region in the image.


find_rectification_grid searches in the image Image for image parts that contain the rectification
grid and returns them in the region GridRegion. To do so, essentially image areas with a contrast of at least
MinContrast are extracted and the holes in these areas are filled up. Then, an opening with the radius Radius
is applied to these areas to eliminate smaller areas of high contrast.
During grid-rectification, a careful reduction of the input region to those image parts that actually contain
the rectification grid is useful for two purposes: First, the computing time can be reduced and secondly,
saddle_points_sub_pix and connect_grid_points can be prevented from detecting false grid points
and connecting lines.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. GridRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Output region containing the rectification grid.
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum contrast.
Default Value : 8.0
Suggested values : MinContrast ∈ {2.0, 4.0, 8.0, 16.0, 32.0}
Restriction : MinContrast ≥ 0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 7.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Restriction : Radius ≥ 0.5
Example (Syntax: HDevelop)

find_rectification_grid (Image, GridRegion, 8, 10)
dilation_circle (GridRegion, GridRegionDilated, 5.5)
reduce_domain (Image, GridRegionDilated, ImageReduced)
saddle_points_sub_pix (ImageReduced, ’facet’, 1.5, 5, Row, Col)
connect_grid_points (ImageReduced, ConnectingLines, Row, Col, 1.1, 5.5)
gen_grid_rectification_map (ImageReduced, ConnectingLines, Map, Meshes, 20,
’auto’, Row, Col)
map_image (Image, Map, ImageMapped)

Result
find_rectification_grid returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
find_rectification_grid is reentrant and processed without parallelization.
Possible Successors
dilation_circle, reduce_domain
Module
Calibration

T_gen_arbitrary_distortion_map ( Hobject *Map,
    const Htuple GridSpacing, const Htuple Row, const Htuple Col,
    const Htuple GridWidth, const Htuple ImageWidth,
    const Htuple ImageHeight )

Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified
image.
gen_arbitrary_distortion_map computes the mapping Map between an arbitrarily distorted image and
the rectified image. Assuming that the points (Row,Col) form a regular grid in the rectified image, each grid cell,
which is defined by the coordinates (Row,Col) of its four corners in the distorted image, is projected onto a square
of GridSpacing×GridSpacing pixels. The coordinates of the grid points must be passed line by line in Row
and Col. GridWidth is the width of the point grid in grid points. To compute the mapping Map, additionally
the width ImageWidth and height ImageHeight of the images to be rectified must be passed.
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:


2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
In contrast to gen_grid_rectification_map, gen_arbitrary_distortion_map is used when
the coordinates (Row,Col) of the grid points in the distorted image are already known, or when the relevant part
of the image consists of regular grid structures from which the coordinates can be derived.
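The following C sketch illustrates the expected line-by-line ordering of the grid points; all coordinate values, the 3 x 3 grid size, the image size, and the variable image (the distorted input image) are hypothetical:

/* Sketch: map a 3x3 grid of known (distorted) grid point coordinates, passed
   line by line, onto squares of 20x20 pixels and rectify the image. */
Hobject map, image_mapped;
Htuple  grid_spacing, row, col, grid_width, image_width, image_height;
double  rows[9] = {  98.0, 100.5, 103.0, 148.0, 150.5, 153.0, 198.0, 200.5, 203.0 };
double  cols[9] = {  99.0, 149.0, 199.0, 100.0, 150.0, 200.0, 101.0, 151.0, 201.0 };
int     i;

create_tuple(&row, 9);
create_tuple(&col, 9);
for (i = 0; i < 9; i++)
{
  set_d(row, rows[i], i);
  set_d(col, cols[i], i);
}
create_tuple(&grid_spacing, 1);   set_i(grid_spacing, 20, 0);
create_tuple(&grid_width, 1);     set_i(grid_width, 3, 0);
create_tuple(&image_width, 1);    set_i(image_width, 640, 0);
create_tuple(&image_height, 1);   set_i(image_height, 480, 0);
T_gen_arbitrary_distortion_map(&map, grid_spacing, row, col, grid_width,
                               image_width, image_height);
map_image(image, map, &image_mapped);   /* image: the distorted input image */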
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject * : int4 / uint2
Image containing the mapping data.
. GridSpacing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Distance of the grid points in the rectified image.
Restriction : GridSpacing > 0
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Row coordinates of the grid points in the distorted image.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Column coordinates of the grid points in the distorted image.
Restriction : number(Row) = number(Col)
. GridWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the point grid (number of grid points).
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the images to be rectified.
Restriction : ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the images to be rectified.
Restriction : ImageHeight > 0
Result
gen_arbitrary_distortion_map returns H_MSG_TRUE if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
gen_arbitrary_distortion_map is reentrant and processed without parallelization.
Possible Successors
map_image
See also
create_rectification_grid, find_rectification_grid, connect_grid_points,
gen_grid_rectification_map
Module
Calibration

T_gen_grid_rectification_map ( const Hobject Image,
    const Hobject ConnectingLines, Hobject *Map, Hobject *Meshes,
    const Htuple GridSpacing, const Htuple Rotation, const Htuple Row,
    const Htuple Col )

Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
gen_grid_rectification_map calculates the mapping between the grid points (Row,Col), which have
been actually detected in the distorted image Image (typically using saddle_points_sub_pix), and the
corresponding grid points of the ideal regular point grid. First, all paths that lead from their initial point via ex-
actly four different connecting lines back to the initial point are assembled from the grid points (Row,Col) and
the connecting lines ConnectingLines (detected by connect_grid_points). In case that the input of
grid points (Row,Col) and of connecting lines ConnectingLines was meaningful, one such ’mesh’ corre-
sponds to exactly one grid cell in the rectification grid. Afterwards, the meshes are combined to the point grid.
According to the value of Rotation, the point grid is rotated by 0, 90, 180 or 270 degrees. Note that the point
grid does not necessarily have the correct orientation. When passing ’auto’ in Rotation, the point grid is ro-
tated such that the black circular mark in the rectification grid is positioned to the left of the white one (see also
create_rectification_grid). Finally, the mapping Map between the distorted image and the rectified
image is calculated by interpolation between the grid points. Each grid cell, for which the coordinates (Row,Col)
of all four corner points are known, is projected onto a square of GridSpacing × GridSpacing pixels.
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
gen_grid_rectification_map additionally returns the calculated meshes as XLD contours in Meshes.
In contrast to gen_arbitrary_distortion_map, gen_grid_rectification_map and its predecessors
are used when the coordinates (Row,Col) of the grid points in the distorted image are neither known nor
can be derived from the image contents.
Attention
Each input XLD contour ConnectingLines must own the global attribute ’bright_dark’, as it is described with
connect_grid_points!
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ConnectingLines (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject
Input contours.
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject * : int4 / uint2
Image containing the mapping data.
. Meshes (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject *
Output contours.
. GridSpacing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Distance of the grid points in the rectified image.
Restriction : GridSpacing > 0
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char * / Hlong
Rotation to be applied to the point grid.
Default Value : "auto"
List of values : Rotation ∈ {"auto", 0, 90, 180, 270}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Row coordinates of the grid points.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Column coordinates of the grid points.
Restriction : number(Col) = number(Row)
Result
gen_grid_rectification_map returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
gen_grid_rectification_map is reentrant and processed without parallelization.
Possible Predecessors
connect_grid_points
Possible Successors
map_image
See also
gen_arbitrary_distortion_map
Module
Calibration


15.11 Hough

hough_circle_trans ( const Hobject Region, Hobject *HoughImage,
    Hlong Radius )

T_hough_circle_trans ( const Hobject Region, Hobject *HoughImage,
    const Htuple Radius )

Return the Hough-Transform for circles with a given radius.


The operator hough_circle_trans calculates the Hough transform for circles with a certain Radius in
the regions passed by Region. Hereby the centres of all possible circles in the parameter space (the Hough
or accumulator space respectively) will be accumulated for each point in the image space. Circle hypotheses
supported by many points in the input region thereby generate a maximum in the area showing the circle’s centre
in the output image (HoughImage). The circles’ centres in the image space can be deduced from the coordinates
of these maxima by subtracting the Radius. If more than one radius is transmitted, all Hough images will be
shifted according to the maximal radius.
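A typical use in C, sketched with hypothetical values (edges is assumed to be a binary edge region, e.g. obtained with threshold and skeleton):

/* Sketch: accumulate circle hypotheses for radius 12 and extract the maxima
   of the Hough image. The centre coordinates in the original image are
   obtained by subtracting Radius from the coordinates of the maxima. */
Hobject hough, maxima;

hough_circle_trans(edges, &hough, 12);
local_max(hough, &maxima);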
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which the circles are to be detected.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : int2
Hough transform for circles with a given radius.
Number of elements : HoughImage = Radius
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Radius of the circle to be searched in the image.
Default Value : 12
Typical range of values : 3 ≤ Radius (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : (1 ≤ Radius) ≤ 500
Result
The operator hough_circle_trans returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_circle_trans is reentrant and processed without parallelization.
Module
Foundation

hough_circles ( const Hobject RegionIn, Hobject *RegionOut,
    Hlong Radius, Hlong Percent, Hlong Mode )

T_hough_circles ( const Hobject RegionIn, Hobject *RegionOut,
    const Htuple Radius, const Htuple Percent, const Htuple Mode )

Centres of circles for a specific radius.


hough_circles detects the centres of circles in regions with the help of the Hough transform for circles
with a specific radius.
Parameter
. RegionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which the circles are to be detected.
. RegionOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Centres of those circles which are included in the edge image by Percent percent.
Number of elements : RegionOut = ((Radius · Percent) · Mode)
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Radius of the circle to be searched in the image.
Default Value : 12
Typical range of values : 2 ≤ Radius ≤ 500 (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : (1 ≤ Radius) ≤ 500
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Indicates the percentage (approximately) of the (ideal) circle which must be present in the edge image
RegionIn.
Default Value : 60
Typical range of values : 10 ≤ Percent ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 5
Number of elements : (1 ≤ Percent) ≤ 100
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
The mode defines the position of the circle in question:
0 - the radius is equivalent to the outer border of the set pixels.
1 - the radius is equivalent to the centres of the circle lines’ pixels.
2 - both 0 and 1 (a little more fuzzy, but more reliable in contrast to circles set slightly differently; necessitates
50 % more processing capacity compared to 0 or 1 alone).
List of values : Mode ∈ {0, 1, 2}
Number of elements : (1 ≤ Mode) ≤ 3
Result
The operator hough_circles returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_circles is reentrant and processed without parallelization.
Module
Foundation

hough_line_trans ( const Hobject Region, Hobject *HoughImage,
    Hlong AngleResolution )

T_hough_line_trans ( const Hobject Region, Hobject *HoughImage,
    const Htuple AngleResolution )

Produce the Hough transform for lines within regions.


The operator hough_line_trans calculates the Hough transform for lines in those regions transmitted by
Region. Thereby the angles and the lengths of the lines’ normal vectors are registered in the parameter space
(the Hough- or accumulator space respectively). This means that the parameterization is executed according to the
HNF.
The result is registered in a newly generated Int2-Image (HoughImage), whereby the x-axis is equivalent to the
angle between the normal vector and the x-axis (in the original image), and the y-axis is equivalent to the distance
of the line from the origin.
The angle ranges from -90 to 180 degrees and will be registered with a resolution of 1/AngleResolution,
which means that one pixel in x-direction is equivalent to 1/AngleResolution degrees and that the HoughImage has
a width of 270 ∗ AngleResolution + 1 pixels. The height of the HoughImage corresponds to the distance
between the lower right corner of the surrounding rectangle of the input region and the origin.
The maxima in the result image are equivalent to the parameter values of the lines in the original image.
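A minimal sketch in C (edge_region is assumed to be a binary edge region, e.g. from threshold and skeleton); a maximum at column x and row y of the Hough image then corresponds to a line whose normal vector has an angle of x/AngleResolution - 90 degrees and whose distance from the origin is y:

/* Sketch: compute the Hough image with AngleResolution = 4 and extract its
   local maxima, which encode the line parameters as described above. */
Hobject hough, maxima;

hough_line_trans(edge_region, &hough, 4);
local_max(hough, &maxima);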

Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which lines are to be detected.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : int2
Hough transform for lines.
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator hough_line_trans returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_line_trans is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton
Possible Successors
threshold, local_max
See also
hough_circle_trans, gen_region_hline
Module
Foundation

hough_line_trans_dir ( const Hobject ImageDir, Hobject *HoughImage,
    Hlong DirectionUncertainty, Hlong AngleResolution )

T_hough_line_trans_dir ( const Hobject ImageDir, Hobject *HoughImage,
    const Htuple DirectionUncertainty, const Htuple AngleResolution )

Compute the Hough transform for lines using local gradient direction.
The operator hough_line_trans_dir calculates the Hough transform for lines in those regions passed in
the domain of ImageDir. To do so, the angles and the lengths of the lines’ normal vectors are registered in the
parameter space (the so-called Hough or accumulator space).
In contrast to hough_line_trans, additionally the edge direction in ImageDir (e.g., returned by
sobel_dir or edges_image) is taken into account. This results in a more efficient computation and in a
reduction of the noise in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizon-
tal line (i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and
+10 degrees. The higher DirectionUncertainty is chosen, the higher the computation time will
be. For DirectionUncertainty = 180 hough_line_trans_dir shows the same behavior as
hough_line_trans, i.e., the edge direction is ignored. DirectionUncertainty should be chosen at
least as high as the step width of the edge direction stored in ImageDir. The minimum step width is 2 degrees
(defined by the image type ’direction’).
The result is stored in a newly generated UINT2-Image (HoughImage), where the x-axis (i.e., columns) repre-
sents the angle between the normal vector and the x-axis of the original image, and the y-axis (i.e., rows) represents
the distance of the line from the origin.
The angle ranges from -90 to 180 degrees and will be stored with a resolution of 1/AngleResolution, which
means that one pixel in x-direction is equivalent to 1/AngleResolution degrees and that the HoughImage
has a width of 270∗AngleResolution+1 pixels. The height of the HoughImage corresponds to the distance
between the lower right corner of the surrounding rectangle of the input region and the origin.
The local maxima in the result image are equivalent to the parameter values of the lines in the original image.
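A typical pipeline in C, following the predecessors listed below; the filter type, the amplitude threshold of 30, and the DirectionUncertainty of 4 degrees are only example values and should be adapted to the application:

/* Sketch: compute an edge direction image, restrict it to the significant
   edge pixels, and apply the direction-based Hough transform. */
Hobject amp, dir, edge_region, dir_reduced, hough;

sobel_dir(image, &amp, &dir, "sum_abs", 3);
threshold(amp, &edge_region, 30.0, 255.0);   /* hypothetical amplitude threshold */
reduce_domain(dir, edge_region, &dir_reduced);
hough_line_trans_dir(dir_reduced, &hough, 4, 4);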
Parameter
. ImageDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : direction
Image containing the edge direction. The edges must be described by the image domain.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : uint2
Hough transform.
. DirectionUncertainty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Hlong
Uncertainty of the edge direction (in degrees).
Default Value : 2
Typical range of values : 2 ≤ DirectionUncertainty ≤ 180
Minimum Increment : 2
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Resolution in the angle area (in 1/degrees).
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
Result
The operator hough_line_trans_dir returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input is set via the operator set_system(’no_object_result’,<Result>).
If necessary an exception handling is raised.
Parallelization Information
hough_line_trans_dir is reentrant and processed without parallelization.
Possible Predecessors
edges_image, sobel_dir, threshold, hysteresis_threshold,
nonmax_suppression_dir, reduce_domain
Possible Successors
binomial_filter, gauss_image, threshold, local_max, plateaus_center
See also
hough_line_trans, hough_lines, hough_lines_dir
Module
Foundation

T_hough_lines ( const Hobject RegionIn, const Htuple AngleResolution,
    const Htuple Threshold, const Htuple AngleGap, const Htuple DistGap,
    Htuple *Angle, Htuple *Dist )

Detect lines in edge images with the help of the Hough transform and return them in HNF.
The operator hough_lines allows the selection of linelike structures in a region, whereby it is not necessary
that the individual points of a line are connected. This process is based on the Hough transform. The lines are
returned in HNF, that is by the direction and length of their normal vector.
The parameter AngleResolution defines the degree of exactness concerning the determination of the angles.
It amounts to 1/AngleResolution degree. The parameter Threshold determines by how many points
of the original region a line’s hypothesis has to be supported at least in order to be taken over into the output.
The parameters AngleGap and DistGap define a neighborhood of the points in the Hough image in order to
determine the local maxima. The lines are returned in HNF.
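The following C sketch extracts the lines of a binary edge region edge_region and converts the returned Hessian normal form back into regions; the parameter values are the defaults listed below, and the tuple version T_gen_region_hline is assumed for the conversion:

/* Sketch: detect lines with the default parameters and turn the HNF result
   into line regions for visualization. */
Htuple  angle_res, thresh, angle_gap, dist_gap, angle, dist;
Hobject lines;

create_tuple(&angle_res, 1);   set_i(angle_res, 4, 0);
create_tuple(&thresh, 1);      set_i(thresh, 100, 0);
create_tuple(&angle_gap, 1);   set_i(angle_gap, 5, 0);
create_tuple(&dist_gap, 1);    set_i(dist_gap, 5, 0);
T_hough_lines(edge_region, angle_res, thresh, angle_gap, dist_gap,
              &angle, &dist);
T_gen_region_hline(&lines, angle, dist);   /* tuple version assumed */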
Parameter
. RegionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Binary edge image in which the lines are to be detected.
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Adjusting the resolution in the angle area.
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Threshold value in the Hough image.
Default Value : 100
Typical range of values : 2 ≤ Threshold
. AngleGap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimal distance of two maxima in the Hough image (direction: angle).
Default Value : 5
Typical range of values : 0 ≤ AngleGap
. DistGap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimal distance of two maxima in the Hough image (direction: distance).
Default Value : 5
Typical range of values : 0 ≤ DistGap
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.angle.rad-array ; Htuple . double *
Angles (in radians) of the detected lines’ normal vectors.
Typical range of values : -1.5707963 ≤ Angle ≤ 3.1415927
. Dist (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.distance-array ; Htuple . double *
Distance of the detected lines from the origin.
Typical range of values : 0 ≤ Dist
Number of elements : Dist = Angle
Result
The operator hough_lines returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_lines is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton
Possible Successors
select_matching_lines
See also
hough_line_trans, gen_region_hline, hough_circles
Module
Foundation

T_hough_lines_dir ( const Hobject ImageDir, Hobject *HoughImage,
    Hobject *Lines, const Htuple DirectionUncertainty,
    const Htuple AngleResolution, const Htuple Smoothing,
    const Htuple FilterSize, const Htuple Threshold,
    const Htuple AngleGap, const Htuple DistGap, const Htuple GenLines,
    Htuple *Angle, Htuple *Dist )

Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in
normal form.
The operator hough_lines_dir selects line-like structures in a region based on the Hough transform. The
individual points of a line can be unconnected. The region is given by the domain of ImageDir. The lines are
returned in Hessian normal form (HNF), that is by the direction and length of their normal vector.
In contrast to hough_lines, additionally the edge direction in ImageDir (e.g., returned by sobel_dir or
edges_image) is taken into account. This results in a more efficient computation and in a reduction of the noise
in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizontal line
(i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 de-
grees. The higher DirectionUncertainty is chosen, the higher the computation time will be. For
DirectionUncertainty = 180 hough_lines_dir shows the same behavior as hough_lines, i.e.,
the edge direction is ignored. DirectionUncertainty should be chosen at least as high as the step width
of the edge direction stored in ImageDir. The minimum step width is 2 degrees (defined by the image type
’direction’).
The parameter AngleResolution defines how accurately the angles are determined. The accuracy amounts to
1/AngleResolution degrees. A subsequent smoothing of the Hough space results in an increased stability.
The smoothing filter can be selected by Smoothing, the degree of smoothing by the parameter FilterSize
(see mean_image or gauss_image for details). The parameter Threshold determines by how many
points of the original region a line’s hypothesis must at least be supported in order to be selected into the output.
The parameters AngleGap and DistGap define a neighborhood of the points in the Hough image in order to
determine the local maxima: AngleGap describes the minimum distance of two maxima in the Hough image
in angle direction and DistGap in distance direction, respectively. Thus, maxima exceeding Threshold but
lying close to an even higher maximum are eliminated. This can particularly be helpful when searching for short
and long lines simultaneously. Besides the unsmoothed Hough image HoughImage, the lines are returned in
HNF (Angle, Dist). If the parameter GenLines is set to ’true’, additionally those regions in ImageDir are
returned that contributed to the local maxima in Hough space. They are stored in the parameter Lines.
Parameter
. ImageDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : direction
Image containing the edge direction. The edges are described by the image domain.
. HoughImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : uint2
Hough transform.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions of the input image that contributed to the local maxima.
. DirectionUncertainty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; Htuple . Hlong
Uncertainty of edge direction (in degrees).
Default Value : 2
Typical range of values : 2 ≤ DirectionUncertainty ≤ 180
Minimum Increment : 2
. AngleResolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Resolution in the angle area (in 1/degrees).
Default Value : 4
List of values : AngleResolution ∈ {1, 2, 4, 8}
. Smoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Smoothing filter for hough image.
Default Value : "mean"
List of values : Smoothing ∈ {"none", "mean", "gauss"}
. FilterSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Required smoothing filter size.
Default Value : 5
List of values : FilterSize ∈ {3, 5, 7, 9, 11}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Threshold value in the Hough image.
Default Value : 100
Typical range of values : 1 ≤ Threshold
. AngleGap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum distance of two maxima in the Hough image (direction: angle).
Default Value : 5
Typical range of values : 0 ≤ AngleGap
. DistGap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Minimum distance of two maxima in the Hough image (direction: distance).
Default Value : 5
Typical range of values : 0 ≤ DistGap
. GenLines (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Create line regions if ’true’.
Default Value : "true"
List of values : GenLines ∈ {"true", "false"}
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.angle.rad-array ; Htuple . double *
Angles (in radians) of the detected lines’ normal vectors.
Typical range of values : -1.5707963 ≤ Angle ≤ 3.1415927
. Dist (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.distance-array ; Htuple . double *
Distance of the detected lines from the origin.
Typical range of values : 0 ≤ Dist
Number of elements : Dist = Angle
Result
The operator hough_lines_dir returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hough_lines_dir is reentrant and processed without parallelization.
Possible Predecessors
edges_image, sobel_dir, threshold, nonmax_suppression_dir, reduce_domain,
skeleton
Possible Successors
gen_region_hline, select_matching_lines
See also
hough_line_trans_dir, hough_line_trans, gen_region_hline, hough_circles
Module
Foundation

select_matching_lines ( const Hobject RegionIn, Hobject *RegionLines,
    double AngleIn, double DistIn, Hlong LineWidth, Hlong Thresh,
    double *AngleOut, double *DistOut )

T_select_matching_lines ( const Hobject RegionIn,
    Hobject *RegionLines, const Htuple AngleIn, const Htuple DistIn,
    const Htuple LineWidth, const Htuple Thresh, Htuple *AngleOut,
    Htuple *DistOut )

Select those lines from a set of lines (in HNF) which fit best into a region.
Lines which fit best into a region can be selected from a set of lines which are available in HNF with the help of the
operator select_matching_lines; the region itself is also transmitted as a parameter (RegionIn). The
width of the lines can be indicated by the parameter LineWidth. The selected lines will be returned in HNF and
as regions (RegionLines).
The lines are selected iteratively in a loop: At first, the line showing the greatest overlap with the input region
is selected from the set of input lines. This line will then be taken over into the output set whereby all points
belonging to that line will not be considered in the further steps determining overlaps. The loop will be left when
the maximum overlap value of the region and the lines falls below a certain threshold value (Thresh). The
selected lines will be returned as regions as well as in HNF.
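A C sketch of the typical combination with T_hough_lines; angle and dist are assumed to be the HNF tuples returned by hough_lines, edge_region is the input region, and LineWidth and Thresh use the default values listed below:

/* Sketch: keep only those candidate lines that actually fit the region. */
Htuple  line_width, thresh, angle_out, dist_out;
Hobject region_lines;

create_tuple(&line_width, 1);   set_i(line_width, 7, 0);
create_tuple(&thresh, 1);       set_i(thresh, 100, 0);
T_select_matching_lines(edge_region, &region_lines, angle, dist,
                        line_width, thresh, &angle_out, &dist_out);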
Parameter
. RegionIn (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region in which the lines are to be matched.
. RegionLines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region array containing the matched lines.
. AngleIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .hesseline.angle.rad(-array) ; (Htuple .) double
Angles (in radians) of the normal vectors of the input lines.
Typical range of values : -1.5707963 ≤ AngleIn ≤ 3.1415927
. DistIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.distance(-array) ; (Htuple .) double
Distances of the input lines from the origin.
Number of elements : DistIn = AngleIn
. LineWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Width of the lines.
Default Value : 7
Typical range of values : 1 ≤ LineWidth
. Thresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Threshold value for the number of line points in the region.
Default Value : 100
Typical range of values : 1 ≤ Thresh
. AngleOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.angle.rad(-array) ; (Htuple .) double *
Angles (in radians) of the normal vectors of the selected lines.
Typical range of values : -1.5707963 ≤ AngleOut ≤ 3.1415927
Number of elements : AngleOut ≤ AngleIn
. DistOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hesseline.distance(-array) ; (Htuple .) double *
Distances of the selected lines from the origin.
Typical range of values : 0 ≤ DistOut
Number of elements : DistOut = AngleOut
Result
The operator select_matching_lines returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system
(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
select_matching_lines is reentrant and processed without parallelization.
Possible Predecessors
hough_lines
Module
Foundation

15.12 Image-Comparison

clear_all_variation_models ( )
T_clear_all_variation_models ( )

Free the memory of all variation models.


clear_all_variation_models frees the memory of all variation models that were created by calling
create_variation_model. After calling clear_all_variation_models, no model can be used
any longer.
Attention
clear_all_variation_models exists solely for the purpose of implementing the “reset program” func-
tionality in HDevelop. clear_all_variation_models must not be used in any application.
Result
clear_all_variation_models always returns H_MSG_TRUE.
Parallelization Information
clear_all_variation_models is processed completely exclusively without parallelization.
Possible Predecessors
create_variation_model
Alternatives
clear_variation_model
Module
Matching


clear_train_data_variation_model ( Hlong ModelID )


T_clear_train_data_variation_model ( const Htuple ModelID )

Free the memory of the training data of a variation model.


clear_train_data_variation_model frees the memory of the training data of a variation model that was created by
create_variation_model. clear_train_data_variation_model can be used to reduce the
amount of memory required for the variation model (in main memory as well as when writing the model to
file with write_variation_model). clear_train_data_variation_model can only be called
if the model has been prepared for comparison with an image with prepare_variation_model. Af-
ter the call to clear_train_data_variation_model the variation model can only be used for image
comparision with compare_variation_model or compare_ext_variation_model. The model
cannot be trained any further. Furthermore, the images used for the image comparison can no longer be read
with get_variation_model. If they are required, get_variation_model must be called before
clear_train_data_variation_model is called.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Result
clear_train_data_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
clear_train_data_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
prepare_variation_model
Possible Successors
compare_variation_model, compare_ext_variation_model, write_variation_model
Module
Matching

clear_variation_model ( Hlong ModelID )


T_clear_variation_model ( const Htuple ModelID )

Free the memory of a variation model.


clear_variation_model frees the memory of a variation model that was created by
create_variation_model. After calling clear_variation_model, the model can no longer
be used. The handle ModelID becomes invalid.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Result
clear_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
clear_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
create_variation_model
Alternatives
clear_all_variation_models
Module
Matching


compare_ext_variation_model ( const Hobject Image, Hobject *Region,
    Hlong ModelID, const char *Mode )

T_compare_ext_variation_model ( const Hobject Image,
    Hobject *Region, const Htuple ModelID, const Htuple Mode )

Compare an image to a variation model.


compare_ext_variation_model compares the input image Image to the variation model given by
ModelID. compare_ext_variation_model is an extension of compare_variation_model
that provides more modes for the image comparison. Before compare_ext_variation_model
can be called, the two internal threshold images of the variation model must have been created with
prepare_variation_model or prepare_direct_variation_model. Let c(x, y) denote the
input image Image and t_u, t_l denote the two threshold images (see prepare_variation_model or
prepare_direct_variation_model). Then, for Mode = ’absolute’ the output region Region contains
all points that differ substantially from the model, i.e., the points that fulfill the following condition:

c(x, y) > t_u(x, y) ∨ c(x, y) < t_l(x, y) .

This mode is identical to compare_variation_model. For Mode = ’light’, Region contains all points that
are too bright:
c(x, y) > t_u(x, y) .
For Mode = ’dark’, Region contains all points that are too dark:

c(x, y) < t_l(x, y) .

Finally, for Mode = ’light_dark’ two regions are returned in Region. The first region contains the result of Mode
= ’light’, while the second region contains the result of Mode = ’dark’. The respective regions can be selected
with select_obj.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Image of the object to be compared with the variation model.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region containing the points that differ substantially from the model.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Method used for comparing the variation model.
Default Value : "absolute"
Suggested values : Mode ∈ {"absolute", "light", "dark", "light_dark"}
Example (Syntax: HDevelop)

open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1,


’default’, -1, ’default’, ’model.seq’, ’default’,
-1, -1, FGHandle)
read_region (Region, ’model.reg’)
area_center (Region, Area, RowRef, ColumnRef)
read_shape_model (’model.shm’, TemplateID)
read_variation_model (’model.var’, ModelID)
for K := 1 to 10000 by 1
grab_image (Image, FGHandle)
find_shape_model (Image, TemplateID, 0, rad(360), 0.5, 1, 0.5,
’true’, 4, 0.9, Row, Column, Angle, Score)
disp_obj (Image, WindowHandle)
if (|Score| = 1)
vector_angle_to_rigid (Row, Column, Angle, RowRef,
ColumnRef, 0, HomMat2D)
affine_trans_image (Image, ImageTrans, HomMat2D, ’constant’,
’false’)
compare_ext_variation_model (ImageTrans, RegionDiff, ModelID,
’light’)
disp_obj (RegionDiff, WindowHandle)
endif
endfor
clear_shape_model (TemplateID)
clear_variation_model (ModelID)
close_framegrabber (FGHandle)

Result
compare_ext_variation_model returns H_MSG_TRUE if all parameters are correct and
if the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_ext_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
select_obj, connection
Alternatives
compare_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching

compare_variation_model ( const Hobject Image, Hobject *Region,


Hlong ModelID )

T_compare_variation_model ( const Hobject Image, Hobject *Region,


const Htuple ModelID )

Compare an image to a variation model.


compare_variation_model compares the input image Image to the variation model given by
ModelID. Before compare_variation_model can be called, the two internal threshold im-
ages of the variation model must have been created with prepare_variation_model or
prepare_direct_variation_model. Let c(x, y) denote the input image Image and t_u, t_l denote the two
threshold images (see prepare_variation_model or prepare_direct_variation_model). Then
the output region Region contains all points that differ substantially from the model, i.e., the points that fulfill the
following condition:

c(x, y) > t_u(x, y)  ∨  c(x, y) < t_l(x, y) .
If only too bright or too dark errors should be segmented the operator compare_ext_variation_model
can be used.
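As a hedged C sketch of typical post-processing (the file names and the area limits are merely illustrative
assumptions), the error region is often split into connected components and filtered by area to suppress small
spurious differences:

Hobject  Image, RegionDiff, ConnectedRegions, Errors;
Hlong    ModelID;

read_image (&Image, "test_image");              /* assumed image name     */
read_variation_model ("model.var", &ModelID);   /* model must be prepared */
compare_variation_model (Image, &RegionDiff, ModelID);
connection (RegionDiff, &ConnectedRegions);
/* keep only error regions with at least 20 pixels (illustrative limits) */
select_shape (ConnectedRegions, &Errors, "area", "and", 20.0, 100000.0);
clear_variation_model (ModelID);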
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2


Image of the object to be trained.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region containing the points that differ substantially from the model.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Example (Syntax: HDevelop)


open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1,


’default’, -1, ’default’, ’model.seq’, ’default’,
-1, -1, FGHandle)
read_region (Region, ’model.reg’)
area_center (Region, Area, RowRef, ColumnRef)
read_shape_model (’model.shm’, TemplateID)
read_variation_model (’model.var’, ModelID)
for K := 1 to 10000 by 1
grab_image (Image, FGHandle)
find_shape_model (Image, TemplateID, 0, rad(360), 0.5, 1, 0.5,
’true’, 4, 0.9, Row, Column, Angle, Score)
disp_obj (Image, WindowHandle)
if (|Score| = 1)
vector_angle_to_rigid (Row, Column, Angle, RowRef,
ColumnRef, 0, HomMat2D)
affine_trans_image (Image, ImageTrans, HomMat2D, ’constant’,
’false’)
compare_variation_model (ImageTrans, RegionDiff, ModelID)
disp_obj (RegionDiff, WindowHandle)
endif
endfor
clear_shape_model (TemplateID)
clear_variation_model (ModelID)
close_framegrabber (FGHandle)

Result
compare_variation_model returns H_MSG_TRUE if all parameters are correct and if
the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
connection
Alternatives
compare_ext_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching

create_variation_model ( Hlong Width, Hlong Height, const char *Type,


const char *Mode, Hlong *ModelID )

T_create_variation_model ( const Htuple Width, const Htuple Height,


const Htuple Type, const Htuple Mode, Htuple *ModelID )

Create a variation model for image comparison.


create_variation_model creates a variation model that can be used for image comparison. The handle for
the variation model is returned in ModelID.
Typically, the variation model is used to discriminate correctly manufactured objects (“good objects”) from incor-
rectly manufactured objects (“bad objects”). It is assumed that the discrimination can be done solely based on the
gray values of the object.


The variation model consists of an ideal image of the object to which the images of the objects to be tested are
compared later on with compare_variation_model or compare_ext_variation_model, and an
image that represents the amount of gray value variation at every point of the object. The size of the images with
which the object model is trained and with which the model is compared later on is passed in Width and Height,
respectively. The image type of the images used for training and comparison is passed in Type.
The variation model is trained using multiple images of good objects. Therefore, it is essential that the training
images show the objects in the same position and rotation. If this cannot be guaranteed by external means, the pose
of the object can, for example, be determined by using matching (see find_shape_model). The image can
then be transformed to a reference pose with affine_trans_image.
The parameter Mode is used to determine how the image of the ideal object and the corresponding variation
image are computed. For Mode=’standard’, the ideal image of the object is computed as the mean of all training
images at the respective image positions. The corresponding variation image is computed as the standard deviation
of the training images at the respective image positions. This mode has the advantage that the variation model
can be trained iteratively, i.e., as soon as an image of a good object becomes available, it can be trained with
train_variation_model. The disadvantage of this mode is that great care must be taken to ensure that only
images of good objects are trained, because the mean and standard deviation are not robust against outliers, i.e., if
an image of a bad object is trained inadvertently, the accuracy of the ideal object image and that of the variation
image might be degraded.
If it cannot be avoided that the variation model is trained with some images of objects that can contain errors, Mode
can be set to ’robust’. In this mode, the image of the ideal object is computed as the median of all training images
at the respective image positions. The corresponding variation image is computed as a suitably scaled median
absolute deviation of the training images and the median image at the respective image positions. This mode has
the advantage that it is robust against outliers. It has the disadvantage that it cannot be trained iteratively, i.e., all
training images must be accumulated using concat_obj and be trained with train_variation_model
in a single call.
In some cases, it is impossible to acquire multiple training images. In this case, a useful variation image cannot
be trained from the single training image. To solve this problem, variations of the training image can be created
synthetically, e.g., by shifting the training image by ±1 pixel in the row and column directions or by using gray
value morphology (e.g., gray_erosion_shape and gray_dilation_shape), and then training the syn-
thetically modified images. A different possibility to create the variation model from a single image is to create
the model with Mode=’direct’. In this case, the variation model can only be trained by specifying the ideal image
and the variation image directly with prepare_direct_variation_model. Since the variation typically
is large at the edges of the object, edge operators like sobel_amp, edges_image, or gray_range_rect
should be used to create the variation image.
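The synthetic-variation approach for a single training image might look as follows in C (a sketch only; the image
size, the file name, the 3×3 masks, and the threshold values are assumptions, and the gray-value morphology
signatures are assumed to follow the standard HALCON/C interface):

/* Train a ’standard’ variation model from a single image by creating */
/* synthetic variations with gray-value morphology.                   */
Hobject  Image, ImageMin, ImageMax, Tmp, TrainingImages;
Hlong    ModelID;

read_image (&Image, "model");                    /* assumed 640x480 byte image */
create_variation_model (640, 480, "byte", "standard", &ModelID);
gray_erosion_shape  (Image, &ImageMin, 3.0, 3.0, "rectangle");
gray_dilation_shape (Image, &ImageMax, 3.0, 3.0, "rectangle");
concat_obj (Image, ImageMin, &Tmp);
concat_obj (Tmp, ImageMax, &TrainingImages);
train_variation_model (TrainingImages, ModelID);
prepare_variation_model (ModelID, 10.0, 2.0);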
Parameter

. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong


Width of the images to be compared.
Default Value : 640
Suggested values : Width ∈ {160, 192, 320, 384, 640, 768}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the images to be compared.
Default Value : 480
Suggested values : Height ∈ {120, 144, 240, 288, 480, 576}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of the images to be compared.
Default Value : "byte"
Suggested values : Type ∈ {"byte", "int2", "uint2"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Method used for computing the variation model.
Default Value : "standard"
Suggested values : Mode ∈ {"standard", "robust", "direct"}
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong *
ID of the variation model.
Complexity
A variation model created with create_variation_model requires 12 ∗ Width ∗ Height bytes of mem-
ory for Mode = ’standard’ and Mode = ’robust’ for Type = ’byte’. For Type = ’uint2’ and Type = ’int2’,
14 ∗ Width ∗ Height bytes are required. For Mode = ’direct’ and after the training data has been cleared with
clear_train_data_variation_model, 2 ∗ Width ∗ Height bytes are required for Type = ’byte’ and
4 ∗ Width ∗ Height for the other image types.
Result
create_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
create_variation_model is processed completely exclusively without parallelization.
Possible Successors
train_variation_model, prepare_direct_variation_model
See also
prepare_variation_model, clear_variation_model,
clear_train_data_variation_model, find_shape_model, affine_trans_image
Module
Matching

get_thresh_images_variation_model ( Hobject *MinImage,


Hobject *MaxImage, Hlong ModelID )

T_get_thresh_images_variation_model ( Hobject *MinImage,


Hobject *MaxImage, const Htuple ModelID )

Return the threshold images used for image comparison by a variation model.
get_thresh_images_variation_model returns the threshold images of the variation
model ModelID in MaxImage and MinImage. The threshold images must be computed
with prepare_variation_model or prepare_direct_variation_model before
they can be read out. The formula used for calculating the threshold images is described with
prepare_variation_model or prepare_direct_variation_model. The threshold images
are used in compare_variation_model and compare_ext_variation_model to detect too large
deviations of an image with respect to the model. As described with compare_variation_model and
compare_ext_variation_model, gray values outside the interval given by MinImage and MaxImage
are regarded as errors.
Parameter

. MinImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2 / uint2


Threshold image for the lower threshold.
. MaxImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject * : real
Threshold image for the upper threshold.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Result
get_thresh_images_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
get_thresh_images_variation_model is reentrant and processed without parallelization.
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
See also
compare_variation_model, compare_ext_variation_model
Module
Matching


get_variation_model ( Hobject *Image, Hobject *VarImage,


Hlong ModelID )

T_get_variation_model ( Hobject *Image, Hobject *VarImage,


const Htuple ModelID )

Return the images used for image comparison by a variation model.


get_variation_model returns the image of the ideal object and the corresponding variation image of the
variation model ModelID in Image and VarImage, respectively. The returned images can be used to check
whether an image of a bad object has been trained with train_variation_model. This can be seen from
the variation image. If an image of a bad object has been trained, the variation image typically has large variations
in areas that should exhibit no variations.
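A short C sketch of such a check (the model file name and the threshold of 20 gray values are only illustrative;
the model must still contain its training data, see clear_train_data_variation_model):

Hobject  MeanImage, VarImage, HighVariation;
Hlong    ModelID;

read_variation_model ("model.var", &ModelID);
get_variation_model (&MeanImage, &VarImage, ModelID);
/* regions whose gray-value variation exceeds 20 (illustrative value) */
threshold (VarImage, &HighVariation, 20.0, 1000000.0);
clear_variation_model (ModelID);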
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2 / uint2
Image of the trained object.
. VarImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject * : real
Variation image of the trained object.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Result
get_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
get_variation_model is reentrant and processed without parallelization.
Possible Predecessors
train_variation_model
See also
prepare_variation_model, compare_variation_model, compare_ext_variation_model
Module
Matching

prepare_direct_variation_model ( const Hobject RefImage,


const Hobject VarImage, Hlong ModelID, double AbsThreshold,
double VarThreshold )

T_prepare_direct_variation_model ( const Hobject RefImage,


const Hobject VarImage, const Htuple ModelID,
const Htuple AbsThreshold, const Htuple VarThreshold )

Prepare a variation model for comparison with an image.


prepare_direct_variation_model prepares a variation model for the image comparison with
compare_variation_model or compare_ext_variation_model. The variation model
must have been created with Mode=’direct’ with create_variation_model. In contrast to
prepare_variation_model, the ideal image of the object and the corresponding variation image
are not computed with train_variation_model, but are specified directly in RefImage and
VarImage. This is useful if the variation model should be created from a single image, as described with
create_variation_model. The variation image should typically be created with edge operators like
sobel_amp, edges_image, or gray_range_rect.
prepare_direct_variation_model converts the ideal image RefImage and the variation image
VarImage into two threshold images and stores them in the variation model. These threshold images are used in
compare_variation_model or compare_ext_variation_model to perform the comparison of the
current image to the variation model.
Two thresholds are used to compute the threshold images. The parameter AbsThreshold determines the mini-
mum amount of gray levels by which the image of the current object must differ from the image of the ideal object.
The parameter VarThreshold determines a factor relative to the variation image for the minimum difference of
the current image and the ideal image. AbsThreshold and VarThreshold each can contain one or two values.
If two values are specified, different thresholds can be determined for too bright and too dark pixels. In this mode,
the first value refers to too bright pixels, while the second value refers to too dark pixels. If one value is specified,
this value refers to both the too bright and too dark pixels. Let i(x, y) be the ideal image RefImage, v(x, y) the
variation image VarImage, a_u = AbsThreshold[0], a_l = AbsThreshold[1], b_u = VarThreshold[0],
and b_l = VarThreshold[1] (or a_u = AbsThreshold, a_l = AbsThreshold, b_u = VarThreshold, and
b_l = VarThreshold, respectively). Then the two threshold images t_u and t_l are computed as follows:

t_u(x, y) = i(x, y) + max{a_u , b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l , b_l · v(x, y)} .

If the current image c(x, y) is compared to the variation model using compare_variation_model, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:

c(x, y) > t_u(x, y)  ∨  c(x, y) < t_l(x, y) .

In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model.
It should be noted that, in order to save memory, RefImage and VarImage are not stored as the ideal image and
variation image in the model.
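If different thresholds for too bright and too dark pixels are required, the tuple version
T_prepare_direct_variation_model has to be used from C, since the simple interface accepts only one value
per threshold parameter. The following sketch assumes that the usual HALCON/C tuple helpers (create_tuple,
set_i, set_d, destroy_tuple) are available and uses purely illustrative image size, file name, and threshold values:

Hobject  Image, VarImage;
Hlong    ModelID;
Htuple   TModelID, TAbsThreshold, TVarThreshold;

read_image (&Image, "model");                    /* assumed 640x480 byte image */
sobel_amp (Image, &VarImage, "sum_abs", 3);
create_variation_model (640, 480, "byte", "direct", &ModelID);

create_tuple (&TModelID, 1);
set_i (TModelID, ModelID, 0);
create_tuple (&TAbsThreshold, 2);
set_d (TAbsThreshold, 20.0, 0);                  /* threshold for too bright pixels */
set_d (TAbsThreshold, 15.0, 1);                  /* threshold for too dark pixels   */
create_tuple (&TVarThreshold, 2);
set_d (TVarThreshold, 1.0, 0);
set_d (TVarThreshold, 2.0, 1);
T_prepare_direct_variation_model (Image, VarImage, TModelID,
                                  TAbsThreshold, TVarThreshold);
destroy_tuple (TModelID);
destroy_tuple (TAbsThreshold);
destroy_tuple (TVarThreshold);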
Parameter
. RefImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Reference image of the object.
. VarImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Variation image of the object.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; (Htuple .) Hlong
ID of the variation model.
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. VarThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0
Example (Syntax: HDevelop)

read_image (Image, ’model’)


sobel_amp (Image, VarImage, ’sum_abs’, 3)
get_image_pointer1 (Image, Pointer, Type, Width, Height)
create_variation_model (Width, Height, Type, ’direct’, ModelID)
prepare_direct_variation_model (Image, VarImage, ModelID, 20, 1)
write_variation_model (ModelID, ’model.var’)
clear_variation_model (ModelID)

Result
prepare_direct_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
prepare_direct_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
sobel_amp, edges_image, gray_range_rect
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, write_variation_model


Alternatives
prepare_variation_model
See also
create_variation_model
Module
Matching

prepare_variation_model ( Hlong ModelID, double AbsThreshold,


double VarThreshold )

T_prepare_variation_model ( const Htuple ModelID,


const Htuple AbsThreshold, const Htuple VarThreshold )

Prepare a variation model for comparison with an image.


prepare_variation_model prepares a variation model for the image comparison with
compare_variation_model or compare_ext_variation_model. This is done by convert-
ing the ideal image and the variation image that have been trained with train_variation_model
into two threshold images and storing them in the variation model. These threshold images are used in
compare_variation_model or compare_ext_variation_model to speed up the comparison of the
current image to the variation model.
Two thresholds are used to compute the threshold images. The parameter AbsThreshold determines the min-
imum amount of gray levels by which the image of the current object must differ from the image of the ideal
object. The parameter VarThreshold determines a factor relative to the variation image for the minimum dif-
ference of the current image and the ideal image. AbsThreshold and VarThreshold each can contain one
or two values. If two values are specified, different thresholds can be determined for too bright and too dark pix-
els. In this mode, the first value refers to too bright pixels, while the second value refers to too dark pixels. If
one value is specified, this value refers to both the too bright and too dark pixels. Let i(x, y) be the ideal image,
v(x, y) the variation image, a_u = AbsThreshold[0], a_l = AbsThreshold[1], b_u = VarThreshold[0],
and b_l = VarThreshold[1] (or a_u = AbsThreshold, a_l = AbsThreshold, b_u = VarThreshold, and
b_l = VarThreshold, respectively). Then the two threshold images t_u and t_l are computed as follows:

t_u(x, y) = i(x, y) + max{a_u , b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l , b_l · v(x, y)} .

If the current image c(x, y) is compared to the variation model using compare_variation_model, the output
region contains all points that differ substantially from the model, i.e., that fulfill the following condition:

c(x, y) > t_u(x, y)  ∨  c(x, y) < t_l(x, y) .

In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model. Furthermore, the training data can be deleted with
clear_train_data_variation_model to save memory.
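A brief C sketch of this typical sequence after training (the file name and the threshold values are only
illustrative; ModelID is assumed to refer to a model already trained with train_variation_model):

prepare_variation_model (ModelID, 10.0, 2.0);
/* the training data is no longer needed for the comparison itself */
clear_train_data_variation_model (ModelID);
write_variation_model (ModelID, "model.var");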
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; (Htuple .) Hlong
ID of the variation model.
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. VarThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0


Result
prepare_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
prepare_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
train_variation_model
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, clear_train_data_variation_model,
write_variation_model
Alternatives
prepare_direct_variation_model
See also
create_variation_model
Module
Matching

read_variation_model ( const char *FileName, Hlong *ModelID )


T_read_variation_model ( const Htuple FileName, Htuple *ModelID )

Read a variation model from a file.


The operator read_variation_model reads a variation model, which has been written with
write_variation_model, from the file FileName.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
File name.
. ModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong *
ID of the variation model.
Result
If the file name is valid, the operator read_variation_model returns H_MSG_TRUE. If necessary an ex-
ception handling is raised.
Parallelization Information
read_variation_model is reentrant and processed without parallelization.
Possible Successors
compare_variation_model, compare_ext_variation_model
See also
write_variation_model
Module
Matching

train_variation_model ( const Hobject Images, Hlong ModelID )


T_train_variation_model ( const Hobject Images,
const Htuple ModelID )

Train a variation model.


train_variation_model trains the variation model that is passed in ModelID with one or more images,
which are passed in Images.
As described for create_variation_model, a variation model that has been created using the mode ’stan-
dard’ can be trained iteratively, i.e., as soon as images of good objects become available, they can be trained with
train_variation_model. The ideal image of the object is computed as the mean of all previous training
images and the images that are passed in Images. The corresponding variation image is computed as the standard
deviation of the training images and the images that are passed in Images.
If the variation model has been created using the mode ’robust’, the model cannot be trained iteratively, i.e., all
training images must be accumulated using concat_obj and be trained with train_variation_model
in a single call. If any images have been trained previously, the training information of the previous call is dis-
carded. The image of the ideal object is computed as the median of all training images passed in Images. The
corresponding variation image is computed as a suitably scaled median absolute deviation of the training images
and the median image.
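For the ’robust’ mode, the accumulation with concat_obj might look as follows in C (a sketch; the image size,
the file names, and the number of training images are assumptions):

Hobject  Image1, Image2, Image3, Tmp, TrainingImages;
Hlong    ModelID;

create_variation_model (640, 480, "byte", "robust", &ModelID);
read_image (&Image1, "good_object_01");
read_image (&Image2, "good_object_02");
read_image (&Image3, "good_object_03");
concat_obj (Image1, Image2, &Tmp);
concat_obj (Tmp, Image3, &TrainingImages);
train_variation_model (TrainingImages, ModelID);    /* single call for 'robust' */
prepare_variation_model (ModelID, 10.0, 2.0);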
Attention
At most 65535 training images can be trained.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Images of the object to be trained.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Example (Syntax: HDevelop)

open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1,


’default’, -1, ’default’, ’model.seq’, ’default’,
-1, -1, FGHandle)
grab_image (Image, FGHandle)
get_image_pointer1 (Image, Pointer, Type, Width, Height)
disp_obj (Image, WindowHandle)
draw_region (Region, WindowHandle)
reduce_domain (Image, Region, ImageReduced)
area_center (Region, Area, RowRef, ColumnRef)
create_shape_model (ImageReduced, 4, 0, rad(360), rad(1), ’none’,
’use_polarity’, 40, 10, TemplateID)
create_variation_model (Width, Height, Type, ’standard’, ModelID)
for K := 1 to 100 by 1
grab_image (Image, FGHandle)
find_shape_model (Image, TemplateID, 0, rad(360), 0.5, 1, 0.5,
’true’, 4, 0.9, Row, Column, Angle, Score)
if (|Score| = 1)
vector_angle_to_rigid (Row, Column, Angle, RowRef,
ColumnRef, 0, HomMat2D)
affine_trans_image (Image, ImageTrans, HomMat2D, ’constant’,
’false’)
train_variation_model (ImageTrans, ModelID)
endif
endfor
prepare_variation_model (ModelID, 10, 4)
write_region (Region, ’model.reg’)
write_shape_model (TemplateID, ’model.shm’)
write_variation_model (ModelID, ’model.var’)
clear_shape_model (TemplateID)
clear_variation_model (ModelID)
close_framegrabber (FGHandle)

Result
train_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
train_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
create_variation_model, find_shape_model, affine_trans_image, concat_obj


Possible Successors
prepare_variation_model
See also
prepare_variation_model, compare_variation_model, compare_ext_variation_model,
clear_variation_model
Module
Matching

write_variation_model ( Hlong ModelID, const char *FileName )


T_write_variation_model ( const Htuple ModelID,
const Htuple FileName )

Write a variation model to a file.


write_variation_model writes a variation model to the file FileName. The model can be read with
read_variation_model.
Parameter

. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong


ID of the variation model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the file name is valid (write permission), the operator write_variation_model returns H_MSG_TRUE.
If necessary an exception handling is raised.
Parallelization Information
write_variation_model is reentrant and processed without parallelization.
Possible Predecessors
train_variation_model
See also
read_variation_model
Module
Matching

15.13 Kalman-Filter
T_filter_kalman ( const Htuple Dimension, const Htuple Model,
const Htuple Measurement, const Htuple PredictionIn,
Htuple *PredictionOut, Htuple *Estimate )

Estimate the current state of a system with the help of the Kalman filtering.
The operator filter_kalman returns an estimate of the current state (or also a prediction of a future state)
of a discrete, stochastically disturbed, linear system. In practice, Kalman filters are used successfully in image
processing in the analysis of image sequences (background identification, lane tracking with the help of line tracing
or region analysis, etc.). A short introduction concerning the theory of the Kalman filters will be followed by a
detailed description of the routine filter_kalman itself.
KALMAN FILTER: A discrete, stochastically disturbed, linear system is characterized by the following quantities:

• State x(t): Describes the current state of the system (speeds, temperatures,...).
• Parameter u(t): Inputs from outside into the system.
• Measurement y(t): Measurements gained by observing the system. They indicate the state of the system (or
at least parts of it).


• An output function describing the dependence of the measurements on the state.


• A transition function indicating how the state changes with regard to time, the current value and the parame-
ters.

The output function and the transition function are linear. Their application can therefore be written as a multipli-
cation with a matrix.
The transition function is described with the help of the transition matrix A(t) and the parameter matrix G(t); the
output function is described by the measurement matrix C(t). Hereby A(t) characterizes the dependency of the new
state on the old one, while G(t) indicates the dependency on the parameters. In practice it is rarely possible (or at
least too time consuming) to describe a real system and its behaviour in a complete and exact way. Normally only
a relatively small number of variables will be used to simulate the behaviour of the system. This leads to an error,
the so-called system error (also called system disturbance) v(t).
The output function, too, is usually not exact. Each measurement is faulty. The measurement errors will be called
w(t). Therefore the following system equations arise:

x(t + 1) = A(t)x(t) + G(t)u(t) + v(t)
y(t) = C(t)x(t) + w(t)

The system error v(t) and the measurement error w(t) are not known. In systems that are analyzed with the help of
the Kalman filter, these two errors are considered to be Gaussian distributed random vectors (therefore the
expression "stochastically disturbed systems"). Therefore the system can be calculated if the corresponding expected
values for v(t) and w(t) as well as the covariance matrices are known.
The estimation of the state of the system is carried out in the same way as in the Gaussian-Markov-estimation.
However, the Kalman filter is a recursive algorithm which is based only on the current measurements y(t) and the
latest state x(t). The latter implicitly also includes the knowledge about earlier measurements.
A suitable estimate value x_0, which is interpreted as the expected value of a random variable for x(0), must be
specified for the initial value x(0). This variable should have an expected error value of 0 and the covariance
matrix P_0, which also has to be specified. At a certain time t the expected values of both disturbances v(t) and
w(t) should be 0 and their covariances should be Q(t) and R(t). x(t), v(t) and w(t) are usually assumed to be
uncorrelated (any kind of noise process can be modelled - however, the development of the necessary matrices by
the user will then be considerably more demanding). The following conditions must be met by the searched estimate
values x_t:

• The estimate values x_t are linearly dependent on the actual value x(t) and on the measurement sequence
y(0), y(1), · · · , y(t).
• x_t is expected to be unbiased, i.e. E[x_t] = E[x(t)].
• The quality criterion for x_t is the criterion of minimal variance, i.e. the variance of the estimation error,
defined as x(t) − x_t, is as small as possible.

After the initialization


x̂(0) = x0 , P̂ (0) = P0
at each point in time t the Kalman filter executes the following calculation steps:

(K-III)  K(t) = P̂(t)C'(t) / ( C(t)P̂(t)C'(t) + R(t) )
(K-IV)   x_t = x̂(t) + K(t)( y(t) − C(t)x̂(t) )
(K-V)    P̃(t) = P̂(t) − K(t)C(t)P̂(t)
(K-I)    x̂(t+1) = A(t)x_t + G(t)u(t)
(K-II)   P̂(t+1) = A(t)P̃(t)A'(t) + Q(t)

Hereby P̃(t) is the covariance matrix of the estimation error, x̂(t) is the extrapolation value or, respectively, the
prediction value of the state, P̂(t) contains the covariances of the prediction error x̂ − x, K is the amplifier matrix
(the so-called Kalman gain), and X' denotes the transpose of a matrix X.
Please note that the prediction of the future state is also possible with equation (K-I). Sometimes this is very
useful in image processing in order to determine "regions of interest" in the next image.
As mentioned above, it is much more demanding to model any kind of noise processes. If for example the system
noise and the measurement noise are correlated with the corresponding covariance matrix L, the equations for the
Kalman gain and the error covariance matrix have to be modified:


(K-III)  K(t) = ( P̂(t)C'(t) + L(t) ) / ( C(t)P̂(t)C'(t) + C(t)L(t) + L'(t)C'(t) + R(t) )
(K-V)    P̃(t) = P̂(t) − K(t)C(t)P̂(t) − K(t)L'(t)

This means that the user himself has to establish the linear system equations from (K-I) up to (K-V) with respect to
the actual problem. The user must therefore develop a mathematical model upon which the solution to the problem
can be based. Statistical characteristics describing the inaccuracies of the system as well as the measurement
errors, which are to be expected, thereby have to be estimated if they cannot be calculated exactly. Therefore the
following individual steps are necessary:

1. Developing a mathematical model


2. Selecting characteristic state variables
3. Establishing the equations describing the changes of these state variables and their linearization (matrices A
and G)
4. Establishing the equations describing the dependency of the measurement values of the system on the state
variables and their linearization (matrix C)
5. Developing or estimating of statistical dependencies between the system disturbances (matrix Q)
6. Developing or estimating of statistical dependencies between the measurement errors (matrix R)
7. Initialization of the initial state

As mentioned above, the initialization of the system (point 7) hereby necessitates to indicate an estimate x0 of the
state of the system at the time 0 and the corresponding covariance matrix P0 . If the exact initial state is not known,
it is recommendable to set the components of the vector x0 to the average values of the corresponding range, and
to set high values for P0 (about the size of the squares of the range). After a few iterations (when the number of the
accumulated measurement values in total has exceeded the number of the system values), the values which have
been determined in this way are also useable.
If on the other hand the initial state is known exactly, all entries for P0 have to be set to 0, because P0 describes
the covariances of the error between the estimated value x0 and the actual value x(0).
THE FILTER ROUTINE:
A Kalman filter is dependent on a range of data which can be organized in four groups:

Model parameter: transition matrix A, control matrix G including the parameter u and the measurement matrix
C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L, and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂

Thereby many systems can work without input "from outside", i.e. without G and u. Further, system errors and
measurement errors are normally not correlated (L is dropped).
Actually the data necessary for the routine will be set by the following parameters:

Dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. Dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
determining control (i.e. without influence "from outside"), [n,m,0] has to be passed.
Model: This parameter includes the lined up matrices (vectors) A,C,Q,G,u and (if necessary) L having been stored
in row-major order. Model therefore is a vector of the length n × n + n × m + n × n + n × p + p[+n × m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
Measurement: This parameter includes the matrix R which has been stored in row-major order, and the mea-
surement vector y lined up. Measurement therefore is a vector of the dimension m × m + m.


PredictionIn / PredictionOut: These two parameters include the matrix P̂ (the extrapolation-error co-
variance matrix) which has been stored in row-major order and the extrapolation vector x̂ lined up. This
means, they are vectors of the length n × n + n. PredictionIn therefore is an input parameter, which
must contain P̂ (t) and x̂(t) at the current time t. With PredictionOut the routine returns the correspond-
ing predictions P̂ (t + 1) and x̂(t + 1).
Estimate: With this parameter the routine returns the matrix P̃ (the estimation-error covariance matrix) which
has been stored in row-major order and the estimated state x̃ lined up. Estimate therefore is a vector of
the length n × n + n.

Please note that the covariance matrices (Q, R, P̂ , P̃ ) must of course be symmetric.
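To illustrate the packing scheme described above, the following C fragment builds the tuples for a minimal system
with n = 1, m = 1, p = 0 (no control input, no correlation matrix L). It assumes that the HALCON/C tuple helpers
create_tuple, set_i, and set_d are available; all numeric values are purely illustrative:

/* Model        = A | C | Q   (1 + 1 + 1 = 3 values)                 */
/* Measurement  = R | y       (1 + 1     = 2 values)                 */
/* PredictionIn = P^ | x^     (1 + 1     = 2 values)                 */
Htuple  Dim, Model, Meas, PredIn, PredOut, Est;

create_tuple (&Dim, 3);
set_i (Dim, 1, 0);                  /* n: one state variable          */
set_i (Dim, 1, 1);                  /* m: one measurement value       */
set_i (Dim, 0, 2);                  /* p: no controller members       */

create_tuple (&Model, 3);
set_d (Model, 1.0, 0);              /* A: constant state model        */
set_d (Model, 1.0, 1);              /* C: state is measured directly  */
set_d (Model, 0.5, 2);              /* Q: system-error variance       */

create_tuple (&Meas, 2);
set_d (Meas, 1.2, 0);               /* R: measurement-error variance  */
set_d (Meas, 0.9, 1);               /* y: current measurement         */

create_tuple (&PredIn, 2);
set_d (PredIn, 100.0, 0);           /* P^: large initial uncertainty  */
set_d (PredIn, 0.0,   1);           /* x^: initial state estimate     */

T_filter_kalman (Dim, Model, Meas, PredIn, &PredOut, &Est);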
Parameter
. Dimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
The dimensions of the state vector, the measurement vector and the controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ Dimension ≤ 30
. Model (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The lined up matrices A, C, Q, possibly G and u, and if necessary L which have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ Model ≤ 10000.0
. Measurement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix R stored in row-major order and the measurement vector y lined up.
Default Value : [1.2,1.0]
Typical range of values : 0.0 ≤ Measurement ≤ 10000.0
. PredictionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
Default Value : [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0,0.0]
Typical range of values : 0.0 ≤ PredictionIn ≤ 10000.0
. PredictionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
. Estimate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P̃ (the estimation-error covariances) stored in row-major order and the estimated state x̃ lined up.
Example

/* Typical procedure: */
/* 1. To initialize the variables
which describe the model, e.g. with */

read_kalman("kalman.init",Dim,Mod,Meas,Pred) ;

/* Generation of the first measurements (typically of the


first image of an image series) with an appropriate
problem-specific routine (there is a fictitious routine
extract_features in this example): */

extract_features(Image1,Meas,&Meas1) ;

/* first Kalman-Filtering: */

filter_kalman(Dim,Mod,Meas1,Pred,&Pred1,&Est1) ;

/* To use the estimate value (if need be the prediction too) */


/* with a problem-specific routine (here use_est): */

use_est(Est1) ;


/* To get the next measurements (e.g. from the next image): */

extract_next_features(Image2,Meas1,&Meas2) ;

/* if need be Update of the model parameter (a constant model) */


/* second Kalman-Filtering: */

filter_kalman(Dim,Mod,Meas2,Pred1,&Pred2,&Est2) ;
use_est(Est2) ;
extract_next_features(Image3,Meas2,&Meas3) ;

/* etc. */

Result
If the parameter values are correct, the operator filter_kalman returns the value H_MSG_TRUE. Otherwise
an exception handling will be raised.
Parallelization Information
filter_kalman is reentrant and processed without parallelization.
Possible Predecessors
read_kalman, sensor_kalman
Possible Successors
update_kalman
See also
read_kalman, update_kalman, sensor_kalman
References
W. Hartinger: "Entwurf eines anwendungsunabhängigen Kalman-Filters mit Untersuchungen im Bereich der
Bildfolgenanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof.
Radig; 1991.
R.E. Kalman: "A New Approach to Linear Filtering and Prediction Problems"; Transactions ASME, Ser. D: Jour-
nal of Basic Engineering; Vol. 82, pp. 34-45; 1960.
R.E. Kalman, P.L. Falb, M.A. Arbib: "Topics in Mathematical System Theory"; McGraw-Hill Book Company, New
York; 1969.
K.-P. Karmann, A. von Brandt: "Moving Object Recognition Using an Adaptive Background Memory"; Time-
Varying Image Processing and Moving Object Recognition 2 (ed.: V. Cappellini), Proc. of the 3rd International
Workshop, Florence, Italy, May 29th - 31st, 1989; Elsevier, Amsterdam; 1990.
Module
Foundation

T_read_kalman ( const Htuple FileName, Htuple *Dimension,


Htuple *Model, Htuple *Measurement, Htuple *Prediction )

Read the description file of a Kalman filter.


The operator read_kalman reads the description file FileName of a Kalman filter. Kalman filters return
an estimate of the current state (or even the prediction of a future state) of a discrete, stochastically disturbed,
linear system. They are successfully used in image processing, especially in the analysis of image sequences. A
Kalman filtering is based on a mathematical model of the system to be examined which at any point in time has
the following characteristics:

Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Estimate of the initial state of the system: state x0 and corresponding covariance matrix P0


Many systems do not need entries "from outside", and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). The characteristics mentioned above can be
stored in an ASCII-file and then can be read with the help of the operator read_kalman. This ASCII-file must
have the following structure:
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
[ + matrix G + vector u ]
[ + matrix L ]
+ matrix R
[ + matrix P0 ]
[ + vector x0 ]

The dimension row thereby is always of the following form:


n = <integer> m = <integer> p = <integer>
whereby n indicates the number of the state variables, m the number of the measurement values and p the number
of the controller members (see also Dimension). The maximal dimension will hereby be limited by a system
constant (= 30 for the time being).
The content row has the following form:
A ∗ C ∗ Q ∗ G ∗ u ∗ L ∗ R ∗ P ∗ x∗
and describes the following content of the file. Instead of ’∗’, ’+’ (= parameter is available) respectively ’-’ (=
parameter is missing) have to be set. Please note that only the parameters marked by [...] in the above list may be
left out in the description file. If the initial state estimate x_0 is missing (i.e. ’x-’), the components of the vector
are assumed to be 0.0. If the covariance matrix P_0 of the initial state estimate is missing (i.e. ’P-’), the error is
assumed to be very large; in this case the matrix elements are set to 10000.0. This value seems to be very high,
however, it is only sufficient if the range of the components of the state vector x is smaller by several powers of ten.
(r × s) matrices will be stored per row in the following form:

< comment, i.e. string >
< a11 > < a12 > · · · < a1s >
   ...
< ar1 > < ar2 > · · · < ars >

(the spaces and line feed characters can be chosen at will);
vectors will be stored correspondingly in the following form:

< comment, i.e. string >
< a1 > · · · < ak >
The following parameter values are returned by the operator read_kalman:

Dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. Dimension thereby is a vector [n,m,p], whereby n indicates the number of the state variables,
m the number of the measurement values and p the number of the controller members. For a system without
determining control (i.e. without influence "from outside"), Dimension = [n,m,0].
Model: This parameter includes the lined up matrices (vectors) A, C, Q, G, u and (if necessary) L having been
stored in row-major order. Model therefore is a vector of the length n×n+n×m+n×n+n×p+p[+n×m].
The last summand is dropped, in case the system errors and measurement errors are not correlated, i.e. there
is no value for L.
Measurement: This parameter includes the matrix R which has been stored in row-major order.
Measurement therefore is a vector of the dimension m × m.
Prediction: This parameter includes the matrix P0 (the error covariance matrix of the initial state estimate)
and the initial state estimate x0 lined up. This means, it is a vector of the length n × n + n.


Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Description file for a Kalman filter.
Default Value : "kalman.init"
. Dimension (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The dimensions of the state vector, the measurement vector and the controller vector.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The lined up matrices A, C, Q, possibly G and u, and if necessary L stored in row-major order.
. Measurement (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix R stored in row-major order.
. Prediction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P0 (error covariance matrix of the initial state estimate) stored in row-major order and the initial
state estimate x0 lined up.
Example

/*An example of the description-file: */


/* */
/*n=3 m=1 p=0 */
/*A+C+Q+G-u-L-R+P+x+ */
/*transition matrix A: */
/*1 1 0.5 */
/*0 1 1 */
/*0 0 1 */
/*measurement matrix C: */
/*1 0 0 */
/*system-error covariance matrix Q: */
/*54.3 37.9 48.0 */
/*37.9 34.3 42.5 */
/*48.0 42.5 43.7 */
/*measurement-error covariance matrix R: */
/*1.2 */
/*estimation-error covariance matrix (for the initial estimate) P0: */
/*0 0 0 */
/*0 180.5 0 */
/*0 0 100 */
/*initial estimate x0: */
/*0 100 0 */
/* */
/*the result of read_kalman with the upper descriptionfile */
/*as inputparameter: */
/* */
/*Dimension = [3,1,0] */
/*Model = [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0, */
/* 54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7] */
/*Measurement = [1.2] */
/*Prediction = [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0, */
/* 0.0] */

Result
If the description file is readable and correct, the operator read_kalman returns the value H_MSG_TRUE.
Otherwise an exception handling will be raised.
Parallelization Information
read_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
update_kalman, filter_kalman, sensor_kalman


Module
Foundation

T_sensor_kalman ( const Htuple Dimension, const Htuple MeasurementIn,


Htuple *MeasurementOut )

Interactive input of measurement values for a Kalman filtering.


The operator sensor_kalman supports the interactive input of measurement values for a Kalman filtering.
Kalman filters return an estimate of the current state (or even the prediction of a future state) of a discrete, stochas-
tically disturbed, linear system. They are successfully used in image processing, especially in the analysis of image
sequences.
Each filtering is hereby based on certain measurement values. How these values are extracted from images or
sensor data depends strongly on the individual application and therefore must be entirely up to the user. However,
the operator sensor_kalman allows an interactive input of (fictitious) measurement values y and the corre-
sponding measurement-error covariance matrix R. Especially the testing of Kalman filters during the development
can hereby be facilitated.
The parameters MeasurementIn and MeasurementOut include the matrix R which has been stored in
row-major order and the measurement vector y lined up, i.e. they are vectors of the length Dimension ×
Dimension + Dimension
Parameter
. Dimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of measurement values.
Default Value : 1
Typical range of values : 0 ≤ Dimension ≤ 30
. MeasurementIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix R stored in row-major order and the measurement vector y lined up.
Default Value : [1.2,1.0]
Typical range of values : 0.0 ≤ MeasurementIn ≤ 10000.0
. MeasurementOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix R stored in row-major order and the measurement vector y lined up.
Result
If the parameters are correct, the operator sensor_kalman returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
sensor_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
filter_kalman, read_kalman, update_kalman
Module
Foundation

T_update_kalman ( const Htuple FileName, const Htuple DimensionIn,


const Htuple ModelIn, const Htuple MeasurementIn,
Htuple *DimensionOut, Htuple *ModelOut, Htuple *MeasurementOut )

Read an update file of a Kalman filter.


The operator update_kalman reads the update file FileName of a Kalman filter. Kalman filters return an
estimate of the current state (or even the prediction of a future state) of a discrete, stochastically disturbed, linear
system.
A Kalman filtering is based on a mathematical model of the system to be examined which at any point in time has
the following characteristics:


Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂

Many systems do not need entries "from outside" and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). Some of the characteristics mentioned above
may change dynamically (from one iteration to the next). The operator update_kalman serves to modify parts
of the system according to an update file (ASCII) with the following structure (see also read_kalman):
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
+ matrix G + vector u
+ matrix L
+ matrix R

The dimension row thereby has the following form:


n = <integer> m = <integer> p = <integer>
whereby n indicates the number of the state variables, m the number of the measurement values and p the number
of the controller members (see also DimensionIn / DimensionOut). The maximal dimension will hereby be
limited by a system constant (= 30 for the time being). As in this case changes should take effect at a valid model,
the dimensions n and m are invariant (and will only be indicated for purposes of control).
The content row has the following form:
A ∗ C ∗ Q ∗ G ∗ u ∗ L ∗ R∗
and describes the further content of the file. Instead of ’∗’, ’+’ (= parameter is available) respectively ’-’ (=
parameter is missing) has to be set. In contrast to description files for read_kalman, the system description
needs not be complete in this case. Only those parts of the system which are changed must be indicated. The
indication of estimated values is unnecessary, as these values must stem from the latest filtering according to the
structure of the filter.
(r × s) matrices will be stored in row-major order in the following form:

< comment, i.e. string >
< a11 > < a12 > · · · < a1s >
   ...
< ar1 > < ar2 > · · · < ars >

(the spaces/line feed characters can be chosen at will);

vectors will be stored correspondingly in the following form:

< comment, i.e. string >
< a1 > · · · < ak >

The following parameter values of the operator read_kalman will be changed:

DimensionIn / DimensionOut: These parameters include the dimensions of the state vector, measurement
vector and controller vector and therefore are vectors [n,m,p], whereby n indicates the number of the state
variables, m the number of the measurement values and p the number of the controller members. n and m are
invariant for a given system, i.e. they must not differ from corresponding input values of the update file. For
a system without influence "from outside", p = 0.


ModelIn / ModelOut: These parameters include the lined up matrices (vectors) A, C, Q, G, u and if necessary
L which have been stored in row-major order. ModelIn / ModelOut therefore are vectors of the length
n × n + n × m + n × n + n × p + p[+n × m]. The last summand is dropped if system errors and measurement
errors are not correlated, i.e. no value has been set for L.
MeasurementIn / MeasurementOut: These parameters include the matrix R stored in row-major order, and
therefore are vectors of the dimension m × m.
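This layout can be illustrated with a small sketch in plain C (a hypothetical helper, not part of the operator interface), which assembles the flat ModelIn vector for the example system used below (n = 3, m = 1, p = 0, no L matrix):

/* Sketch: build the flat ModelIn vector of length n*n + n*m + n*n = 21 for the
   example system n = 3, m = 1, p = 0. A, C and Q are concatenated in row-major
   order; G and u are absent (p = 0) and L is omitted because system errors and
   measurement errors are assumed to be uncorrelated. */
#include <stddef.h>

static void pack_example_model (double model_in[21])
{
  static const double A[3][3] = { {1.0, 1.0, 0.5}, {0.0, 1.0, 1.0}, {0.0, 0.0, 1.0} };
  static const double C[3][1] = { {1.0}, {0.0}, {0.0} };
  static const double Q[3][3] = { {54.3, 37.9, 48.0}, {37.9, 34.3, 42.5}, {48.0, 42.5, 43.7} };
  size_t i, j, k = 0;
  for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) model_in[k++] = A[i][j];  /* A: n x n */
  for (i = 0; i < 3; i++)                         model_in[k++] = C[i][0];  /* C: n x m */
  for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) model_in[k++] = Q[i][j];  /* Q: n x n */
}
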
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Update file for a Kalman filter.
Default Value : "kalman.updt"
. DimensionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
The dimensions of the state vector, measurement vector and controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ DimensionIn ≤ 30
. ModelIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ ModelIn ≤ 10000.0
. MeasurementIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix R stored in row-major order.
Default Value : [1,2]
Typical range of values : 0.0 ≤ MeasurementIn ≤ 10000.0
. DimensionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The dimensions of the state vector, measurement vector and controller vector.
. ModelOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
. MeasurementOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix R stored in row-major order.
Example

/* The following values are describing the system */


/* */
/*DimensionIn = [3,1,0] */
/*ModelIn = [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0, */
/* 54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7] */
/*MeasurementIn = [1,2] */
/* */
/*An example of the Updatefile: */
/* */
/*n=3 m=1 p=0 */
/*A+C-Q-G-u-L-R- */
/*transitions at time t=15: */
/*2 1 1 */
/*0 2 2 */
/*0 0 2 */
/* */
/*the results of update_kalman: */
/* */
/*DimensionOut = [3,1,0] */
/*ModelOut = [2.0,1.0,1.0,0.0,2.0,2.0,0.0,0.0,2.0,1.0,0.0,0.0, */
/* 54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7] */
/*MeasurementOut = [1,2] */
Result
If the update file is readable and correct, the operator update_kalman returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.


Parallelization Information
update_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
read_kalman, filter_kalman, sensor_kalman
Module
Foundation

15.14 Measure
close_all_measures ( )
T_close_all_measures ( )

Delete all measure objects.


close_all_measures deletes all measure objects that have been created using
gen_measure_rectangle2 or gen_measure_arc. The memory used for the measure objects is
freed.
Attention
close_all_measures exists solely for the purpose of implementing the “reset program” functionality in HDe-
velop. close_all_measures must not be used in any application.
Result
close_all_measures always returns H_MSG_TRUE.
Parallelization Information
close_all_measures is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, measure_pos, measure_pairs
Alternatives
close_measure
Module
1D Metrology

close_measure ( Hlong MeasureHandle )


T_close_measure ( const Htuple MeasureHandle )

Delete a measure object.


close_measure deletes the measure object given by MeasureHandle. The memory used for the measure
object is freed.
Parameter

. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong


Measure object handle.
Result
If the parameter values are correct the operator close_measure returns the value H_MSG_TRUE. Otherwise
an exception handling is raised.
Parallelization Information
close_measure is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, measure_pos, measure_pairs


See also
close_all_measures
Module
1D Metrology

T_fuzzy_measure_pairing ( const Hobject Image,


const Htuple MeasureHandle, const Htuple Sigma,
const Htuple AmpThresh, const Htuple FuzzyThresh,
const Htuple Transition, const Htuple Pairing, const Htuple NumPairs,
Htuple *RowEdgeFirst, Htuple *ColumnEdgeFirst, Htuple *AmplitudeFirst,
Htuple *RowEdgeSecond, Htuple *ColumnEdgeSecond,
Htuple *AmplitudeSecond, Htuple *RowPairCenter,
Htuple *ColumnPairCenter, Htuple *FuzzyScore, Htuple *IntraDistance )

Extract straight edge pairs perpendicular to a rectangle or an annular arc.


fuzzy_measure_pairing serves to extract straight edge pairs that lie perpendicular to the major axis of a
rectangle or an annular arc. In addition to measure_pos it uses fuzzy member functions to evaluate and select
the edge pairs.
The extraction algorithm is identical to fuzzy_measure_pos. In addition, the edges are grouped to pairs: If
Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of the
rectangle or the annular arc are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the cor-
responding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond.
If Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst.
Having extracted subpixel edge locations, the edges are paired. The features of a possible edge pair are evaluated
by a fuzzy function, set by set_fuzzy_measure. Which edge pairs are selected can be determined with the
parameter FuzzyThresh, which constitutes a threshold on the weight over all fuzzy sets, i.e., the geometric
mean of the weights of the defined fuzzy membership functions. As an extension to fuzzy_measure_pairs,
the pairing algorithm can be restricted by Pairing. Currently only ’no_restriction’ is available, which returns all
possible edge pairs, allowing interleaving and inclusion of pairs. Finally, the NumPairs best-scored edge pairs
are returned; a value of 0 returns all edge pair combinations that were found.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond, the fuzzy scores in
FuzzyScore. In addition, the distance between each edge pair is returned in IntraDistance, corresponding
to the distance between EdgeFirst[i] and EdgeSecond[i].
Attention
fuzzy_measure_pairing only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, Sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pairing ignores the domain of Image for efficiency reasons.
If certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2


Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.


. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double


Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ AmpThresh ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum fuzzy value.
Default Value : 0.5
Suggested values : FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Typical range of values : 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended Increment : 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Select the first gray value transition of the edge pairs.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. Pairing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Constraint of pairing.
Default Value : "no_restriction"
List of values : Pairing ∈ {"no_restriction"}
. NumPairs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Number of edge pairs.
Default Value : 10
Suggested values : NumPairs ∈ {0, 1, 10, 20, 50}
Typical range of values : 0 ≤ NumPairs
Recommended Increment : 1
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the first edge.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the first edge.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the second edge.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the second edge.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the second edge (with sign).
. RowPairCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the center of the edge pair.
. ColumnPairCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the center of the edge pair.
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Fuzzy evaluation of the edge pair.
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between the edges of the edge pair.
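A minimal call sketch (an illustrative fragment, not a complete program): it assumes a 512 × 512 byte image that has already been read, a fuzzy membership function attached beforehand with set_fuzzy_measure, and the HALCON/C tuple helpers create_tuple, set_i/set_d/set_s, get_d, length_tuple, and destroy_tuple; all numeric values are example values.

#include <stdio.h>
#include "HalconC.h"

/* Sketch: extract the 5 best-scored edge pairs along a measure rectangle.
   A fuzzy membership function is assumed to have been set beforehand with
   set_fuzzy_measure (not shown). */
void fuzzy_pairing_sketch (Hobject image)
{
  Hlong  handle, i, num;
  Htuple mhandle, sigma, amp, fuzzy, transition, pairing, numpairs;
  Htuple rf, cf, a1, rs, cs, a2, rc, cc, score, intra;

  gen_measure_rectangle2 (256.0, 256.0, 0.0, 200.0, 5.0,
                          512, 512, "nearest_neighbor", &handle);
  create_tuple (&mhandle, 1);    set_i (mhandle, handle, 0);
  create_tuple (&sigma, 1);      set_d (sigma, 1.0, 0);
  create_tuple (&amp, 1);        set_d (amp, 30.0, 0);
  create_tuple (&fuzzy, 1);      set_d (fuzzy, 0.5, 0);
  create_tuple (&transition, 1); set_s (transition, "all", 0);
  create_tuple (&pairing, 1);    set_s (pairing, "no_restriction", 0);
  create_tuple (&numpairs, 1);   set_i (numpairs, 5, 0);

  T_fuzzy_measure_pairing (image, mhandle, sigma, amp, fuzzy, transition,
                           pairing, numpairs, &rf, &cf, &a1, &rs, &cs, &a2,
                           &rc, &cc, &score, &intra);

  num = length_tuple (score);
  for (i = 0; i < num; i++)
    printf ("pair %ld: score %.2f, width %.2f\n",
            (long)i, get_d (score, i), get_d (intra, i));

  /* release tuples and the measure object (destroy_tuple calls for the
     remaining tuples omitted for brevity) */
  destroy_tuple (score); destroy_tuple (intra);
  close_measure (handle);
}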
Result
If the parameter values are correct the operator fuzzy_measure_pairing returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.


Parallelization Information
fuzzy_measure_pairing is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology

T_fuzzy_measure_pairs ( const Hobject Image,


const Htuple MeasureHandle, const Htuple Sigma,
const Htuple AmpThresh, const Htuple FuzzyThresh,
const Htuple Transition, Htuple *RowEdgeFirst,
Htuple *ColumnEdgeFirst, Htuple *AmplitudeFirst,
Htuple *RowEdgeSecond, Htuple *ColumnEdgeSecond,
Htuple *AmplitudeSecond, Htuple *RowEdgeCenter,
Htuple *ColumnEdgeCenter, Htuple *FuzzyScore, Htuple *IntraDistance,
Htuple *InterDistance )

Extract straight edge pairs perpendicular to a rectangle or an annular arc.


fuzzy_measure_pairs serves to extract straight edge pairs which lie perpendicular to the major axis of a
rectangle or an annular arc. In addition to measure_pairs it uses fuzzy member functions to evaluate and
select the edge pairs.
The extraction algorithm is identical to fuzzy_measure_pos. In addition, the edges are grouped to pairs:
If Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of
the rectangle or annular arc are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the cor-
responding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond.
If Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst. I.e., dependent on the positioning of the
measure object, edge pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are
returned. This is suited, e.g., to measure objects with different brightness relative to the background.
Having extracted subpixel edge locations, the edges are paired. The pairing algorithm groups the edges such that
interleavings and inclusions of pairs are prohibited. The features of an edge pair are evaluated by a fuzzy function,
which can be set by set_fuzzy_measure or set_fuzzy_measure_norm_pair. Which edge pairs are
selected can be determined with the parameter FuzzyThresh, which constitutes a threshold on the weight over
all fuzzy sets, i.e., the geometric mean of the weights of the defined fuzzy member functions.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond, the fuzzy scores
in FuzzyScore. In addition, the distance between each edge pair is returned in IntraDistance and the
distance between consecutive edge pairs is returned in InterDistance. Here, IntraDistance[i] corresponds to
the distance between EdgeFirst[i] and EdgeSecond[i], while InterDistance[i] corresponds to the distance between
EdgeSecond[i] and EdgeFirst[i+1], i.e., the tuple InterDistance contains one element less than the tuples of
the edge pairs.
Attention
fuzzy_measure_pairs only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, Sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see gen_measure_rectangle2).


It should be kept in mind that fuzzy_measure_pairs ignores the domain of Image for efficiency reasons.
If certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2


Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ AmpThresh ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum fuzzy value.
Default Value : 0.5
Suggested values : FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Typical range of values : 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended Increment : 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Select the first gray value transition of the edge pairs.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the first edge point.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the first edge point.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the second edge point.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the second edge point.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the second edge (with sign).
. RowEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the center of the edge pair.
. ColumnEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the center of the edge pair.
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Fuzzy evaluation of the edge pair.
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive edge pairs.
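A minimal call sketch along an annular arc, assuming a 512 × 512 byte image already read, a fuzzy membership function set beforehand with set_fuzzy_measure, and the standard HALCON/C tuple helpers (create_tuple, set_i/set_d/set_s, get_d, length_tuple, destroy_tuple); all numeric values are example values.

#include <stdio.h>
#include "HalconC.h"

/* Sketch: extract edge pairs perpendicular to an annular arc and print the
   width of each pair and the gap to the next pair. */
void fuzzy_pairs_sketch (Hobject image)
{
  Hlong  handle, i, num;
  Htuple mhandle, sigma, amp, fuzzy, transition;
  Htuple rf, cf, a1, rs, cs, a2, rc, cc, score, intra, inter;

  gen_measure_arc (200.0, 300.0, 80.0, 0.0, 6.28318, 10.0,
                   512, 512, "nearest_neighbor", &handle);
  create_tuple (&mhandle, 1);    set_i (mhandle, handle, 0);
  create_tuple (&sigma, 1);      set_d (sigma, 1.0, 0);
  create_tuple (&amp, 1);        set_d (amp, 30.0, 0);
  create_tuple (&fuzzy, 1);      set_d (fuzzy, 0.5, 0);
  create_tuple (&transition, 1); set_s (transition, "all", 0);

  T_fuzzy_measure_pairs (image, mhandle, sigma, amp, fuzzy, transition,
                         &rf, &cf, &a1, &rs, &cs, &a2, &rc, &cc,
                         &score, &intra, &inter);

  num = length_tuple (intra);              /* one entry per edge pair */
  for (i = 0; i < num; i++)
  {
    printf ("pair %ld: width %.2f", (long)i, get_d (intra, i));
    if (i < num - 1)                       /* inter has one element less */
      printf (", gap to next pair %.2f", get_d (inter, i));
    printf ("\n");
  }

  destroy_tuple (intra); destroy_tuple (inter);  /* remaining tuples omitted */
  close_measure (handle);
}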


Result
If the parameter values are correct the operator fuzzy_measure_pairs returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
fuzzy_measure_pairs is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairing, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology

T_fuzzy_measure_pos ( const Hobject Image, const Htuple MeasureHandle,


const Htuple Sigma, const Htuple AmpThresh, const Htuple FuzzyThresh,
const Htuple Transition, Htuple *RowEdge, Htuple *ColumnEdge,
Htuple *Amplitude, Htuple *FuzzyScore, Htuple *Distance )

Extract straight edges perpendicular to a rectangle or an annular arc.


fuzzy_measure_pos extracts straight edges which lie perpendicular to the major axis of a rectangle or an
annular arc. In addition to measure_pos it uses fuzzy member functions to evaluate and select the edges.
The algorithm works by averaging the gray values in “slices” perpendicular to the major axis of the rectangle or
annular arc in order to obtain a one-dimensional edge profile. The sampling is done at subpixel positions in the
image Image at integer row and column distances (in the coordinate frame of the rectangle) from the center of the
rectangle. Since this involves some calculations which can be used repeatedly in several measurements, the opera-
tor gen_measure_rectangle2 is used to perform these calculations only once, thus increasing the speed of
fuzzy_measure_pos significantly. Since there is a trade-off between accuracy and speed in the subpixel calcu-
lations of the gray values, and thus in the accuracy of the extracted edge positions, different interpolation schemes
can be selected in gen_measure_rectangle2. (The interpolation only influences rectangles not aligned with
the image axes and annular arcs.) The measure object generated with gen_measure_rectangle2 is passed
in MeasureHandle.
After the one-dimensional edge profile has been calculated, subpixel edge locations are computed by convolving
the profile with the derivatives of a Gaussian smoothing kernel of standard deviation Sigma. Salient edges can be
selected with the parameter AmpThresh, which constitutes a threshold on the amplitude, i.e., the absolute value of
the first derivative of the edge. Additionally, it is possible to select only positive edges, i.e., edges which constitute
a dark-to-light transition in the direction of the major axis of the rectangle (Transition = ’positive’), only
negative edges, i.e., light-to-dark transitions (Transition = ’negative’), or both types of edges (Transition
= ’all’). Finally, it is possible to select which edge points are returned.
Having extracted subpixel edge locations, features of these edges are evaluated by a corresponding fuzzy function,
which can be set by set_fuzzy_measure. Which edges are selected can be determined with the parameter
FuzzyThresh, which constitutes a threshold on the weight over all fuzzy sets, i.e., the geometric mean of the
weights of the defined sets.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc, in
(RowEdge,ColumnEdge). The corresponding edge amplitudes are returned in Amplitude, the fuzzy scores
in FuzzyScore. In addition, the distance between consecutive edge points is returned in Distance. Here,
Distance[i] corresponds to the distance between Edge[i] and Edge[i+1], i.e., the tuple Distance contains one
element less than the tuples RowEdge and ColumnEdge.
Attention
fuzzy_measure_pos only returns meaningful results if the assumptions that the edges are straight and per-
pendicular to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved
objects, for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible


to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pos ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ AmpThresh ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum fuzzy value.
Default Value : 0.5
Suggested values : FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9}
Typical range of values : 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended Increment : 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Select light/dark or dark/light edges.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the edge point.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the edge point.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the edge (with sign).
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Fuzzy evaluation of the edges.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive edges.
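A minimal call sketch, assuming a 512 × 512 byte image already read, a fuzzy membership function set beforehand with set_fuzzy_measure, and the standard HALCON/C tuple helpers (create_tuple, set_i/set_d/set_s, get_d, length_tuple, destroy_tuple); all numeric values are example values.

#include <stdio.h>
#include "HalconC.h"

/* Sketch: extract single edges along a measure rectangle and print their
   subpixel positions and fuzzy scores. */
void fuzzy_pos_sketch (Hobject image)
{
  Hlong  handle, i, num;
  Htuple mhandle, sigma, amp, fuzzy, transition;
  Htuple row, col, amplitude, score, distance;

  gen_measure_rectangle2 (256.0, 256.0, 0.0, 200.0, 5.0,
                          512, 512, "bilinear", &handle);
  create_tuple (&mhandle, 1);    set_i (mhandle, handle, 0);
  create_tuple (&sigma, 1);      set_d (sigma, 1.0, 0);
  create_tuple (&amp, 1);        set_d (amp, 30.0, 0);
  create_tuple (&fuzzy, 1);      set_d (fuzzy, 0.5, 0);
  create_tuple (&transition, 1); set_s (transition, "all", 0);

  T_fuzzy_measure_pos (image, mhandle, sigma, amp, fuzzy, transition,
                       &row, &col, &amplitude, &score, &distance);

  num = length_tuple (row);
  for (i = 0; i < num; i++)
    printf ("edge %ld: (%.2f, %.2f), score %.2f\n",
            (long)i, get_d (row, i), get_d (col, i), get_d (score, i));

  destroy_tuple (row); destroy_tuple (col); destroy_tuple (amplitude);
  destroy_tuple (score); destroy_tuple (distance);   /* input tuples omitted */
  close_measure (handle);
}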
Result
If the parameter values are correct the operator fuzzy_measure_pos returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised.
Parallelization Information
fuzzy_measure_pos is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, measure_pos


See also
fuzzy_measure_pairing, fuzzy_measure_pairs, measure_pairs
Module
1D Metrology

gen_measure_arc ( double CenterRow, double CenterCol, double Radius,


double AngleStart, double AngleExtent, double AnnulusRadius,
Hlong Width, Hlong Height, const char *Interpolation,
Hlong *MeasureHandle )

T_gen_measure_arc ( const Htuple CenterRow, const Htuple CenterCol,


const Htuple Radius, const Htuple AngleStart,
const Htuple AngleExtent, const Htuple AnnulusRadius,
const Htuple Width, const Htuple Height, const Htuple Interpolation,
Htuple *MeasureHandle )

Prepare the extraction of straight edges perpendicular to an annular arc.


gen_measure_arc prepares the extraction of straight edges which lie perpendicular to an annular arc. Here,
annular arc denotes a circular arc with an associated width. The center of the arc is passed in the parameters
CenterRow and CenterCol, its radius in Radius, the starting angle in AngleStart, and its angular extent
relative to the starting angle in AngleExtent. If AngleExtent > 0, an arc with counterclockwise orientation
is generated, otherwise an arc with clockwise orientation. The radius of the annular arc, i.e., half its width, is
determined by AnnulusRadius.
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_arc. For this, an optimized data structure, a so-called
measure object, is constructed and returned in MeasureHandle. The size of the images in which measurements
will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Parameter

. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double / Hlong


Row coordinate of the center of the arc.
Default Value : 100.0
Suggested values : CenterRow ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ CenterRow ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0


. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double / Hlong


Column coordinate of the center of the arc.
Default Value : 100.0
Suggested values : CenterCol ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ CenterCol ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the arc.
Default Value : 50.0
Suggested values : Radius ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Radius ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double / Hlong
Start angle of the arc in radians.
Default Value : 0.0
Suggested values : AngleStart ∈ {-3.14159, -2.35619, -1.57080, -0.78540, 0.0, 0.78540, 1.57080,
2.35619, 3.14159}
Typical range of values : -3.14159 ≤ AngleStart ≤ 3.14159 (lin)
Minimum Increment : 0.03142
Recommended Increment : 0.31416
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double / Hlong
Angular extent of the arc in radians.
Default Value : 6.28318
Suggested values : AngleExtent ∈ {-6.28318, -5.49779, -4.71239, -3.92699, -3.14159, -2.35619,
-1.57080, -0.78540, 0.78540, 1.57080, 2.35619, 3.14159, 3.92699, 4.71239, 5.49779, 6.28318}
Typical range of values : -6.28318 ≤ AngleExtent ≤ 6.28318 (lin)
Minimum Increment : 0.03142
Recommended Increment : 0.31416
Restriction : AngleExtent ≠ 0.0
. AnnulusRadius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius (half width) of the annulus.
Default Value : 10.0
Suggested values : AnnulusRadius ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ AnnulusRadius ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : AnnulusRadius ≤ Radius
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image to be processed subsequently.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Typical range of values : 0 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image to be processed subsequently.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Typical range of values : 0 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation to be used.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear", "bicubic"}


. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong *


Measure object handle.
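A minimal call sketch using the non-tuple prototype above; the numeric values are arbitrary example values.

#include "HalconC.h"

/* Sketch: prepare measurements perpendicular to a full circle of radius 80
   around (200, 300) with an annulus radius of 10 in a 512 x 512 image. */
void gen_arc_sketch (void)
{
  Hlong measure_handle;
  gen_measure_arc (200.0, 300.0, 80.0, 0.0, 6.28318, 10.0,
                   512, 512, "nearest_neighbor", &measure_handle);
  /* ... measure_pos, measure_pairs, ... using measure_handle ... */
  close_measure (measure_handle);
}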
Result
If the parameter values are correct, the operator gen_measure_arc returns the value H_MSG_TRUE. Other-
wise an exception handling is raised.
Parallelization Information
gen_measure_arc is reentrant and processed without parallelization.
Possible Predecessors
draw_circle
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing
Alternatives
edges_sub_pix
See also
gen_measure_rectangle2
Module
1D Metrology

gen_measure_rectangle2 ( double Row, double Column, double Phi,


double Length1, double Length2, Hlong Width, Hlong Height,
const char *Interpolation, Hlong *MeasureHandle )

T_gen_measure_rectangle2 ( const Htuple Row, const Htuple Column,


const Htuple Phi, const Htuple Length1, const Htuple Length2,
const Htuple Width, const Htuple Height, const Htuple Interpolation,
Htuple *MeasureHandle )

Prepare the extraction of straight edges perpendicular to a rectangle.


gen_measure_rectangle2 prepares the extraction of straight edges which lie perpendicular to the major
axis of a rectangle. The center of the rectangle is passed in the parameters Row and Column, the direction of
the major axis of the rectangle in Phi, and the length of the two axes, i.e., half the diameter of the rectangle, in
Length1 and Length2.
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_rectangle2. For this, an optimized data structure,
a so-called measure object, is constructed and returned in MeasureHandle. The size of the images in which
measurements will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.


Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; double / Hlong
Row coordinate of the center of the rectangle.
Default Value : 50.0
Suggested values : Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Row ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; double / Hlong
Column coordinate of the center of the rectangle.
Default Value : 100.0
Suggested values : Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Column ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; double / Hlong
Angle of longitudinal axis of the rectangle to horizontal (radians).
Default Value : 0.0
Suggested values : Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Typical range of values : -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum Increment : 0.001
Recommended Increment : 0.1
Restriction : (−pi < Phi) ∧ (Phi ≤ pi)
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; double / Hlong
Half width of the rectangle.
Default Value : 200.0
Suggested values : Length1 ∈ {3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0}
Typical range of values : 0.0 ≤ Length1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; double / Hlong
Half height of the rectangle.
Default Value : 100.0
Suggested values : Length2 ∈ {1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0}
Typical range of values : 0.0 ≤ Length2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Length2 ≤ Length1
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image to be processed subsequently.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Typical range of values : 0 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image to be processed subsequently.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Typical range of values : 0 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation to be used.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear", "bicubic"}
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong *
Measure object handle.
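A minimal call sketch using the non-tuple prototype above; the numeric values are arbitrary example values.

#include "HalconC.h"

/* Sketch: prepare measurements along a rectangle of half width 100 and half
   height 10, rotated by 0.3 rad, in a 512 x 512 image. */
void gen_rect_sketch (void)
{
  Hlong measure_handle;
  gen_measure_rectangle2 (240.0, 320.0, 0.3, 100.0, 10.0,
                          512, 512, "bilinear", &measure_handle);
  /* ... measure_pos, measure_pairs, measure_thresh, ... */
  close_measure (measure_handle);
}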


Result
If the parameter values are correct the operator gen_measure_rectangle2 returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
gen_measure_rectangle2 is reentrant and processed without parallelization.
Possible Predecessors
draw_rectangle2
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
edges_sub_pix
See also
gen_measure_arc
Module
1D Metrology

T_measure_pairs ( const Hobject Image, const Htuple MeasureHandle,


const Htuple Sigma, const Htuple Threshold, const Htuple Transition,
const Htuple Select, Htuple *RowEdgeFirst, Htuple *ColumnEdgeFirst,
Htuple *AmplitudeFirst, Htuple *RowEdgeSecond,
Htuple *ColumnEdgeSecond, Htuple *AmplitudeSecond,
Htuple *IntraDistance, Htuple *InterDistance )

Extract straight edge pairs perpendicular to a rectangle or annular arc.


measure_pairs serves to extract straight edge pairs which lie perpendicular to the major axis of a rectangle or
annular arc.
The extraction algorithm is identical to measure_pos. In addition the edges are grouped to pairs: If
Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of
the rectangle are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the corresponding edges
with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond. If Transition =
’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge defines the transition
for RowEdgeFirst and ColumnEdgeFirst. I.e., dependent on the positioning of the measure object, edge
pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are returned. This is suited,
e.g., to measure objects with different brightness relative to the background.
If more than one consecutive edge with the same transition is found, the first one is used as a pair element. This
behavior may cause problems in applications in which the threshold Threshold cannot be selected high enough
to suppress consecutive edges of the same transition. For these applications, a second pairing mode exists that only
selects the respective strongest edges of a sequence of consecutive rising and falling edges. This mode is selected
by appending ’_strongest’ to any of the above modes for Transition, e.g., ’negative_strongest’. Finally, it is
possible to select which edge pairs are returned. If Select is set to ’all’, all edge pairs are returned. If it is set to
’first’, only the first of the extracted edge pairs is returned, while if it is set to ’last’, only the last one is returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle. The corresponding
edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond. In addition, the distance between
each edge pair is returned in IntraDistance and the distance between consecutive edge pairs is returned
in InterDistance. Here, IntraDistance[i] corresponds to the distance between EdgeFirst[i] and EdgeSec-
ond[i], while InterDistance[i] corresponds to the distance between EdgeSecond[i] and EdgeFirst[i+1], i.e., the
tuple InterDistance contains one element less than the tuples of the edge pairs.
Attention
measure_pairs only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).


It should be kept in mind that measure_pairs ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2


Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of gray value transition that determines how edges are grouped to edge pairs.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative", "all_strongest", "positive_strongest",
"negative_strongest"}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of edge pairs.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last"}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the center of the first edge.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the center of the first edge.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the center of the second edge.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the center of the second edge.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the second edge (with sign).
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive edge pairs.
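A minimal call sketch, assuming a 512 × 512 byte image already read and the standard HALCON/C tuple helpers (create_tuple, set_i/set_d/set_s, get_d, length_tuple, destroy_tuple); it measures dark structures on a brighter background and prints the width of each edge pair.

#include <stdio.h>
#include "HalconC.h"

/* Sketch: group edges to pairs along a measure rectangle (Transition =
   'negative', i.e. light-to-dark followed by dark-to-light edges). */
void measure_pairs_sketch (Hobject image)
{
  Hlong  handle, i, num;
  Htuple mhandle, sigma, threshold, transition, select;
  Htuple rf, cf, a1, rs, cs, a2, intra, inter;

  gen_measure_rectangle2 (256.0, 256.0, 0.0, 200.0, 5.0,
                          512, 512, "nearest_neighbor", &handle);
  create_tuple (&mhandle, 1);    set_i (mhandle, handle, 0);
  create_tuple (&sigma, 1);      set_d (sigma, 1.0, 0);
  create_tuple (&threshold, 1);  set_d (threshold, 30.0, 0);
  create_tuple (&transition, 1); set_s (transition, "negative", 0);
  create_tuple (&select, 1);     set_s (select, "all", 0);

  T_measure_pairs (image, mhandle, sigma, threshold, transition, select,
                   &rf, &cf, &a1, &rs, &cs, &a2, &intra, &inter);

  num = length_tuple (intra);
  for (i = 0; i < num; i++)
    printf ("pair %ld: width %.2f pixels\n", (long)i, get_d (intra, i));

  destroy_tuple (intra); destroy_tuple (inter);  /* remaining tuples omitted */
  close_measure (handle);
}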
Result
If the parameter values are correct the operator measure_pairs returns the value H_MSG_TRUE. Otherwise
an exception handling is raised.
Parallelization Information
measure_pairs is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2


Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, fuzzy_measure_pairing
See also
measure_pos, fuzzy_measure_pos
Module
1D Metrology

T_measure_pos ( const Hobject Image, const Htuple MeasureHandle,


const Htuple Sigma, const Htuple Threshold, const Htuple Transition,
const Htuple Select, Htuple *RowEdge, Htuple *ColumnEdge,
Htuple *Amplitude, Htuple *Distance )

Extract straight edges perpendicular to a rectangle or annular arc.


measure_pos extracts straight edges which lie perpendicular to the major axis of a rectangle or annular arc.
The algorithm works by averaging the gray values in “slices” perpendicular to the major axis of the rectangle or
annular arc in order to obtain a one-dimensional edge profile. The sampling is done at subpixel positions in the
image Image at integer row and column distances (in the coordinate frame of the rectangle) from the center of the
rectangle. Since this involves some calculations which can be used repeatedly in several measurements, the operator
gen_measure_rectangle2 or gen_measure_arc is used to perform these calculations only once, thus
increasing the speed of measure_pos significantly. Since there is a trade-off between accuracy and speed in
the subpixel calculations of the gray values, and thus in the accuracy of the extracted edge positions, different
interpolation schemes can be selected in gen_measure_rectangle2. (The interpolation only influences
rectangles not aligned with the image axes.) The measure object generated with gen_measure_rectangle2
is passed in MeasureHandle.
After the one-dimensional edge profile has been calculated, subpixel edge locations are computed by convolving
the profile with the derivatives of a Gaussian smoothing kernel of standard deviation Sigma. Salient edges can
be selected with the parameter Threshold, which constitutes a threshold on the amplitude, i.e., the absolute
value of the first derivative of the edge. Additionally, it is possible to select only positive edges, i.e., edges which
constitute a dark-to-light transition in the direction of the major axis of the rectangle or the arc (Transition =
’positive’), only negative edges, i.e., light-to-dark transitions (Transition = ’negative’), or both types of edges
(Transition = ’all’). Finally, it is possible to select which edge points are returned. If Select is set to ’all’,
all edge points are returned. If it is set to ’first’, only the first of the extracted edge points is returned, while if it is
set to ’last’, only the last one is returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle or arc in
(RowEdge,ColumnEdge). The corresponding edge amplitudes are returned in Amplitude. In addition, the
distance between consecutive edge points is returned in Distance. Here, Distance[i] corresponds to the distance
between Edge[i] and Edge[i+1], i.e., the tuple Distance contains one element less than the tuples RowEdge and
ColumnEdge.
Attention
measure_pos only returns meaningful results if the assumptions that the edges are straight and perpendicular to
the major axis of the rectangle or arc are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle or arc is as close to perpendicular as possible
to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that measure_pos ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.


. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double


Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 0.5
Recommended Increment : 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Light/dark or dark/light edge.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of end points.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last"}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the center of the edge.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the center of the edge.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the edge (with sign).
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive edges.
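A minimal call sketch, assuming a 512 × 512 byte image already read and the standard HALCON/C tuple helpers (create_tuple, set_i/set_d/set_s, get_d, length_tuple, destroy_tuple); all numeric values are example values.

#include <stdio.h>
#include "HalconC.h"

/* Sketch: extract all edges along a horizontal measure rectangle and print
   their subpixel positions and amplitudes. */
void measure_pos_sketch (Hobject image)
{
  Hlong  handle, i, num;
  Htuple mhandle, sigma, threshold, transition, select;
  Htuple row, col, amplitude, distance;

  gen_measure_rectangle2 (256.0, 256.0, 0.0, 200.0, 5.0,
                          512, 512, "nearest_neighbor", &handle);
  create_tuple (&mhandle, 1);    set_i (mhandle, handle, 0);
  create_tuple (&sigma, 1);      set_d (sigma, 1.0, 0);
  create_tuple (&threshold, 1);  set_d (threshold, 30.0, 0);
  create_tuple (&transition, 1); set_s (transition, "all", 0);
  create_tuple (&select, 1);     set_s (select, "all", 0);

  T_measure_pos (image, mhandle, sigma, threshold, transition, select,
                 &row, &col, &amplitude, &distance);

  num = length_tuple (row);
  for (i = 0; i < num; i++)
    printf ("edge %ld: (%.2f, %.2f), amplitude %.2f\n",
            (long)i, get_d (row, i), get_d (col, i), get_d (amplitude, i));

  destroy_tuple (row); destroy_tuple (col); destroy_tuple (amplitude);
  destroy_tuple (distance);                       /* input tuples omitted */
  close_measure (handle);
}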
Result
If the parameter values are correct the operator measure_pos returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
measure_pos is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pos
See also
measure_pairs, fuzzy_measure_pairs, fuzzy_measure_pairing
Module
1D Metrology

T_measure_projection ( const Hobject Image,


const Htuple MeasureHandle, Htuple *GrayValues )

Extract a gray value profile perpendicular to a rectangle or annular arc.


measure_projection extracts a one-dimensional gray value profile perpendicular to a rectangle or annular
arc. This is done by averaging the gray values in “slices” perpendicular to the major axis of the rectangle or
arc. The sampling is done at subpixel positions in the image Image at integer row and column distances (in the


coordinate frame of the rectangle) from the center of the rectangle. Since this involves some calculations which
can be used repeatedly in several projections, the operator gen_measure_rectangle2 is used to perform
these calculations only once, thus increasing the speed of measure_projection significantly. Since there
is a trade-off between accuracy and speed in the subpixel calculations of the gray values, different interpolation
schemes can be selected in gen_measure_rectangle2 (the interpolation only influences rectangles not
aligned with the image axes). The measure object generated with gen_measure_rectangle2 is passed in
MeasureHandle.
Attention
It should be kept in mind that measure_projection ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. GrayValues (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
Gray value profile.
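The following example is not part of the original operator description; it sketches a typical call sequence, in
which the measure rectangle parameters and the interpolation mode are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: average the gray values perpendicular to an (assumed) rectangle */
gen_measure_rectangle2 (300, 200, 0, 150, 20, 512, 512, 'nearest_neighbor',
                        MeasureHandle)
measure_projection (Image, MeasureHandle, GrayValues)
close_measure (MeasureHandle)
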
Result
If the parameter values are correct the operator measure_projection returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
measure_projection is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
gray_projections
Module
1D Metrology

T_measure_thresh ( const Hobject Image, const Htuple MeasureHandle,
                   const Htuple Sigma, const Htuple Threshold, const Htuple Select,
                   Htuple *RowThresh, Htuple *ColumnThresh, Htuple *Distance )

Extracting points with a particular gray value along a rectangle or an annular arc.
measure_thresh extracts points for which the gray value within a one-dimensional gray value profile is equal
to the specified threshold Threshold. The gray value profile is projected onto the major axis of the measure
rectangle which is passed with the parameter MeasureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator results in RowThresh and ColumnThresh.
If the gray value profile intersects the threshold line several times, the parameter Select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last) or ’all’. For the last two cases
Distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:

1. The segments are perpendicular to the major axis of the rectangle,


2. they have an integer distance to the center of the rectangle,
3. the rectangle bounds the segments.


For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image Image is in general sampled at subpixel positions.
Since this involves some calculations which can be used repeatedly in several projections, the operator
gen_measure_rectangle2 is used to perform these calculations only once in advance. Here, the measure
object MeasureHandle is generated and different interpolation schemes can be selected.
Attention
measure_thresh only returns meaningful results if the assumptions that the edges are straight and perpendicu-
lar to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_thresh ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Threshold.
Default Value : 128.0
Typical range of values : 0 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 0.5
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of points.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last", "first_last"}
. RowThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of points with threshold value.
. ColumnThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of points with threshold value.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive points.
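The following example is not part of the original operator description; the measure rectangle and the threshold
value are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: extract all positions where the smoothed profile equals 128 */
gen_measure_rectangle2 (50, 100, 0, 200, 10, 512, 512, 'nearest_neighbor',
                        MeasureHandle)
measure_thresh (Image, MeasureHandle, 1.0, 128.0, 'all', RowThresh,
                ColumnThresh, Distance)
close_measure (MeasureHandle)
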
Result
If the parameter values are correct the operator measure_thresh returns the value H_MSG_TRUE. Otherwise,
an exception handling is raised.
Parallelization Information
measure_thresh is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
measure_pos, edges_sub_pix, measure_pairs
Module
1D Metrology

reset_fuzzy_measure ( Hlong MeasureHandle, const char *SetType )


T_reset_fuzzy_measure ( const Htuple MeasureHandle,
const Htuple SetType )

Reset a fuzzy member function.


reset_fuzzy_measure discards a fuzzy member function of the fuzzy set SetType. This member function
should have been set by set_fuzzy_measure before.
Parameter
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Selection of the fuzzy set.
Default Value : "contrast"
List of values : SetType ∈ {"position", "position_pair", "size", "gray", "contrast"}
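The following example is not part of the original operator description; the fuzzy function values used for the
'contrast' set are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: define and later discard a fuzzy function for the 'contrast' set */
create_funct_1d_pairs ([20.0,40.0], [0.0,1.0], ContrastFunction)
set_fuzzy_measure (MeasureHandle, 'contrast', ContrastFunction)
reset_fuzzy_measure (MeasureHandle, 'contrast')
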
Parallelization Information
reset_fuzzy_measure is reentrant and processed without parallelization.
Possible Predecessors
set_fuzzy_measure
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
See also
set_fuzzy_measure, set_fuzzy_measure_norm_pair
Module
1D Metrology

T_set_fuzzy_measure ( const Htuple MeasureHandle, const Htuple SetType,
                      const Htuple Function )

Specify a fuzzy member function.


set_fuzzy_measure specifies a fuzzy member function passed in Function. The specified fuzzy functions
enable fuzzy_measure_pos and fuzzy_measure_pairs / fuzzy_measure_pairing to evaluate
and select the detected edge candidates. For this purpose, weighting characteristics for different edge features can
be defined by one function each. Such a specified feature is called fuzzy set. Specifying no function for a fuzzy
set means not to use this feature for the final edge evaluation. Setting a second fuzzy function to a set means to
discard the first defined function and replace it by the second one. A previously defined fuzzy member function
can be discarded completely by reset_fuzzy_measure.
Functions for five different fuzzy set types selected by the SetType parameter can be defined, the sub types of a
set being mutually exclusive:

• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of the
measure object, generated by gen_measure_arc or gen_measure_rectangle2. The reference
point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference point to the
middle or the end of the one-dimensional gray value profile instead. If the fuzzy position evaluation depends
on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’ sets the reference
point at the position of the first/last extracted edge. When extracting edge pairs the position of a pair is
referenced by the geometric average of the fuzzy position scores of both edges.


• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point of
the measure object. The position of a pair is defined by the center point between both edges. The ob-
ject’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’posi-
tion_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs/
fuzzy_measure_pairing.
• ’size’ denotes a fuzzy set that evaluates the normed distance of the two edges of a pair in pixels. This set
is only used by fuzzy_measure_pairs/ fuzzy_measure_pairing. Specifying an upper bound
for the size by terminating the member function with a corresponding fuzzy value of 0.0 will speed up
fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible pairs need to be con-
sidered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by fuzzy_measure_pairs / fuzzy_measure_pairing.

A fuzzy member function is defined as a piecewise linear function by at least two pairs of values, sorted in an
ascending order by their x value. The x values represent the edge feature and must lie within the parameter space
of the set type, i.e., in case of ’contrast’ and ’gray’ feature and, e.g., byte images within the range 0.0 ≤ x ≤
255.0. In case of ’size’ x has to satisfy 0.0 ≤ x whereas in case of ’position’ x can be any real number. The
y values of the fuzzy function represent the weight of the corresponding feature value and have to satisfy the
range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined by the smallest and the greatest x value, the
y values of the interval borders are continued constantly. Such fuzzy member functions can be generated by
create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameter

. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of the fuzzy set.
Default Value : "contrast"
List of values : SetType ∈ {"position", "position_center", "position_end", "position_first_edge",
"position_last_edge", "position_pair_center", "position_pair_end", "position_first_pair", "position_last_pair",
"size", "gray", "contrast"}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Fuzzy member function.
Example (Syntax: HDevelop)

/* how to use a fuzzy function */


...
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor',
                        MeasureHandle)
/* create a generalized fuzzy function to evaluate edge pairs
 * (30% uncertainty). */
create_funct_1d_pairs ([0.7,1.0,1.3], [0.0,1.0,0.0], SizeFunction)
/* and transform it to the expected size of 13.45 pixels */
transform_funct_1d (SizeFunction, [1.0,0.0,13.45,0.0], TransformedFunction)
set_fuzzy_measure (MeasureHandle, 'size', TransformedFunction)
fuzzy_measure_pairs (Image, MeasureHandle, 1, 30, 0.5, 'all', RowEdgeFirst,
                     ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond,
                     ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter,
                     ColumnEdgeCenter, FuzzyScore, IntraDistance,
                     InterDistance)

Parallelization Information
set_fuzzy_measure is reentrant and processed without parallelization.


Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs,
transform_funct_1d
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
Alternatives
set_fuzzy_measure_norm_pair
See also
reset_fuzzy_measure
Module
1D Metrology

T_set_fuzzy_measure_norm_pair ( const Htuple MeasureHandle, const Htuple PairSize,
                                const Htuple SetType, const Htuple Function )

Specify a normalized fuzzy member function for edge pairs.


set_fuzzy_measure_norm_pair specifies a normalized fuzzy member function passed in Function.
The specified fuzzy functions enable fuzzy_measure_pos, fuzzy_measure_pairs and
fuzzy_measure_pairing to evaluate and select the detected candidates of edges and edge pairs. For this
purpose, weighting characteristics for different edge features can be defined by one function each. Such a specified
feature is called fuzzy set. Specifying no function for a fuzzy set means not to use this feature for the final edge
evaluation. Setting a second fuzzy function to a fuzzy set means to discard the first defined function and replace it
by the second one. In contrast to set_fuzzy_measure, the abscissa x of these member functions must be
defined relative to the desired size s of the edge pairs (passed in PairSize). This enables a generalized usage
of the defined functions. A previously defined normalized fuzzy member function can be discarded completely by
reset_fuzzy_measure.
Functions for three different fuzzy set types selected by the SetType parameter can be defined, the sub types of
a set being mutually exclusive:

• ’size’ denotes a fuzzy set that evaluates the normalized distance of the two edges of a pair in pixels, where d
denotes the distance of the two edges and s the desired pair size passed in PairSize:

      x = d / s   (x ≥ 0) .

Specifying an upper bound x_max for the size by terminating the member function with a corresponding
fuzzy value of 0.0 will speed up fuzzy_measure_pairs / fuzzy_measure_pairing because not
all possible pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size
difference by ’size_diff’

      x = (s − d) / s   (x ≤ 1)

and an absolute normalized size difference by ’size_abs_diff’

      x = |s − d| / s   (0 ≤ x ≤ 1) .
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by gen_measure_arc or gen_measure_rectangle2:
      x = p / s .

The reference point is located at the beginning, whereas ’position_center’ or ’position_end’ sets the reference
point to the middle or the end of the one-dimensional gray value profile instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.


• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both edges. The ob-
ject’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’posi-
tion_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs/
fuzzy_measure_pairing.

A normalized fuzzy member function is defined as a piecewise linear function by at least two pairs of values,
sorted in an ascending order by their x value. The y values of the fuzzy function represent the weight of the
corresponding feature value and must satisfy the range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined
by the smallest and the greatest x value, the y values of the interval borders are continued constantly. Such fuzzy
member functions can be generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameter

. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. PairSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Favored width of edge pairs.
Default Value : 10.0
List of values : PairSize ∈ {4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0}
Typical range of values : 0.0 ≤ PairSize
Minimum Increment : 0.1
Recommended Increment : 1.0
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of the fuzzy set.
Default Value : "size_abs_diff"
List of values : SetType ∈ {"size", "size_diff", "size_abs_diff", "position", "position_center",
"position_end", "position_first_edge", "position_last_edge", "position_pair_center", "position_pair_end",
"position_first_pair", "position_last_pair"}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Fuzzy member function.
Example (Syntax: HDevelop)

/* how to use a fuzzy function */


...
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor',
                        MeasureHandle)
/* create a generalized fuzzy function to evaluate edge pairs
 * (30% uncertainty). */
create_funct_1d_pairs ([0.7,1.0,1.3], [0.0,1.0,0.0], SizeFunction)
/* and set it for an expected pair size of 13.45 pixels */
set_fuzzy_measure_norm_pair (MeasureHandle, 13.45, 'size', SizeFunction)
fuzzy_measure_pairs (Image, MeasureHandle, 1, 30, 0.5, 'all', RowEdgeFirst,
                     ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond,
                     ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter,
                     ColumnEdgeCenter, FuzzyScore, IntraDistance,
                     InterDistance)

Parallelization Information
set_fuzzy_measure_norm_pair is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs
Possible Successors
fuzzy_measure_pairs, fuzzy_measure_pairing

Alternatives
transform_funct_1d, set_fuzzy_measure
See also
reset_fuzzy_measure
Module
1D Metrology

translate_measure ( Hlong MeasureHandle, double Row, double Column )


T_translate_measure ( const Htuple MeasureHandle, const Htuple Row,
const Htuple Column )

Translate a measure object.


translate_measure translates the reference point of the measure object given by MeasureHandle to the
point (Row,Column). If the measure object and the translated measure object lie completely within the image,
the measure object is shifted to the new reference point in an efficient manner. Otherwise, the measure object is
generated anew with gen_measure_rectangle2 or gen_measure_arc using the parameters that were
specified when the measure object was created and the new reference point.
Parameter

. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong
Measure object handle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double / Hlong
Row coordinate of the new reference point.
Default Value : 50.0
Suggested values : Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Row ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double / Hlong
Column coordinate of the new reference point.
Default Value : 100.0
Suggested values : Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Column ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
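The following example is not part of the original operator description; the coordinates and the measurement
parameters are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: reuse one measure object at a second inspection position */
gen_measure_rectangle2 (50, 100, 0, 200, 10, 512, 512, 'nearest_neighbor',
                        MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdge,
             ColumnEdge, Amplitude, Distance)
translate_measure (MeasureHandle, 300, 240)
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdge2,
             ColumnEdge2, Amplitude2, Distance2)
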
Result
If the parameter values are correct the operator translate_measure returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised.
Parallelization Information
translate_measure is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
gen_measure_rectangle2, gen_measure_arc
See also
close_measure
Module
1D Metrology

15.15 OCV
close_all_ocvs ( )
T_close_all_ocvs ( )

Clear all OCV tools.


close_all_ocvs closes all OCV tools which have been opened using create_ocv_proj or read_ocv.
All handles are invalid after this call.
Attention
close_all_ocvs exists solely for the purpose of implementing the “reset program” functionality in HDevelop.
close_all_ocvs must not be used in any application.
Result
close_all_ocvs returns always H_MSG_TRUE.
Parallelization Information
close_all_ocvs is processed completely exclusively without parallelization.
Possible Predecessors
read_ocv, create_ocv_proj
Alternatives
close_ocv
Module
OCR/OCV

close_ocv ( Hlong OCVHandle )


T_close_ocv ( const Htuple OCVHandle )

Clear an OCV tool.


close_ocv closes an open OCV tool and frees the memory. The OCV tool has been created using
create_ocv_proj or read_ocv. After this call, the handle is no longer valid.
Parameter
. OCVHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; Hlong
Handle of the OCV tool which has to be freed.
Example (Syntax: C++)

read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);

Result
close_ocv returns H_MSG_TRUE, if the handle is valid. Otherwise, an exception handling is raised.
Parallelization Information
close_ocv is processed completely exclusively without parallelization.
Possible Predecessors
read_ocv, create_ocv_proj

See also
close_ocr
Module
OCR/OCV

create_ocv_proj ( const char *PatternNames, Hlong *OCVHandle )


T_create_ocv_proj ( const Htuple PatternNames, Htuple *OCVHandle )

Create a new OCV tool based on gray value projections.


create_ocv_proj creates a new OCV tool. This tool will be used to train good-patterns for the optical char-
acter verification. The training is done using the operator traind_ocv_proj. Thus traind_ocv_proj is
normally called after create_ocv_proj.
The pattern comparison is based on the gray projections: For every training pattern the horizontal and vertical gray
projections are calculated by summing up the gray values along the rows and columns inside the region of the
pattern. This operation is applied to the training patterns and the test patterns. For the training patterns the result
is stored inside the OCV tool to save runtime while comparing patterns. The OCV is done by comparing the
corresponding projections. The Quality is the similarity of the projections.
Input for create_ocv_proj are the names of the patterns (PatternNames) which have to be trained. The
number and the names can be chosen arbitrarily. In most cases only one pattern will be trained, thus only one name
has to be specified. The names will be used when doing the OCV (do_ocv_simple). It is possible to specify
more names than actually used. These might be trained later.
To close the OCV tool, i.e. to free the memory, the operator close_ocv is called.
Parameter

. PatternNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
List of names for patterns to be trained.
Default Value : "a"
. OCVHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; (Htuple .) Hlong *
Handle of the created OCV tool.
Example (Syntax: C++)

create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");

Result
create_ocv_proj returns H_MSG_TRUE, if the parameters are correct. Otherwise, an exception handling is
raised.
Parallelization Information
create_ocv_proj is processed completely exclusively without parallelization.
Possible Successors
traind_ocv_proj, write_ocv, close_ocv
Alternatives
read_ocv
See also
create_ocr_class_box
Module
OCR/OCV

do_ocv_simple ( const Hobject Pattern, Hlong OCVHandle,
                const char *PatternName, const char *AdaptPos, const char *AdaptSize,
                const char *AdaptAngle, const char *AdaptGray, double Threshold,
                double *Quality )

T_do_ocv_simple ( const Hobject Pattern, const Htuple OCVHandle,
                  const Htuple PatternName, const Htuple AdaptPos,
                  const Htuple AdaptSize, const Htuple AdaptAngle,
                  const Htuple AdaptGray, const Htuple Threshold, Htuple *Quality )

Verification of a pattern using an OCV tool.


do_ocv_simple evaluates the pattern in (Pattern). Before the evaluation, the good-pattern has to be trained by
using the operator traind_ocv_proj. Both patterns should have roughly the same (relative) extent and shape.
To specify which of the trained patterns is used as reference its name is specified in PatternName. The next four
parameters influence the automatic adaption: AdaptPos and AdaptSize refer to the geometry of the pattern.
AdaptPos specifies whether a shift of the position will be adapted automatically. AdaptSize is used to adapt
to changes in the size of the pattern. AdaptAngle is not yet implemented. The parameter AdaptGray controls
the adaption to changes of the gray values. This comprises additive and multiplicative changes of the intensity.
The parameter Threshold specifies the minimum difference of the gray values to be treated as an error. In this
case the percentage of wrong pixels is returned. If the value is below 0 the sum of all errors normalized with
respect to the size is returned.
The result of the operator is the Quality of the pattern with a value between 0 and 1. The value 1 corresponds to
a pattern with no faults. The value 0 corresponds to a very big fault.
Parameter
. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Characters to be verified.
. OCVHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; (Htuple .) Hlong
Handle of the OCV tool.
. PatternName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Name of the character.
Default Value : "a"
. AdaptPos (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Adaption to vertical and horizontal translation.
Default Value : "true"
List of values : AdaptPos ∈ {"true", "false"}
. AdaptSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Adaption to vertical and horizontal scaling of the size.
Default Value : "true"
List of values : AdaptSize ∈ {"true", "false"}
. AdaptAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Adaption to changes of the orientation (not yet implemented).
Default Value : "false"
List of values : AdaptAngle ∈ {"false"}
. AdaptGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Adaption to additive and scaling gray value changes.
Default Value : "true"
List of values : AdaptGray ∈ {"true", "false"}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Minimum difference between objects.
Default Value : 10
Suggested values : Threshold ∈ {-1, 0, 1, 5, 10, 15, 20, 30, 40, 50, 60, 80, 100, 150}
. Quality (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Evaluation of the character.
Typical range of values : 0.0 ≤ Quality ≤ 1.0
Result
do_ocv_simple returns H_MSG_TRUE, if the handle and the characters are correct. Otherwise, an exception
handling is raised.

Parallelization Information
do_ocv_simple is reentrant and processed without parallelization.
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocv, threshold, connection,
select_shape
Possible Successors
close_ocv
See also
create_ocv_proj
Module
OCR/OCV

read_ocv ( const char *FileName, Hlong *OCVHandle )


T_read_ocv ( const Htuple FileName, Htuple *OCVHandle )

Reading an OCV tool from file.


read_ocv reads an OCV tool from file. The tool will contain the same information that it contained when saving
it with write_ocv. After reading the tool the training can be completed for those patterns which have not been
trained so far. Otherwise a pattern comparison can be applied directly by calling do_ocv_simple.
As extension ’.ocv’ is used. If this extension is not given with the file name it will be added automatically.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the file which has to be read.
Default Value : "test_ocv"
. OCVHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; Hlong *
Handle of read OCV tool.
Example (Syntax: C++)

read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);

Result
read_ocv returns H_MSG_TRUE, if the file is correct. Otherwise, an exception handling is raised.
Parallelization Information
read_ocv is processed completely exclusively without parallelization.
Possible Predecessors
write_ocv
Possible Successors
do_ocv_simple, close_ocv
See also
read_ocr
Module
OCR/OCV

traind_ocv_proj ( const Hobject Pattern, Hlong OCVHandle, const char *Name,
                  const char *Mode )

T_traind_ocv_proj ( const Hobject Pattern, const Htuple OCVHandle,
                    const Htuple Name, const Htuple Mode )

Training of an OCV tool.


traind_ocv_proj trains patterns for an OCV tool that has been created using the operators
create_ocv_proj or read_ocv. For this training, one or multiple patterns are presented to the system. Such
a pattern consists of an image with a reduced domain (ROI) for the area of the pattern. Note that the pattern should
not only contain foreground pixels (e.g. dark pixels of a character) but also background pixels. This can be imple-
mented e.g. by the smallest surrounding rectangle of the pattern. Without this context an evaluation of the pattern
is not possible.
If more than one pattern has to be trained this can be achieved by multiple calls (one for each pattern) or by calling
traind_ocv_proj once with all patterns and a tuple of the corresponding names. The result will be in both
cases the same. However using multiple calls will normally result in a longer execution time than using one call
with all patterns.
Parameter

. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Pattern to be trained.
. OCVHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; (Htuple .) Hlong
Handle of the OCV tool to be trained.
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Name(s) of the object(s) to analyse.
Default Value : "a"
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Mode for training (only one mode implemented).
Default Value : "single"
List of values : Mode ∈ {"single"}
Example (Syntax: C++)

create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");

Result
traind_ocv_proj returns H_MSG_TRUE, if the handle and the training pattern(s) are correct. Otherwise, an
exception handling is raised.
Parallelization Information
traind_ocv_proj is processed completely exclusively without parallelization.
Possible Predecessors
write_ocr_trainf, create_ocv_proj, read_ocv, threshold, connection,
select_shape
Possible Successors
close_ocv
See also
traind_ocr_class_box
Module
OCR/OCV

write_ocv ( Hlong OCVHandle, const char *FileName )


T_write_ocv ( const Htuple OCVHandle, const Htuple FileName )

Saving an OCV tool to file.


write_ocv writes an OCV tool to file. This can be used to save the result of a training ( traind_ocv_proj).
The whole information contained in the OCV tool is stored in the file. The file can be reloaded afterwards using
the operator read_ocv.
As file extension ’.ocv’ is used. If this extension is not given with the file name, it will be added automatically.
Parameter
. OCVHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocv ; Hlong
Handle of the OCV tool to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the file where the tool has to be saved.
Default Value : "test_ocv"
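The following example is not part of the original operator description; the region used for training and the file
name are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: train an OCV tool and save it for later use with read_ocv */
create_ocv_proj ('A', OCVHandle)
reduce_domain (Image, ROI, Sample)
traind_ocv_proj (Sample, OCVHandle, 'A', 'single')
write_ocv (OCVHandle, 'test_ocv')
close_ocv (OCVHandle)
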
Result
write_ocv returns H_MSG_TRUE, if the data is correct and the file can be written. Otherwise, an exception
handling is raised.
Parallelization Information
write_ocv is reentrant and processed without parallelization.
Possible Predecessors
traind_ocv_proj
Possible Successors
close_ocv
See also
write_ocr
Module
OCR/OCV
15.16 Shape-from

depth_from_focus ( const Hobject MultiFocusImage, Hobject *Depth,
                   Hobject *Confidence, const char *Filter, const char *Selection )

T_depth_from_focus ( const Hobject MultiFocusImage, Hobject *Depth,
                     Hobject *Confidence, const Htuple Filter, const Htuple Selection )

Extract depth using multiple focus levels.


The operator depth_from_focus extracts the depth using a focus sequence. The images of the focus sequence
have to be passed as a multi-channel image (MultiFocusImage). The depth for each pixel will be returned in
Depth as the channel number. The parameter Confidence returns a confidence value for each depth estimation:
The larger this value, the higher the confidence of the depth estimation is.
depth_from_focus selects the pixels with the best focus of all focus levels. The method used to extract these
pixels is specified by the parameters Filter and Selection:

’highpass’ The value of the focus is estimated by a highpass filter.


’bandpass’ The value of the focus is estimated by a bandpass filter.
’next_maximum’ To decide which focus level has to be selected, the pixel in the neighborhood with the best confi-
dence is used to determine this value.
’local’ The decision for a focus level is based only on the locally calculated focus values.


Parameter
. MultiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte
Multichannel gray image consisting of multiple focus levels.
. Depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte
Depth image.
. Confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte
Confidence of depth estimation.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Filter used to find sharp pixels.
Default Value : "highpass"
List of values : Filter ∈ {"highpass", "bandpass"}
. Selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Method used to find sharp pixels.
Default Value : "next_maximum"
List of values : Selection ∈ {"next_maximum", "local"}
Example (Syntax: C++)

compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,"highpass","next_maximum");
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
threshold(Confidence,HighConfidence,10,255);
reduce_domain(SharpImage,HighConfidence,ConfidentSharp);

Parallelization Information
depth_from_focus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
compose2, compose3, compose4, add_channels, read_image, read_sequence
Possible Successors
select_grayvalues_from_channels, mean_image, binomial_filter, gauss_image,
threshold
See also
count_channels
Module
3D Metrology

estimate_al_am ( const Hobject Image, double *Albedo, double *Ambient )


T_estimate_al_am ( const Hobject Image, Htuple *Albedo,
Htuple *Ambient )

Estimate the albedo of a surface and the amount of ambient light.


estimate_al_am estimates the Albedo of a surface, i.e. the percentage of light reflected by the surface, and
the amount of ambient light Ambient by using the maximum and minimum gray values of the image.
Attention
It is assumed that the image contains at least one point for which the reflection function assumes its minimum, e.g.,
points in shadows. Furthermore, it is assumed that the image contains at least one point for which the reflection
function assumes its maximum. If this is not the case, wrong values will be estimated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which albedo and ambient are to be estimated.
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Amount of light reflected by the surface.

. Ambient (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Amount of ambient light.
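The following example is not part of the original operator description; the slant and tilt values passed to the
shape-from-shading operator are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: estimate albedo and ambient light, then reconstruct the surface */
estimate_al_am (Image, Albedo, Ambient)
sfs_mod_lr (Image, Height, 45.0, 45.0, Albedo, Ambient)
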
Result
estimate_al_am always returns the value H_MSG_TRUE.
Parallelization Information
estimate_al_am is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology

estimate_sl_al_lr ( const Hobject Image, double *Slant, double *Albedo )

T_estimate_sl_al_lr ( const Hobject Image, Htuple *Slant, Htuple *Albedo )

Estimate the slant of a light source and the albedo of a surface.


estimate_sl_al_lr estimates the Slant of a light source, i.e., the angle between the light source and the
positive z-axis, and the albedo of the surface in the input image Image, i.e. the percentage of light reflected by
the surface, using the algorithm of Lee and Rosenfeld.
Attention
The Albedo is assumed constant for the entire surface depicted in the image.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; (Htuple .) double *
Angle between the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Amount of light reflected by the surface.
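The following example is not part of the original operator description; the amount of ambient light passed to
the reconstruction is an illustrative assumption.
Example (Syntax: HDevelop)

/* sketch: estimate the illumination parameters, then reconstruct the surface */
estimate_sl_al_lr (Image, Slant, Albedo)
estimate_tilt_lr (Image, Tilt)
sfs_mod_lr (Image, Height, Slant, Tilt, Albedo, 0.0)
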
Result
estimate_sl_al_lr always returns the value H_MSG_TRUE.
Parallelization Information
estimate_sl_al_lr is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology

estimate_sl_al_zc ( const Hobject Image, double *Slant, double *Albedo )

T_estimate_sl_al_zc ( const Hobject Image, Htuple *Slant, Htuple *Albedo )

Estimate the slant of a light source and the albedo of a surface.


estimate_sl_al_zc estimates the Slant of a light source, i.e. the angle between the light source and the
positive z-axis, and the albedo of the surface in the input image Image, i.e. the percentage of light reflected by
the surface, using the algorithm of Zheng and Chellappa.
Attention
The Albedo is assumed constant for the entire surface depicted in the image.


Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; (Htuple .) double *
Angle of the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Amount of light reflected by the surface.
Result
estimate_sl_al_zc always returns the value H_MSG_TRUE.
Parallelization Information
estimate_sl_al_zc is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology

estimate_tilt_lr ( const Hobject Image, double *Tilt )


T_estimate_tilt_lr ( const Hobject Image, Htuple *Tilt )

Estimate the tilt of a light source.


estimate_tilt_lr estimates the tilt of a light source, i.e. the angle between the light source and the x-axis
after projection into the xy-plane, from the image Image using the algorithm of Lee and Rosenfeld.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; (Htuple .) double *
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
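The following example is not part of the original operator description; the slant, albedo, and ambient light
values are illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: combine the tilt estimate with assumed illumination parameters */
estimate_tilt_lr (Image, Tilt)
sfs_orig_lr (Image, Height, 45.0, Tilt, 1.0, 0.0)
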
Result
estimate_tilt_lr always returns the value H_MSG_TRUE.
Parallelization Information
estimate_tilt_lr is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology

estimate_tilt_zc ( const Hobject Image, double *Tilt )


T_estimate_tilt_zc ( const Hobject Image, Htuple *Tilt )

Estimate the tilt of a light source.


estimate_tilt_zc estimates the tilt of a light source, i.e. the angle between the light source and the x-axis
after projection into the xy-plane, from the image Image using the algorithm of Zheng and Chellappa.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; (Htuple .) double *
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).


Result
estimate_tilt_zc always returns the value H_MSG_TRUE.
Parallelization Information
estimate_tilt_zc is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology

T_phot_stereo ( const Hobject Images, Hobject *Height, const Htuple Slants,
                const Htuple Tilts )

Reconstruct a surface from at least three gray value images.


phot_stereo reconstructs a surface (i.e., the relative height of each image point) using the algorithm of Wood-
ham from at least three gray value images given by the multi-channel image Images. The light sources cor-
responding to the individual images are given by the parameters Slants and Tilts and are assumed to lie
infinitely far away.
Attention
phot_stereo assumes that the heights are to be extracted on a lattice with step width 1. If this is not the
case, the calculated heights must be multiplied by the step width after the call to phot_stereo. A Cartesian
coordinate system with the origin in the lower left corner of the image is used internally. All given images must
be byte-images. At least three images must be given in a multi-channel image. Slants and Tilts must contain
exactly as many light sources as the number of channels in Images. At least three of the light source directions
must be linearly independent.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte
Shaded input image with at least three channels.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Reconstructed height field.
. Slants (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; Htuple . double / Hlong
Angle between the light sources and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slants ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slants ≤ 180.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Tilts (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; Htuple . double / Hlong
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilts ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilts ≤ 360.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
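The following example is not part of the original operator description; the three light source directions are
illustrative assumptions.
Example (Syntax: HDevelop)

/* sketch: reconstruct a height field from three differently illuminated images */
compose3 (Image1, Image2, Image3, Images)
phot_stereo (Images, Height, [45.0,45.0,45.0], [0.0,120.0,240.0])
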
Result
If all parameters are correct phot_stereo returns the value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
phot_stereo is reentrant and processed without parallelization.
Possible Predecessors
estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr, estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology

select_grayvalues_from_channels ( const Hobject MultichannelImage,
                                  const Hobject IndexImage, Hobject *Selected )

T_select_grayvalues_from_channels ( const Hobject MultichannelImage,
                                    const Hobject IndexImage, Hobject *Selected )

Selection of gray values of a multi-channel image using an index image.


The operator select_grayvalues_from_channels selects gray values from the different channels of
MultichannelImage. The channel number for each pixel is determined from the corresponding pixel value
in IndexImage. If MultichannelImage and IndexImage contain the same number of images, the corre-
sponding images are processed pairwise. Otherwise, IndexImage must contain only a single image. In this
case, the gray value selection is performed for each image of MultichannelImage according to IndexImage.
Parameter

. MultichannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte
Multi-channel gray value image.
. IndexImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .singlechannel-image(-array) ; Hobject : byte
Image, where pixel values are interpreted as channel index.
Number of elements : (IndexImage = MultichannelImage) ∨ (IndexImage = 1)
. Selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte
Resulting image.
Example (Syntax: C++)

compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,"highpass","next_maximum");
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);

Parallelization Information
select_grayvalues_from_channels is reentrant and automatically parallelized (on tuple level, domain
level).
Possible Predecessors
depth_from_focus, mean_image
Possible Successors
disp_image
See also
count_channels
Module
Foundation

sfs_mod_lr ( const Hobject Image, Hobject *Height, double Slant, double Tilt,
             double Albedo, double Ambient )

T_sfs_mod_lr ( const Hobject Image, Hobject *Height, const Htuple Slant,
               const Htuple Tilt, const Htuple Albedo, const Htuple Ambient )

Reconstruct a surface from a gray value image.


sfs_mod_lr reconstructs a surface (i.e. the relative height of each image point) using the modified algorithm of
Lee and Rosenfeld. The surface is reconstructed from the input image Image, and the light source given by the
parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction given
by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of light
reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can be set
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_mod_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied with the step width after the call to sfs_mod_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_mod_lr can only handle
byte-images.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of light reflected by the surface.
Default Value : 1.0
Suggested values : Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Typical range of values : 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Albedo ≥ 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of ambient light.
Default Value : 0.0
Suggested values : Ambient ∈ {0.1, 0.5, 1.0}
Typical range of values : 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Ambient ≥ 0.0
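The following example is not part of the original operator description; it simply uses the documented default
values for the illumination parameters.
Example (Syntax: HDevelop)

/* sketch: reconstruct relative heights using the default illumination values */
sfs_mod_lr (Image, Height, 45.0, 45.0, 1.0, 0.0)
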
Result
If all parameters are correct sfs_mod_lr returns the value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
sfs_mod_lr is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology

sfs_orig_lr ( const Hobject Image, Hobject *Height, double Slant, double Tilt,
              double Albedo, double Ambient )

T_sfs_orig_lr ( const Hobject Image, Hobject *Height, const Htuple Slant,
                const Htuple Tilt, const Htuple Albedo, const Htuple Ambient )

Reconstruct a surface from a gray value image.


sfs_orig_lr reconstructs a surface (i.e. the relative height of each image point) using the original algorithm of
Lee and Rosenfeld. The surface is reconstructed from the input image Image. The light source is to be given by
the parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction
given by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of
light reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can
be set to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment
the image was taken.
Attention
sfs_orig_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied with the step width after the call to sfs_orig_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_orig_lr can only handle
byte-images.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of light reflected by the surface.
Default Value : 1.0
Suggested values : Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Typical range of values : 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Albedo ≥ 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of ambient light.
Default Value : 0.0
Suggested values : Ambient ∈ {0.1, 0.5, 1.0}
Typical range of values : 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Ambient ≥ 0.0
Result
If all parameters are correct sfs_orig_lr returns the value H_MSG_TRUE. Otherwise, an exception is raised.


Parallelization Information
sfs_orig_lr is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology

sfs_pentland ( const Hobject Image, Hobject *Height, double Slant,
               double Tilt, double Albedo, double Ambient )

T_sfs_pentland ( const Hobject Image, Hobject *Height,
                 const Htuple Slant, const Htuple Tilt, const Htuple Albedo,
                 const Htuple Ambient )

Reconstruct a surface from a gray value image.


sfs_pentland reconstructs a surface (i.e. the relative height of each image point) using the algorithm of
Pentland. The surface is reconstructed from the input image Image. The light source must be given by the
parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction given
by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of light
reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can be set
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_pentland assumes that the heights are to be extracted on a lattice with step width 1. If this is not the
case, the calculated heights must be multiplied by the step width after the call to sfs_pentland. A Cartesian
coordinate system with the origin in the lower left corner of the image is used internally. sfs_pentland can
only handle byte-images.
Parameter

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte


Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the positive z-axis (in degrees).
Default Value : 45.0
Suggested values : Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 45.0
Suggested values : Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0


. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong


Amount of light reflected by the surface.
Default Value : 1.0
Suggested values : Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Typical range of values : 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Albedo ≥ 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of ambient light.
Default Value : 0.0
Suggested values : Ambient ∈ {0.1, 0.5, 1.0}
Typical range of values : 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Ambient ≥ 0.0
Result
If all parameters are correct sfs_pentland returns the value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
sfs_pentland is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology

shade_height_field ( const Hobject ImageHeight, Hobject *ImageShade,
                     double Slant, double Tilt, double Albedo, double Ambient,
                     const char *Shadows )

T_shade_height_field ( const Hobject ImageHeight, Hobject *ImageShade,
                       const Htuple Slant, const Htuple Tilt, const Htuple Albedo,
                       const Htuple Ambient, const Htuple Shadows )

Shade a height field.


shade_height_field computes a shaded image from the height field ImageHeight as if the image were
illuminated by an infinitely far away light source. It is assumed that the surface described by the height field has
Lambertian reflection properties determined by Albedo and Ambient. The parameter Shadows determines
whether shadows are to be calculated.
Attention
shade_height_field assumes that the heights are given on a lattice with step width 1. If this is not the
case, the heights must be divided by the step width before the call to shade_height_field. Otherwise, the
derivatives used internally to compute the orientation of the surface will be estimated too steep or too flat. Example:
The height field is given on 100*100 points on the square [0,1]*[0,1]. Then the heights must be divided by 1/100
first. A Cartesian coordinate system with the origin in the lower left corner of the image is used internally.
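For the example above, the scaling before the call can be sketched as follows (HDevelop syntax; HeightField is a placeholder variable, the light source parameters are placeholders as well):

* The height field is given on 100*100 points on the square [0,1]*[0,1],
* i.e., the lattice step width is 1/100.
StepWidth := 0.01
* Divide the heights by the step width before shading.
scale_image (HeightField, HeightScaled, 1.0 / StepWidth, 0)
shade_height_field (HeightScaled, ImageShade, 45.0, 45.0, 1.0, 0.0, 'false')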
Parameter

. ImageHeight (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int4 / real


Height field to be shaded.
. ImageShade (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Shaded image.


. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong


Angle between the light source and the positive z-axis (in degrees).
Default Value : 0.0
Suggested values : Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; double / Hlong
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default Value : 0.0
Suggested values : Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Typical range of values : 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of light reflected by the surface.
Default Value : 1.0
Suggested values : Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Typical range of values : 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Albedo ≥ 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of ambient light.
Default Value : 0.0
Suggested values : Ambient ∈ {0.1, 0.5, 1.0}
Typical range of values : 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Ambient ≥ 0.0
. Shadows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Should shadows be calculated?
Default Value : "false"
Suggested values : Shadows ∈ {"true", "false"}
Result
If all parameters are correct shade_height_field returns the value H_MSG_TRUE. Otherwise, an exception
is raised.
Parallelization Information
shade_height_field is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo
Module
Foundation

15.17 Stereo
T_binocular_calibration ( const Htuple NX, const Htuple NY,
const Htuple NZ, const Htuple NRow1, const Htuple NCol1,
const Htuple NRow2, const Htuple NCol2, const Htuple StartCamParam1,
const Htuple StartCamParam2, const Htuple NStartPose1,
const Htuple NStartPose2, const Htuple EstimateParams,
Htuple *CamParam1, Htuple *CamParam2, Htuple *NFinalPose1,
Htuple *NFinalPose2, Htuple *RelPose, Htuple *Errors )

Determine all camera parameters of a binocular stereo system.


In general, binocular calibration means the exact determination of the parameters that model the 3D reconstruction
of a 3D point from the corresponding images of this point in a binocular stereo system. This reconstruction
is specified by the internal parameters CamParam1 of camera 1 and CamParam2 of camera 2 describing the
underlying projective camera model, and the external parameters RelPose describing the relative pose of camera
system 2 in relation to camera system 1.
Thus, known 3D model points (with coordinates NX, NY, NZ) are projected in the image planes of both cameras
(camera 1 and camera 2) and the sum of the squared distances between these projections and the corresponding
measured image points (with coordinates NRow1, NCol1 for camera 1 and NRow2, NCol2 for camera 2) is mini-
mized. It should be noted that all these model points must be visible in both images. The projection uses the initial
values StartCamParam1 and StartCamParam2 of the internal parameters of camera 1 and camera 2 which
can be obtained from the camera data sheets. In addition, the initial guesses NStartPose1 and NStartPose2
of the poses of the 3D calibration model in relation to the camera coordinate systems (CCS) of camera 1 and cam-
era 2 are needed as well. These 3D transformation poses can be determined by the find_marks_and_pose
operator. Since this calibration algorithm simultaneously handles correspondences between measured image and
known model points from different image pairs, poses (NStartPose1,NStartPose2), and measured points
(NRow1,NCol1,NRow2, NCol2) must be passed concatenated in a corresponding order.
The input parameter EstimateParams is used to select the parameters to be estimated. Usually this param-
eter is set to ’all’, i.e., all external camera parameters (translation and rotation) and all internal camera param-
eters are determined. Otherwise, EstimateParams contains a tuple of strings indicating the combination
of parameters to estimate. For instance, if the interior camera parameters already have been determined (e.g.,
by previous calls to camera_calibration) it is often desired to determine only the relative pose of the
two cameras to each other (RelPose). In this case, EstimateParams can be set to ’pose_rel’. This has
the same effect as EstimateParams = [’pose1’,’pose2’]. The internal parameters can be subsumed by the
parameter values ’cam_param1’ and ’cam_param2’, as well. In addition, parameters can be excluded from
estimation by using the prefix ~. For example, the values [’pose1’, ’~transx1’] have the same effect as
[’alpha1’,’beta1’,’gamma1’,’transy1’,’transz1’]. The values [’all’,’~focus1’], for instance, determine all internal and
external parameters except the focus of camera 1. The prefix ~ can be used with all parameter values except ’all’.
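For the case described above, in which the internal parameters are already known and only the relative pose is of interest, a call could look as follows (HDevelop syntax; the coordinate and pose tuples are placeholders corresponding to the example further down):

* Internal camera parameters are known; estimate only the relative pose.
binocular_calibration (X, Y, Z, Rows1, Cols1, Rows2, Cols2, CamParam1,
                       CamParam2, StartPoses1, StartPoses2, 'pose_rel',
                       CamParam1, CamParam2, NFinalPose1, NFinalPose2,
                       RelPose, Errors)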
The underlying camera model is explained in the description of the camera_calibration operator. It is
specified by the parameters [focus1, kappa1, sx1, sy1, cx1, cy1, image_width1, image_height1] of camera 1
returned in CamParam1 and [focus2, kappa2, sx2, sy2, cx2, cy2, image_width2, image_height2] of camera 2
returned in CamParam2 (with focus > 0). The external parameters [alpha_rel, beta_rel, gamma_rel, transx_rel,
transy_rel, transz_rel] are returned in RelPose and specify the 3D transformation of points of CCS 2 into CCS
1. Note that according to the description of poses at create_pose one parameter is appended to the pose tuple
at the last position to define the representation type of this pose.
According to camera_calibration the 3D transformation poses of the calibration model to the respective
CCS are returned in NFinalPose1 and NFinalPose2. These transformations are related to RelPose accord-
ing to the following equation (neglecting differences due to the balancing effects of the multi image calibration):
HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1,
where HomMat3D_* denotes the homogeneous transformation matrix of the respective pose and INV() inverts a
homogeneous matrix.
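This relation can be checked, for instance, with the following sketch (HDevelop syntax; it assumes that each pose consists of seven elements, so the first pose of NFinalPose1 is selected with [0:6]):

* Predict the pose of the calibration model in CCS 2 from NFinalPose1 and RelPose.
pose_to_hom_mat3d (RelPose, HomMat3D_RelPose)
pose_to_hom_mat3d (NFinalPose1[0:6], HomMat3D_NFinalPose1)
hom_mat3d_invert (HomMat3D_RelPose, HomMat3D_RelPoseInv)
hom_mat3d_compose (HomMat3D_RelPoseInv, HomMat3D_NFinalPose1, HomMat3D_NFinalPose2)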
The computed average errors returned in Errors give an impression of the accuracy of the calibration. Using
the determined camera parameters, they denote the average Euclidean distance of the projections of the mark
centers of the model to their image.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all X-coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all Y-coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all Z-coordinates of the calibration marks (in meters).
. NRow1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NCol1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NRow2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).


. NCol2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double


Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
. StartCamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Initial values for the internal projective parameters of the projective camera 1.
. StartCamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Initial values for the internal projective parameters of the projective camera 2.
. NStartPose1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
. NStartPose2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong
Camera parameters to be estimated.
Default Value : "all"
List of values : EstimateParams ∈ {"all", "pose_rel", "pose1", "pose2", "cam_param1", "cam_param2",
"alpha1", "beta1", "gamma1", "transx1", "transy1", "transz1", "alpha2", "beta2", "gamma2", "transx2",
"transy2", "transz2", "focus1", "kappa1", "cx1", "cy1", "sx1", "sy1", "focus2", "kappa2", "cx2", "cy2", "sx2",
"sy2"}
. CamParam1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Internal Parameters of the projective camera 1.
. CamParam2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Internal parameters of the projective camera 2.
. NFinalPose1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Ordered tuple with all poses of the calibration model in relation to camera 1.
. NFinalPose2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Ordered tuple with all poses of the calibration model in relation to camera 2.
. RelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Pose of camera 2 in relation to camera 1.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Average error distances in pixels.
Example (Syntax: HDevelop)

// open image source


close_all_framegrabbers ()
open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1, ’default’, -1,
’default’, ’images_l.seq’, ’default’, 0, -1, FGHandle1)
open_framegrabber (’File’, 1, 1, 0, 0, 0, 0, ’default’, -1, ’default’, -1,
’default’, ’images_r.seq’, ’default’, 1, -1, FGHandle2)

// initialize the start parameters


create_caltab (0.03, ’caltab_30.descr’, ’caltab_30.ps’)
caltab_points (’caltab_30.descr’, X, Y, Z)
StartCamParam1 := [0.0125, 0, 7.4e-6, 7.4e-6,Width/2.0,Height/2.0,Width,Height]
StartCamParam2 := StartCamParam1
Rows1 := []
Cols1 := []
StartPoses1 := []
Rows2 := []
Cols2 := []
StartPoses2 := []

// find calibration marks and startposes


for i := 0 to 11 by 1
grab_image_async (Image1, FGHandle1, -1)
grab_image_async (Image2, FGHandle2, -1)
find_caltab (Image1, Caltab1, ’caltab_30.descr’, 3, 120, 5)
find_caltab (Image2, Caltab2, ’caltab_30.descr’, 3, 120, 5)
find_marks_and_pose (Image1, Caltab1, ’caltab_30.descr’, StartCamParam1,

128, 10, 20, 0.7, 5, 100, RCoord1, CCoord1,
StartPose1)
Rows1 := [Rows1,RCoord1]
Cols1 := [Cols1,CCoord1]
StartPoses1 := [StartPoses1,StartPose1]
find_marks_and_pose (Image2, Caltab2, ’caltab_30.descr’, StartCamParam2,
128, 10, 20, 0.7, 5, 100, RCoord2, CCoord2,
StartPose2)
Rows2 := [Rows2,RCoord2]
Cols2 := [Cols2,CCoord2]
StartPoses2 := [StartPoses2,StartPose2]
endfor

// calibrate the stereo rig


binocular_calibration (X, Y, Z, Rows1, Cols1, Rows2, Cols2, StartCamParam1,
StartCamParam2, StartPoses1, StartPoses2, ’all’,
CamParam1, CamParam2, NFinalPose1, NFinalPose2,
RelPose, Errors)
// archive the results
write_cam_par (CamParam1, ’cam_left-125.dat’)
write_cam_par (CamParam2, ’cam_right-125.dat’)
write_pose (RelPose, ’rel_pose.dat’)

// ... rectify the stereo images


gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose,
’geometric’, ’bilinear’, CamParamRect1, CamParamRect2, Cam1PoseRect1,
Cam2PoseRect2, RelPoseRect)
map_image (Image1, Map1, ImageMapped1)
map_image (Image2, Map2, ImageMapped2)

Result
binocular_calibration returns H_MSG_TRUE if all parameter values are correct and the desired param-
eters have been determined by the minimization algorithm. If necessary, an exception handling is raised.
Parallelization Information
binocular_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, write_cam_par, pose_to_hom_mat3d, disp_caltab,
gen_binocular_rectification_map
See also
find_caltab, sim_caltab, read_cam_par, create_pose, convert_pose_type,
read_pose, hom_mat3d_to_pose, create_caltab, binocular_disparity,
binocular_distance
Module
3D Metrology


binocular_disparity ( const Hobject Image1, const Hobject Image2,
                      Hobject *Disparity, Hobject *Score, const char *Method,
                      Hlong MaskWidth, Hlong MaskHeight, double TextureThresh,
                      Hlong MinDisparity, Hlong MaxDisparity, Hlong NumLevels,
                      double ScoreThresh, const char *Filter, const char *SubDisparity )

T_binocular_disparity ( const Hobject Image1, const Hobject Image2,
                        Hobject *Disparity, Hobject *Score, const Htuple Method,
                        const Htuple MaskWidth, const Htuple MaskHeight,
                        const Htuple TextureThresh, const Htuple MinDisparity,
                        const Htuple MaxDisparity, const Htuple NumLevels,
                        const Htuple ScoreThresh, const Htuple Filter,
                        const Htuple SubDisparity )

Compute the disparities of a rectified image pair.


binocular_disparity computes pixel-wise correspondences between two epipolar images using correlation
techniques. In contrast to binocular_distance, the results are not transformed into distance values.
The algorithm requires a reference image Image1 and a search image Image2 which must be rectified,
i.e., corresponding epipolar lines are parallel and lie on identical image rows ( r1 = r2 ). In case this
assumption is violated the images can be rectified by using the operators binocular_calibration,
gen_binocular_rectification_map, and map_image. Hence, given a pixel in the reference image
Image1 the homologous pixel in Image2 is selected by searching along the corresponding row in Image2 and
matching a local neighborhood within a rectangular window of size MaskWidth and MaskHeight. The pixel
correspondences are returned in the single-channel Disparity image d(r1 , c1 ) which specifies for each pixel
(r1,c1) of the reference image Image1 a suitable matching pixel (r2,c2) of Image2 according to the equation
c2 = c1 + d(r1 , c1 ). A quality measure for each disparity value is returned in Score, containing the best result of
the matching function S of a reference pixel. For the matching, the gray values of the original unprocessed images
are used.
The matching function is selected by the parameter Method, which offers three different kinds of correlation:

• ’sad’: Summed Absolute Differences:
  S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} | g_1(r',c') - g_2(r',c'+d) | ,  with 0 \le S(r,c,d) \le 255.

• ’ssd’: Summed Squared Differences:
  S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - g_2(r',c'+d) )^2 ,  with 0 \le S(r,c,d) \le 65025.

• ’ncc’: Normalized Cross Correlation:
  S(r,c,d) = \frac{ \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - \bar{g}_1(r,c) ) ( g_2(r',c'+d) - \bar{g}_2(r,c+d) ) }
                  { \sqrt{ \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - \bar{g}_1(r,c) )^2 \, \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_2(r',c'+d) - \bar{g}_2(r,c+d) )^2 } } ,  with -1.0 \le S(r,c,d) \le 1.0.

with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m+1)(2n+1): size of the correlation window,
\bar{g}(r,c) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} g(r',c'): mean value within the correlation window of width 2m+1 and height 2n+1.

Note that for the methods ’sad’ and ’ssd’ the quality of the correlation decreases with rising S (the best quality
value is 0), whereas for ’ncc’ it increases (the best quality value is 1.0).
The size of the correlation window, referenced by 2m + 1 and 2n + 1, has to be odd numbered and is passed in
MaskWidth and MaskHeight. The search space is confined by the minimum and maximum disparity value
MinDisparity and MaxDisparity. Due to pixel values not defined beyond the image border the resulting
domain of Disparity and Score is not set along the image border within a margin of height (MaskHeight-
1)/2 at the top and bottom border and of width (MaskWidth-1)/2 at the left and right border. For the same reason,
the maximum disparity range is reduced at the left and right image border.


Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in TextureThresh. This threshold is applied
on both input images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and
defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting
Filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a
concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_disparity is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
TextureThresh and ScoreThresh are applied on every level and the returned domain of the Disparity
and Score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the disparity range is specified. As a drawback, coarse pyramid levels might loose
important texture information which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter SubDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
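Typical post-processing can be sketched as follows (HDevelop syntax; the score threshold of 20 is a placeholder and refers to the method ’sad’, for which small values indicate good matches):

* Keep only disparities whose matching score is reliable.
threshold (Score, GoodRegion, 0, 20)
reduce_domain (Disparity, GoodRegion, DisparityReduced)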
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte


Epipolar image of camera 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte
Epipolar image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject * : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject * : real
Evaluation of the disparity values.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Matching function.
Default Value : "ncc"
List of values : Method ∈ {"sad", "ssd", "ncc"}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Width of the correlation window.
Default Value : 11
Suggested values : MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction : (3 ≤ MaskWidth) ∧ odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Height of the correlation window.
Default Value : 11
Suggested values : MaskHeight ∈ {5, 7, 9, 11, 21}
Restriction : (3 ≤ MaskHeight) ∧ odd(MaskHeight)
. TextureThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double / Hlong
Variance threshold of textured image regions.
Default Value : 0.0
Suggested values : TextureThresh ∈ {0.0, 10.0, 30.0}
Restriction : 0.0 ≤ TextureThresh
. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Minimum of the expected disparities.
Default Value : -30.0
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum of the expected disparities.
Default Value : 30.0
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Number of pyramid levels.
Default Value : 1
Suggested values : NumLevels ∈ {1, 2, 3, 4}
Restriction : 1 ≤ NumLevels


. ScoreThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double / Hlong


Threshold of the correlation function.
Default Value : 0.5
Suggested values : ScoreThresh ∈ {-1.0, 0.0, 0.3, 0.5, 0.7}
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Downstream filters.
Default Value : "none"
List of values : Filter ∈ {"none", "left_right_check"}
. SubDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Subpixel interpolation of disparities.
Default Value : "none"
List of values : SubDisparity ∈ {"none", "interpolation"}
Example (Syntax: HDevelop)

// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpos.dat’, RelPose)

// compute the mapping for epipolar images


gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose,
’geometric’, ’bilinear’, CamParamRect1,CamParamRect2, Cam1PoseRect1,
Cam2PoseRect2,RelPoseRect)

// compute the disparities in online images


while 1
    grab_image_async (Image1, FGHandle1, -1)
    map_image (Image1, Map1, ImageMapped1)
    grab_image_async (Image2, FGHandle2, -1)
    map_image (Image2, Map2, ImageMapped2)
    binocular_disparity (ImageMapped1, ImageMapped2, Disparity, Score, 'sad',
                         11, 11, 20, -40, 20, 2, 25, 'left_right_check',
                         'interpolation')
endwhile

Result
binocular_disparity returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
binocular_disparity is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance
Alternatives
binocular_distance
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology


T_binocular_distance ( const Hobject Image1, const Hobject Image2,
                       Hobject *Distance, Hobject *Score, const Htuple CamParamRect1,
                       const Htuple CamParamRect2, const Htuple RelPoseRect,
                       const Htuple Method, const Htuple MaskWidth, const Htuple MaskHeight,
                       const Htuple TextureThresh, const Htuple MinDisparity,
                       const Htuple MaxDisparity, const Htuple NumLevels,
                       const Htuple ScoreThresh, const Htuple Filter,
                       const Htuple SubDistance )

Compute the distance values for a rectified stereo image pair.


binocular_distance computes pixel-wise correspondences between two images of a rectified stereo rig
using correlation techniques. In contrast to binocular_disparity, this operator transforms the pixel
correspondences into distances of the corresponding 3D world points to the stereo camera system.
The algorithm requires a reference image Image1 and a search image Image2 which must be rectified,
i.e., corresponding epipolar lines are parallel and lie on identical image rows ( r1 = r2 ). In case this
assumption is violated the images can be rectified by using the operators binocular_calibration,
gen_binocular_rectification_map and map_image. Hence, given a pixel in the reference image
Image1 the homologous pixel in Image2 is selected by searching along the corresponding row in Image2 and
matching a local neighborhood within a rectangular window of size MaskWidth and MaskHeight. For each
defined reference pixel the pixel correspondences are transformed into distances of the world points defined by the
intersection of the lines of sight of a corresponding pixel pair to the z = 0 plane of the rectified stereo system.
These distances are returned in the single channel image Distance. For this transformation the rectified internal
camera parameters CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera
2, and the external parameters RelPoseRect have to be defined. The latter characterizes the relative pose of
both cameras to each other and specifies a point transformation from the rectified camera system 2 to the recti-
fied camera system 1. These parameters can be obtained from the operators binocular_calibration and
gen_binocular_rectification_map. After all, a quality measure for each distance value is returned in
Score, containing the best result of the matching function S of a reference pixel. For the matching, the gray
values of the original unprocessed images are used.
The matching function is selected by the parameter Method, which offers three different kinds of correlation:

• ’sad’: Summed Absolute Differences:
  S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} | g_1(r',c') - g_2(r',c'+d) | ,  with 0 \le S(r,c,d) \le 255.

• ’ssd’: Summed Squared Differences:
  S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - g_2(r',c'+d) )^2 ,  with 0 \le S(r,c,d) \le 65025.

• ’ncc’: Normalized Cross Correlation:
  S(r,c,d) = \frac{ \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - \bar{g}_1(r,c) ) ( g_2(r',c'+d) - \bar{g}_2(r,c+d) ) }
                  { \sqrt{ \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_1(r',c') - \bar{g}_1(r,c) )^2 \, \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} ( g_2(r',c'+d) - \bar{g}_2(r,c+d) )^2 } } ,  with -1.0 \le S(r,c,d) \le 1.0.

with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m+1)(2n+1): size of the correlation window,
\bar{g}(r,c) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} g(r',c'): mean value within the correlation window of width 2m+1 and height 2n+1.

Note that for the methods ’sad’ and ’ssd’ the quality of the correlation decreases with rising S (the best quality
value is 0), whereas for ’ncc’ it increases (the best quality value is 1.0).
The size of the correlation window has to be odd numbered and is passed in MaskWidth and MaskHeight. The
search space is confined by the minimum and maximum disparity value MinDisparity and MaxDisparity.
Due to pixel values not defined beyond the image border the resulting domain of Distance and Score is
generally not set along the image border within a margin of height MaskHeight/2 at the top and bottom border
and of width MaskWidth/2 at the left and right border. For the same reason, the maximum disparity range is
reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in TextureThresh. This threshold is applied on both input
images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting Filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_distance is determined by
NumLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. TextureThresh and
ScoreThresh are applied on every level and the returned domain of the Distance and Score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are more
advantageous the more the distance image can be segmented into regions of homogeneous distance values and the
larger the specified disparity range is. As a drawback, coarse pyramid levels might lose important texture
information, which can result in deficient distance values.
Finally, the value ’interpolation’ for parameter SubDistance increases the refinement and accuracy of the dis-
tance values. It is switched off by setting the parameter to ’none’.
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte


Epipolar image of camera 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte
Epipolar image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject * : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject * : real
Evaluation of a distance value.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Matching function.
Default Value : "ncc"
List of values : Method ∈ {"sad", "ssd", "ncc"}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the correlation window.
Default Value : 11
Suggested values : MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction : (3 ≤ MaskWidth) ∧ odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the correlation window.
Default Value : 11
Suggested values : MaskHeight ∈ {5, 7, 9, 11, 21}
Restriction : (3 ≤ MaskHeight) ∧ odd(MaskHeight)
. TextureThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double / Hlong
Variance threshold of textured image regions.
Default Value : 0.0
Suggested values : TextureThresh ∈ {0.0, 10.0, 30.0}
Restriction : 0.0 ≤ TextureThresh


. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double / Hlong


Minimum of the expected disparities.
Default Value : 0.0
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double / Hlong
Maximum of the expected disparities.
Default Value : 30.0
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of pyramid levels.
Default Value : 1
List of values : NumLevels ∈ {1, 2, 3, 4}
Restriction : 1 ≤ NumLevels
. ScoreThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double / Hlong
Threshold of the correlation function.
Default Value : 0.0
List of values : ScoreThresh ∈ {-1.0, 0.0, 0.3, 0.5, 0.7}
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Downstream filters.
Default Value : "none"
List of values : Filter ∈ {"none", "left_right_check"}
. SubDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Distance interpolation.
Default Value : "none"
List of values : SubDistance ∈ {"none", "interpolation"}
Example (Syntax: HDevelop)

// ...
// read the internal and external stereo parameters
read_cam_par (’cam_left.dat’, CamParam1)
read_cam_par (’cam_right.dat’, CamParam2)
read_pose (’relpose.dat’, RelPose)

// compute the mapping for epipolar images


gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose,
’geometric’, ’bilinear’, CamParamRect1,
CamParamRect2, Cam1PoseRect1, Cam2PoseRect2,
RelPoseRect)

// compute the distance values on online images


while 1
    grab_image_async (Image1, FGHandle1, -1)
    map_image (Image1, Map1, ImageMapped1)
    grab_image_async (Image2, FGHandle2, -1)
    map_image (Image2, Map2, ImageMapped2)
    binocular_distance (ImageMapped1, ImageMapped2, Distance, Score,
                        CamParamRect1, CamParamRect2, RelPoseRect, 'sad',
                        11, 11, 20, -40, 20, 2, 25, 'left_right_check',
                        'interpolation')
endwhile

Result
binocular_distance returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
binocular_distance is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image


Possible Successors
threshold
Alternatives
binocular_disparity
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
distance_to_disparity, disparity_to_distance
Module
3D Metrology

T_disparity_to_distance ( const Htuple CamParamRect1,
                          const Htuple CamParamRect2, const Htuple RelPoseRect,
                          const Htuple Disparity, Htuple *Distance )

Transform a disparity value into a distance value in a rectified binocular stereo system.
disparity_to_distance transforms a disparity value into a distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera 2, and the external
parameters RelPoseRect. The latter specifies the relative pose of both cameras to each other by defining a point
transformation from rectified camera system 2 to rectified camera system 1. These parameters can be obtained from
the operators binocular_calibration and gen_binocular_rectification_map. The disparity
value Disparity is defined by the column difference of the image coordinates of two corresponding points
on an epipolar line according to the equation d = c2 − c1 (see also binocular_disparity). This value
characterizes a set of 3D object points that all have the same distance to a plane parallel to the rectified image plane
of the stereo system. The distance to the plane z = 0, which is parallel to the rectified image plane and contains
the optical centers of both cameras, is returned in Distance.
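A minimal sketch (HDevelop syntax; the rectified camera parameters are assumed to come from gen_binocular_rectification_map, and the disparity value of 15.5 pixels is a placeholder):

* Convert a single disparity value into a distance to the plane z = 0.
disparity_to_distance (CamParamRect1, CamParamRect2, RelPoseRect, 15.5, Distance)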
Parameter
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Disparity between the images of the world point.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Distance of a world point to the rectified camera system.
Result
disparity_to_distance returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
disparity_to_distance is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map, map_image,
binocular_disparity
Alternatives
binocular_distance
See also
distance_to_disparity, disparity_to_point_3d
Module
3D Metrology


T_disparity_to_point_3d ( const Htuple CamParamRect1,
                          const Htuple CamParamRect2, const Htuple RelPoseRect,
                          const Htuple Row1, const Htuple Col1, const Htuple Disparity,
                          Htuple *X, Htuple *Y, Htuple *Z )

Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (Row1,Col1), and its disparity in
a rectified binocular stereo system, disparity_to_point_3d computes the corresponding three dimensional
object point. The disparity value Disparity defines the column difference of the image coordinates
of two corresponding features on an epipolar line according to the equation d = c2 − c1 . The rectified binocular
camera system is specified by its internal camera parameters CamParamRect1 of the projective camera 1 and
CamParamRect2 of the projective camera 2, and the external parameters RelPoseRect defining the pose of
the rectified camera 2 in relation to the rectified camera 1. These camera parameters can be obtained from the
operators binocular_calibration and gen_binocular_rectification_map. The 3D point is
returned in Cartesian coordinates (X,Y,Z) of the rectified camera system 1.
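A minimal sketch (HDevelop syntax; the image coordinates are placeholders, and the rectified camera parameters and the Disparity image are assumed to be available, e.g., from binocular_disparity):

* Reconstruct the 3D point corresponding to a pixel of the rectified image 1.
Row1 := 240
Col1 := 320
get_grayval (Disparity, Row1, Col1, DisparityVal)
disparity_to_point_3d (CamParamRect1, CamParamRect2, RelPoseRect, Row1, Col1,
                       DisparityVal, X, Y, Z)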
Parameter

. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong


Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Pose of the rectified camera 2 in relation to the rectified camera 1.
Number of elements : 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Row coordinate of a point in the rectified image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Column coordinate of a point in the rectified image 1.
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Disparity of the images of the world point.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Z coordinate of the 3D point.
Result
disparity_to_point_3d returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
disparity_to_point_3d is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map
Possible Successors
binocular_disparity, binocular_distance
See also
intersect_lines_of_sight
Module
3D Metrology


T_distance_to_disparity ( const Htuple CamParamRect1,
                          const Htuple CamParamRect2, const Htuple RelPoseRect,
                          const Htuple Distance, Htuple *Disparity )

Transform a distance value into a disparity value in a rectified stereo system.


distance_to_disparity transforms a distance of a 3D point to the binocular stereo system into a dis-
parity value. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera 2 and the external
parameters RelPoseRect. The latter specifies the relative pose of both camera systems to each other by defining a
point transformation from the rectified camera system 2 to the rectified camera system 1. These parameters can
be obtained from the operators binocular_calibration and gen_binocular_rectification_map.
The distance value is passed in Distance and the resulting disparity value Disparity is defined by the column
difference of the image coordinates of two corresponding features on an epipolar line according to the equation
d = c2 − c1 .
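One possible use is to derive the disparity search range of binocular_disparity from the expected object distances, as sketched below (HDevelop syntax; the distance bounds of 0.3 m and 0.6 m are placeholders):

* Disparities corresponding to the nearest and farthest expected distances.
distance_to_disparity (CamParamRect1, CamParamRect2, RelPoseRect, 0.3, DispNear)
distance_to_disparity (CamParamRect1, CamParamRect2, RelPoseRect, 0.6, DispFar)
* Use them to bound the disparity search range.
MinDisparity := min([DispNear,DispFar])
MaxDisparity := max([DispNear,DispFar])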
Parameter
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Distance of a world point to camera 1.
Restriction : 0 < Distance
. Disparity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double * / Hlong *
Disparity between the images of the point.
Result
distance_to_disparity returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
distance_to_disparity is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map
Possible Successors
binocular_disparity
Module
3D Metrology

T_essential_to_fundamental_matrix ( const Htuple EMatrix,
                                    const Htuple CovEMat, const Htuple CamMat1, const Htuple CamMat2,
                                    Htuple *FMatrix, Htuple *CovFMat )

Compute the fundamental matrix from an essential matrix.


The fundamental matrix is the entity describing the epipolar constraint in image coordinates (C,R), and the essential
matrix is its counterpart for 3D direction vectors (X,Y,1):

    \begin{pmatrix} C_2 \\ R_2 \\ 1 \end{pmatrix}^T \cdot FMatrix \cdot \begin{pmatrix} C_1 \\ R_1 \\ 1 \end{pmatrix} = 0
    \quad \text{and} \quad
    \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 .

Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:

    \begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} .

Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices CamMat1, CamMat2 by the following formula:

    FMatrix = CamMat2^{-T} \cdot EMatrix \cdot CamMat1^{-1} .

The transformation of the essential matrix to the fundamental matrix goes along with the propagation of the co-
variance matrices CovEMat to CovFMat. If CovEMat is empty CovFMat will be empty too.
The conversion operator essential_to_fundamental_matrix is used especially for a subsequent visu-
alization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
Parameter
. EMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Essential matrix.
. CovEMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
9 × 9 covariance matrix of the essential matrix.
Default Value : []
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the 1. camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the 2. camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
essential_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_essential_matrix
Alternatives
rel_pose_to_fundamental_matrix
Module
3D Metrology

T_gen_binocular_proj_rectification ( Hobject *Map1, Hobject *Map2,
                                     const Htuple FMatrix, const Htuple CovFMat, const Htuple Width1,
                                     const Htuple Height1, const Htuple Width2, const Htuple Height2,
                                     const Htuple SubSampling, const Htuple Mapping, Htuple *CovFMatRect,
                                     Htuple *H1, Htuple *H2 )

Compute the projective rectification of weakly calibrated binocular stereo images.


A binocular stereo setup is called weakly calibrated if the fundamental matrix, which describes the projective
relation between the two images, is known. Rectification is the process of finding a suitable set of transformations
that transform both images such that all corresponding epipolar lines become collinear and parallel to the horizontal
axes. The rectified images can be thought of as acquired by a stereo configuration in which the left and right image
planes are identical and the difference between both image centers is a horizontal translation. Note that rectification
can only be performed if both of the epipoles are located outside the images.
Typically, the fundamental matrix is calculated beforehand with match_fundamental_matrix_ransac
and FMatrix is the basis for the computation of the two homographies H1 and H2, which describe the rectifi-
cations for the left image and the right image respectively. Since a projective rectification is an underdetermined
problem, additional constraints are defined: the algorithm chooses the set of homographies that minimizes the
projective distortion induced by the homographies in both images. For the computation of this cost function the
dimensions of the images must be provided in Width1, Height1, Width2, Height2. After rectification the
fundamental matrix is always of the canonical form

    \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} .

In the case of a known covariance matrix CovFMat of the fundamental matrix FMatrix, the covariance matrix
CovFMatRect of the above rectified fundamental matrix is calculated. This can help for an improved stereo
matching process because the covariance matrix defines, in terms of probabilities, the image domain in which a
corresponding match is to be found.
Similar to the operator gen_binocular_rectification_map the output images Map1 and Map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter Mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter SubSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required Mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with map_image;
this will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images Map1 and Map2 are single channel
images if Mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
Mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-
transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.

2 3
4 5

The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
Based on the rectified images, the disparity can be computed using binocular_disparity. In contrast to stereo
with fully calibrated cameras, i.e., using the operator gen_binocular_rectification_map and its
successors, metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a
qualitative depth ordering of the scene.
Parameter

. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : int4 / uint2


Image coding the rectification of the 1. image.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : int4 / uint2
Image coding the rectification of the 2. image.
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
9 × 9 covariance matrix of the fundamental matrix.
Default Value : []
. Width1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the 1. image.
Default Value : 512
List of values : Width1 ∈ {128, 256, 512, 1024}
Restriction : Width1 > 0


. Height1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Height of the 1. image.
Default Value : 512
List of values : Height1 ∈ {128, 256, 512, 1024}
Restriction : Height1 > 0
. Width2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the 2. image.
Default Value : 512
List of values : Width2 ∈ {128, 256, 512, 1024}
Restriction : Width2 > 0
. Height2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the 2. image.
Default Value : 512
List of values : Height2 ∈ {128, 256, 512, 1024}
Restriction : Height2 > 0
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Subsampling factor.
Default Value : 1
List of values : SubSampling ∈ {1, 2, 3, 1.5}
. Mapping (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of mapping.
Default Value : "no_map"
List of values : Mapping ∈ {"no_map", "nn_map", "bilinear_map"}
. CovFMatRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double *
9 × 9 covariance matrix of the rectified fundamental matrix.
. H1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Projective transformation of the 1. image.
. H2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Projective transformation of the 2. image.
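As an illustration (not part of the reference entry above), the following C fragment sketches a possible call for this operator. The fundamental matrix and its covariance are assumed to come from a preceding call to T_match_fundamental_matrix_ransac, the image size of 768 × 576 is a placeholder, the parameter order follows the Parameter list above, and the tuple helpers (create_tuple, set_i, set_s) as well as the map_image call are used as described elsewhere in this manual. Error handling and tuple cleanup are omitted.

/* Sketch only: FMatrix, CovFMat, Image1, Image2 are assumed to exist already. */
Hobject Map1, Map2, ImageRect1, ImageRect2;
Htuple  Width1, Height1, Width2, Height2, SubSampling, Mapping;
Htuple  CovFMatRect, H1, H2;

create_tuple(&Width1, 1);      set_i(Width1, 768, 0);   /* placeholder sizes */
create_tuple(&Height1, 1);     set_i(Height1, 576, 0);
create_tuple(&Width2, 1);      set_i(Width2, 768, 0);
create_tuple(&Height2, 1);     set_i(Height2, 576, 0);
create_tuple(&SubSampling, 1); set_i(SubSampling, 1, 0);
create_tuple(&Mapping, 1);     set_s(Mapping, "bilinear_map", 0);

T_gen_binocular_proj_rectification(&Map1, &Map2, FMatrix, CovFMat,
                                   Width1, Height1, Width2, Height2,
                                   SubSampling, Mapping,
                                   &CovFMatRect, &H1, &H2);

/* rectify both images with the generated maps */
map_image(Image1, Map1, &ImageRect1);
map_image(Image2, Map2, &ImageRect2);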
Parallelization Information
gen_binocular_proj_rectification is reentrant and processed without parallelization.
Possible Predecessors
match_fundamental_matrix_ransac, vector_to_fundamental_matrix
Possible Successors
map_image, projective_trans_image, binocular_disparity
Alternatives
gen_binocular_rectification_map
References
J. Gluckman and S.K. Nayar: “Rectifying transformations that minimize resampling effects”; IEEE Conference
on Computer Vision and Pattern Recognition (CVPR) 2001, vol I, pages 111-117.
Module
3D Metrology

T_gen_binocular_rectification_map ( Hobject *Map1, Hobject *Map2,


const Htuple CamParam1, const Htuple CamParam2, const Htuple RelPose,
const Htuple SubSampling, const Htuple Method,
const Htuple Interpolation, Htuple *CamParamRect1,
Htuple *CamParamRect2, Htuple *CamPoseRect1, Htuple *CamPoseRect2,
Htuple *RelPoseRect )

Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that pairs of
conjugate epipolar lines become collinear and parallel to the horizontal image axes. The rectified epipolar images


can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras. The camera centers of
this virtual rig are maintained whereas the image planes coincide, which means that the focal lengths are set equal,
and the optical axes parallel.
To achieve the transformation map for epipolar images gen_binocular_rectification_map requires the
internal camera parameters CamParam1 of the projective camera 1 and CamParam2 of the projective camera 2,
as well as the relative pose RelPose defining a point transformation from camera 2 to camera 1. These parameters
can be obtained, e.g., from the operator binocular_calibration.
The projection onto a common plane has many degrees of freedom which are implicitly restricted by selecting a
certain method in Method (currently only one method available):

• ’geometric’ specifies the orientation of the common image plane by the cross product of the base line and the
line of intersection of the original image planes. The new focal length is determined in such a way that the
old principal points have the same distance to the new common image plane.

Similar to gen_image_to_world_plane_map the parameter Interpolation specifies whether bilinear


interpolation (’bilinear’) should be applied between the pixels in the input image or the gray value of the nearest
neighboring pixel should be taken (’none’). The size and resolution of the maps and of the transformed images can
be adjusted by the SubSampling parameter which applies a sub-sampling factor to the original images.
The mapping functions for the images of camera 1 and camera 2 are returned in the images Map1 and Map2.
If Interpolation is set to ’none’, both maps consist of one single-channel image which contains the linear
coordinate of the pixel of the respective input image that is the nearest neighbor of the transformed coordinate.
In case of bilinear interpolation, each map contains one five-channel image. The first channel contains for each
pixel of the respective map the linear coordinate of the pixel in the respective input image that is in the upper left
position with respect to the transformed coordinate. The remaining four channels of each map contain the weights
of the four neighboring pixels of the transformed coordinates which are used for the bilinear interpolation. The
mapping of the channel numbers to the neighboring pixels is as follows:

2 3
4 5

In addition, gen_binocular_rectification_map returns the modified internal and external camera pa-
rameters of the rectified stereo rig. CamParamRect1 and CamParamRect2 contain the modified internal pa-
rameters of camera 1 and camera 2, respectively. The rotation of the rectified camera in relation to the original
camera is specified by CamPoseRect1 and CamPoseRect2, respectively. Finally, RelPoseRect returns
the modified relative pose of the rectified camera system 2 in relation to the rectified camera system 1 defining
a translation in x only. Generally, the transformations are defined in a way that the rectified camera 1 is left of
the rectified camera 2. This means that the optical center of camera 2 has a positive x coordinate of the rectified
coordinate system of camera 1.
Parameter

. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2


Image containing the mapping data of camera 1.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2
Image containing the mapping data of camera 2.
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Internal parameters of the projective camera 1.
Number of elements : 8
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Internal parameters of the projective camera 2.
Number of elements : 8
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from camera 2 to camera 1.
Number of elements : 7
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Factor of sub sampling.
Default Value : 1.0
Suggested values : SubSampling ∈ {0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0}


. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *


Type of rectification.
Default Value : "geometric"
List of values : Method ∈ {"geometric"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
. CamParamRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Rectified internal parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Rectified internal parameters of the projective camera 2.
Number of elements : 8
. CamPoseRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Point transformation from the rectified camera 1 to the original camera 1.
Number of elements : 7
. CamPoseRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Point transformation from the rectified camera 2 to the original camera 2.
Number of elements : 7
. RelPoseRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements : 7
Example (Syntax: HDevelop)

// ...
// read the internal and external stereo parameters
read_cam_par ('cam_left.dat', CamParam1)
read_cam_par ('cam_right.dat', CamParam2)
read_pose ('relpos.dat', RelPose)

// generate the rectification maps for the stereo images
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, 1,
                                 'geometric', 'bilinear', CamParamRect1,
                                 CamParamRect2, CamPoseRect1, CamPoseRect2,
                                 RelPoseRect)

while 1
  grab_image_async (Image1, FGHandle1, -1)
  map_image (Image1, Map1, ImageMapped1)

  grab_image_async (Image2, FGHandle2, -1)
  map_image (Image2, Map2, ImageMapped2)

  binocular_disparity (ImageMapped1, ImageMapped2, Disparity, Score, 'sad',
                       11, 11, 20, -40, 20, 2, 25, 'left_right_check',
                       'interpolation')
endwhile

Result
gen_binocular_rectification_map returns H_MSG_TRUE if all parameter values are correct. If nec-
essary, an exception handling is raised.
Parallelization Information
gen_binocular_rectification_map is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration


Possible Successors
map_image
Alternatives
gen_image_to_world_plane_map
See also
map_image, gen_image_to_world_plane_map, contour_to_world_plane_xld,
image_points_to_world_plane
Module
3D Metrology

T_intersect_lines_of_sight ( const Htuple CamParam1,


const Htuple CamParam2, const Htuple RelPose, const Htuple Row1,
const Htuple Col1, const Htuple Row2, const Htuple Col2, Htuple *X,
Htuple *Y, Htuple *Z, Htuple *Dist )

Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (Row1,Col1) of camera 1 and
(Row2,Col2) of camera 2, intersect_lines_of_sight computes the 3D point of intersection of these
lines. The binocular camera system is specified by its internal camera parameters CamParam1 of the projective
camera 1 and CamParam2 of the projective camera 2, and the external parameters RelPose defining the pose
of the cameras by a point transformation from camera 2 to camera 1. These camera parameters can be obtained,
e.g., from the operator binocular_calibration, if the coordinates of the image points (Row1,Col1) and
(Row2,Col2) refer to the respective original image coordinate system. In case of rectified image coordinates (
e.g., obtained from epipolar images), the rectified camera parameters must be passed, as they are returned by the
operator gen_binocular_rectification_map. The ’point of intersection’ is defined by the point with
the shortest distance to both lines of sight. This point is returned in Cartesian coordinates (X,Y,Z) of camera system
1 and its distance to the lines of sight is passed in Dist.
Parameter

. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong


Internal parameters of the projective camera 1.
Number of elements : 8
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Internal parameters of the projective camera 2.
Number of elements : 8
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from camera 2 to camera 1.
Number of elements : 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Row coordinate of a point in image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Column coordinate of a point in image 1.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Row coordinate of the corresponding point in image 2.
. Col2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Column coordinate of the corresponding point in image 2.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Z coordinate of the 3D point.
. Dist (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Distance of the 3D point to the lines of sight.
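As a minimal illustration, the following C fragment intersects the lines of sight of a single corresponding point pair. The pixel coordinates are placeholders; CamParam1, CamParam2 and RelPose are assumed to be Htuples obtained from binocular_calibration (or the rectified parameters returned by gen_binocular_rectification_map), and the tuple helpers create_tuple, set_d and get_d are used as described in the introduction of this manual. Cleanup is omitted.

Htuple Row1, Col1, Row2, Col2, X, Y, Z, Dist;
double x, y, z, d;

/* one corresponding point pair (placeholder pixel coordinates) */
create_tuple(&Row1, 1); set_d(Row1, 240.0, 0);
create_tuple(&Col1, 1); set_d(Col1, 310.0, 0);
create_tuple(&Row2, 1); set_d(Row2, 238.5, 0);
create_tuple(&Col2, 1); set_d(Col2, 301.2, 0);

T_intersect_lines_of_sight(CamParam1, CamParam2, RelPose,
                           Row1, Col1, Row2, Col2, &X, &Y, &Z, &Dist);

/* 3D point in the coordinate system of camera 1 and its distance
   to the two lines of sight */
x = get_d(X, 0);
y = get_d(Y, 0);
z = get_d(Z, 0);
d = get_d(Dist, 0);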


Result
intersect_lines_of_sight returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
intersect_lines_of_sight is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration
See also
disparity_to_point_3d
Module
3D Metrology

T_match_essential_matrix_ransac ( const Hobject Image1,


const Hobject Image2, const Htuple Rows1, const Htuple Cols1,
const Htuple Rows2, const Htuple Cols2, const Htuple CamMat1,
const Htuple CamMat2, const Htuple GrayMatchMethod,
const Htuple MaskSize, const Htuple RowMove, const Htuple ColMove,
const Htuple RowTolerance, const Htuple ColTolerance,
const Htuple Rotation, const Htuple MatchThreshold,
const Htuple EstimationMethod, const Htuple DistanceThreshold,
const Htuple RandSeed, Htuple *EMatrix, Htuple *CovEMat,
Htuple *Error, Htuple *Points1, Htuple *Points2 )

Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2 along with known internal camera parameters, specified by the camera matrices CamMat1
and CamMat2, match_essential_matrix_ransac automatically determines the geometry of the stereo
setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is repre-
sented by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator match_essential_matrix_ransac is designed to deal with a linear camera model. The
internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3×3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point, and its (projective) 2D image coordinates (col,row,1) is:
$$ \begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \quad \text{where} \quad CamMat = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} . $$

Note the column/row ordering in the point coordinates which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by $f$, $s_x$ and $s_y$ are scaling factors, $s$ describes a skew factor,
and $(c_x, c_y)$ indicates the principal point. Mainly, these are the elements known from the camera parameters as
used for example in camera_calibration. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. stationary_camera_self_calibration. Multiplied by the inverse of the
camera matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known
camera matrices the epipolar constraint is given by:
$$ \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 . $$
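To make the constraint concrete, the following small C helper (purely illustrative, not part of the HALCON interface) evaluates the left-hand side of this equation for one pair of direction vectors; it assumes that the essential matrix is available as a row-major array of nine doubles.

/* Evaluate x2^T * E * x1 for one correspondence; the result is zero for a
   perfect match. e holds the essential matrix row by row (assumption). */
double epipolar_residual(const double e[9],
                         double x1, double y1, double x2, double y2)
{
  double v1[3] = { x1, y1, 1.0 };   /* direction vector of camera 1 */
  double v2[3] = { x2, y2, 1.0 };   /* direction vector of camera 2 */
  double ev1[3];
  int    r, c;

  for (r = 0; r < 3; r++)           /* ev1 = E * v1 */
  {
    ev1[r] = 0.0;
    for (c = 0; c < 3; c++)
      ev1[r] += e[3 * r + c] * v1[c];
  }
  return v2[0] * ev1[0] + v2[1] * ev1[1] + v2[2] * ev1[2];
}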

The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC


algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-
linear-transformation and gold-standard-algorithm respectively. Note, that in general the found correspondences
differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_essential_matrix_ransac a special configuration of scene points and cameras
exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution
for the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by
the operator. This means that the output parameters EMatrix, CovEMat and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2


Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.


. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong


Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 6) ∨ (length(Rows1) ≥ 3)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 6) ∨ (length(Rows2) ≥ 3)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the 1st camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the 2nd camera.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half width of matching search window.
Default Value : 200
Typical range of values : 50 ≤ ColTolerance ≤ 200
Restriction : ColTolerance ≥ 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Estimate of the relative orientation of the right image with respect to the left image.
Default Value : 0.0
Suggested values : Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Threshold for gray value matching.
Default Value : 10
Suggested values : MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Algorithm for the computation of the essential matrix and for special camera orientations.
Default Value : "normalized_dlt"
List of values : EstimationMethod ∈ {"normalized_dlt", "gold_standard", "trans_normalized_dlt",
"trans_gold_standard"}


. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong


Maximal deviation of a point from its epipolar line.
Default Value : 1
Typical range of values : 0.5 ≤ DistanceThreshold ≤ 5
Restriction : DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Seed for the random number generator.
Default Value : 0
. EMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed essential matrix.
. CovEMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the essential matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Root-Mean-Square of the epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 2.
Parallelization Information
match_essential_matrix_ransac is reentrant and processed without parallelization.
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_essential_matrix
See also
match_fundamental_matrix_ransac, match_rel_pose_ransac,
stationary_camera_self_calibration
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

T_match_fundamental_matrix_ransac ( const Hobject Image1,


const Hobject Image2, const Htuple Rows1, const Htuple Cols1,
const Htuple Rows2, const Htuple Cols2, const Htuple GrayMatchMethod,
const Htuple MaskSize, const Htuple RowMove, const Htuple ColMove,
const Htuple RowTolerance, const Htuple ColTolerance,
const Htuple Rotation, const Htuple MatchThreshold,
const Htuple EstimationMethod, const Htuple DistanceThreshold,
const Htuple RandSeed, Htuple *FMatrix, Htuple *CovFMat,
Htuple *Error, Htuple *Points1, Htuple *Points2 )

Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between
image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, match_fundamental_matrix_ransac automatically finds the correspondences
between the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and all corresponding points
have to fulfill the epipolar constraint, namely:


$$ \begin{pmatrix} Cols2 \\ Rows2 \\ 1 \end{pmatrix}^T \cdot FMatrix \cdot \begin{pmatrix} Cols1 \\ Rows1 \\ 1 \end{pmatrix} = 0 . $$

Note the column/row ordering in the point coordinates: because the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation has to be compliant with the camera
coordinate system. So, (x,y) coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an initial
matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algo-
rithm is applied to find the fundamental matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the fun-
damental matrix FMatrix. It tries to find the matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. If left and right camera are identical and the relative orien-
tation between them is a pure translation then choose EstimationMethod equal to ’trans_normalized_dlt’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed camera
looking onto a moving conveyor belt. In order to get a unique solution in the correspondence problem the min-
imum required number of corresponding points is eight in the general case and three in the special, translational
case.
The fundamental matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as
well the covariance of the fundamental matrix CovFMat. Here, ’normalized_dlt’ and ’gold_standard’ stand for
direct-linear-transformation and gold-standard-algorithm respectively.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.


Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 8) ∨ (length(Rows1) ≥ 3)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 8) ∨ (length(Rows2) ≥ 3)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half width of matching search window.
Default Value : 200
Typical range of values : 50 ≤ ColTolerance ≤ 200
Restriction : ColTolerance ≥ 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Estimate of the relative orientation of the right image with respect to the left image.
Default Value : 0.0
Suggested values : Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Threshold for gray value matching.
Default Value : 10
Suggested values : MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Algorithm for the computation of the fundamental matrix and for special camera orientations.
Default Value : "normalized_dlt"
List of values : EstimationMethod ∈ {"normalized_dlt", "gold_standard", "trans_normalized_dlt",
"trans_gold_standard"}


. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong


Maximal deviation of a point from its epipolar line.
Default Value : 1
Typical range of values : 0.5 ≤ DistanceThreshold ≤ 5
Restriction : DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Seed for the random number generator.
Default Value : 0
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the fundamental matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Root-Mean-Square of the epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 2.
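For orientation, a possible call in C is sketched below. The characteristic point lists Rows1/Cols1 and Rows2/Cols2 are assumed to be the Htuples returned by a point extractor such as T_points_foerstner or T_points_harris for Image1 and Image2; the control parameters are filled with the default values listed above via the tuple helpers described in the introduction of this manual, and the fixed RandSeed of 42 is only chosen to make the result reproducible. Error handling and cleanup are omitted.

Htuple GrayMatchMethod, MaskSize, RowMove, ColMove;
Htuple RowTolerance, ColTolerance, Rotation, MatchThreshold;
Htuple EstimationMethod, DistanceThreshold, RandSeed;
Htuple FMatrix, CovFMat, Error, Points1, Points2;

create_tuple(&GrayMatchMethod, 1);   set_s(GrayMatchMethod, "ssd", 0);
create_tuple(&MaskSize, 1);          set_i(MaskSize, 10, 0);
create_tuple(&RowMove, 1);           set_i(RowMove, 0, 0);
create_tuple(&ColMove, 1);           set_i(ColMove, 0, 0);
create_tuple(&RowTolerance, 1);      set_i(RowTolerance, 200, 0);
create_tuple(&ColTolerance, 1);      set_i(ColTolerance, 200, 0);
create_tuple(&Rotation, 1);          set_d(Rotation, 0.0, 0);
create_tuple(&MatchThreshold, 1);    set_i(MatchThreshold, 10, 0);
create_tuple(&EstimationMethod, 1);  set_s(EstimationMethod, "normalized_dlt", 0);
create_tuple(&DistanceThreshold, 1); set_d(DistanceThreshold, 1.0, 0);
create_tuple(&RandSeed, 1);          set_i(RandSeed, 42, 0);

T_match_fundamental_matrix_ransac(Image1, Image2, Rows1, Cols1, Rows2, Cols2,
                                  GrayMatchMethod, MaskSize, RowMove, ColMove,
                                  RowTolerance, ColTolerance, Rotation,
                                  MatchThreshold, EstimationMethod,
                                  DistanceThreshold, RandSeed,
                                  &FMatrix, &CovFMat, &Error,
                                  &Points1, &Points2);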
Parallelization Information
match_fundamental_matrix_ransac is reentrant and processed without parallelization.
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_fundamental_matrix, gen_binocular_proj_rectification
See also
match_essential_matrix_ransac, match_rel_pose_ransac,
proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

T_match_rel_pose_ransac ( const Hobject Image1, const Hobject Image2,


const Htuple Rows1, const Htuple Cols1, const Htuple Rows2,
const Htuple Cols2, const Htuple CamPar1, const Htuple CamPar2,
const Htuple GrayMatchMethod, const Htuple MaskSize,
const Htuple RowMove, const Htuple ColMove, const Htuple RowTolerance,
const Htuple ColTolerance, const Htuple Rotation,
const Htuple MatchThreshold, const Htuple EstimationMethod,
const Htuple DistanceThreshold, const Htuple RandSeed,
Htuple *RelPose, Htuple *CovRelPose, Htuple *Error, Htuple *Points1,
Htuple *Points2 )

Compute the relative orientation between two cameras by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo
images Image1 and Image2 along with known internal camera parameters CamPar1 and CamPar2,
match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the corre-
spondences between the characteristic points. The geometry of the stereo setup is represented by the relative
pose RelPose and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the rel-
ative pose of camera 1 with respect to camera 2 (See create_pose for more information about poses and


their representations.). This is in accordance with the explicit calibration of a stereo setup using the operator
binocular_calibration. Now, let R, t be the rotation and translation of the relative pose. Then, the essen-
tial matrix $E$ is defined as $E = ([t]_\times R)^T$, where $[t]_\times$ denotes the $3 \times 3$ skew-symmetric matrix realizing the cross
product with the vector $t$. The pose can be determined from the epipolar constraint:
$$ \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot ([t]_\times R)^T \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 \quad \text{where} \quad [t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} . $$

Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can also only be determined up to scale. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction
of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling
factor.
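The following small C helper (purely illustrative, not part of the operator interface) spells out this definition: it builds $[t]_\times$ from the translation and forms $E = ([t]_\times R)^T$, normalizing $t$ to unit length in line with the convention mentioned above; 3×3 matrices are assumed to be stored row-major.

#include <math.h>

/* Build E = ([t]x * R)^T from a row-major rotation matrix R and a
   translation vector t (illustrative helper, row-major storage assumed). */
void essential_from_rel_pose(const double R[9], const double t[3], double E[9])
{
  double n  = sqrt(t[0] * t[0] + t[1] * t[1] + t[2] * t[2]);
  double tx = t[0] / n, ty = t[1] / n, tz = t[2] / n;  /* unit-length t */
  double S[9] = {  0.0,  -tz,   ty,                     /* [t]x */
                    tz,  0.0,  -tx,
                   -ty,   tx,  0.0 };
  int r, c, k;

  for (r = 0; r < 3; r++)
    for (c = 0; c < 3; c++)
    {
      double m = 0.0;                 /* m = ([t]x * R)(r,c) */
      for (k = 0; k < 3; k++)
        m += S[3 * r + k] * R[3 * k + c];
      E[3 * c + r] = m;               /* transpose into E */
    }
}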
The operator match_rel_pose_ransac is designed to deal with a camera model, that includes lens dis-
tortions. This is in contrast to the operator match_essential_matrix_ransac, which encompasses
only straight line preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The
3D direction vectors (X1 , Y1 , 1) and (X2 , Y2 , 1) are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see camera_calibration).
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found this way is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the rel-
ative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as well the
covariance of the relative pose CovRelPose. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-linear-
transformation and gold-standard-algorithm respectively. Note, that in general the found correspondences differ
depending on the deployed estimation method.


The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all
3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution for the
essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by the
operator. This means that the output parameters RelPose, CovRelPose and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2


Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 6) ∨ (length(Rows1) ≥ 3)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 6) ∨ (length(Rows2) ≥ 3)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the 1st camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the 2nd camera.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200


. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong


Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half width of matching search window.
Default Value : 200
Typical range of values : 50 ≤ ColTolerance ≤ 200
Restriction : ColTolerance ≥ 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Estimate of the relative orientation of the right image with respect to the left image.
Default Value : 0.0
Suggested values : Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Threshold for gray value matching.
Default Value : 10
Suggested values : MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Algorithm for the computation of the relative pose and for special pose types.
Default Value : "normalized_dlt"
List of values : EstimationMethod ∈ {"normalized_dlt", "gold_standard", "trans_normalized_dlt",
"trans_gold_standard"}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Maximal deviation of a point from its epipolar line.
Default Value : 1
Typical range of values : 0.5 ≤ DistanceThreshold ≤ 5
Restriction : DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Seed for the random number generator.
Default Value : 0
. RelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Computed relative orientation of the cameras (3D pose).
. CovRelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
6 × 6 covariance matrix of the relative orientation.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Root-Mean-Square of the epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of matched input points in image 2.
Parallelization Information
match_rel_pose_ransac is reentrant and processed without parallelization.
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_rel_pose, gen_binocular_rectification_map
See also
binocular_calibration, match_fundamental_matrix_ransac,
match_essential_matrix_ransac, create_pose
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.


Module
3D Metrology

T_reconst3d_from_fundamental_matrix ( const Htuple Rows1,
const Htuple Cols1, const Htuple Rows2, const Htuple Cols2,
const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1,
const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2,
const Htuple FMatrix, const Htuple CovFMat, Htuple *X, Htuple *Y,
Htuple *Z, Htuple *W, Htuple *CovXYZW )

Compute the projective 3D reconstruction of points based on the fundamental matrix.


A pair of stereo images is called weakly calibrated if the fundamental matrix, which defines the geometric relation
between the two images, is known. Given such a fundamental matrix FMatrix and a set of corresponding points
(Rows1,Cols1) and (Rows2,Cols2), the operator reconst3d_from_fundamental_matrix determines the
three-dimensional space points projecting onto these image points. This 3D reconstruction is purely projective,
and the projective coordinates are returned in the four-vector (X,Y,Z,W). This type of reconstruction is also known
as projective triangulation. If, additionally, the covariances CovRR1, CovRC1, CovCC1 and CovRR2, CovRC2,
CovCC2 of the image points are given, the covariances CovXYZW of the reconstructed points are computed, too.
Let n be the number of points; then the 4 × 4 covariance matrices of the points are concatenated and stored in a
tuple of length 16n. The computation of the covariances is more precise if the covariance CovFMat of the
fundamental matrix is provided.
The operator reconst3d_from_fundamental_matrix is typically used after
match_fundamental_matrix_ransac to perform the 3D reconstruction. This saves computational cost
compared with using vector_to_fundamental_matrix. reconst3d_from_fundamental_matrix is
the projective equivalent of the Euclidean reconstruction operator intersect_lines_of_sight.
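A minimal HALCON/C usage sketch (all variable names are placeholders): it assumes that Rows1, Cols1, Rows2,
Cols2, FMatrix, and CovFMat are tuples that were filled beforehand, e.g. by points_foerstner and
match_fundamental_matrix_ransac, and that the HALCON/C tuple helpers create_tuple and
destroy_tuple are used; empty tuples are passed for the unknown point covariances.

/* Sketch: Rows1, Cols1, Rows2, Cols2, FMatrix, CovFMat are Htuples   */
/* assumed to have been filled by preceding operators.                */
Htuple Empty, X, Y, Z, W, CovXYZW;
Herror err;

create_tuple(&Empty, 0);                /* [] = unknown point covariances */
err = T_reconst3d_from_fundamental_matrix(Rows1, Cols1, Rows2, Cols2,
          Empty, Empty, Empty,          /* CovRR1, CovRC1, CovCC1 */
          Empty, Empty, Empty,          /* CovRR2, CovRC2, CovCC2 */
          FMatrix, CovFMat, &X, &Y, &Z, &W, &CovXYZW);
/* (X,Y,Z,W) now hold the projective coordinates of the points. */
destroy_tuple(Empty);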
Parameter

. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input points in image 1 (row coordinate).
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input points in image 1 (column coordinate).
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input points in image 2 (row coordinate).
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input points in image 2 (column coordinate).
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Row coordinate variance of the points in image 1.
Default Value : []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Covariance of the points in image 1.
Default Value : []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Column coordinate variance of the points in image 1.
Default Value : []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Row coordinate variance of the points in image 2.
Default Value : []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Covariance of the points in image 2.
Default Value : []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Column coordinate variance of the points in image 2.
Default Value : []
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Fundamental matrix.

. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
9 × 9 covariance matrix of the fundamental matrix.
Default Value : []
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
X coordinates of the reconstructed points in projective 3D space.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Y coordinates of the reconstructed points in projective 3D space.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Z coordinates of the reconstructed points in projective 3D space.
. W (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
W coordinates of the reconstructed points in projective 3D space.
. CovXYZW (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Covariance matrices of the reconstructed points.
Parallelization Information
reconst3d_from_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
match_fundamental_matrix_ransac
Alternatives
vector_to_fundamental_matrix, intersect_lines_of_sight
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Module
3D Metrology

T_rel_pose_to_fundamental_matrix ( const Htuple RelPose,
const Htuple CovRelPose, const Htuple CamPar1, const Htuple CamPar2,
Htuple *FMatrix, Htuple *CovFMat )

Compute the fundamental matrix from the relative orientation of two cameras.
Cameras including lens distortions can be modeled by the following set of parameters: the focal length f, two
scaling factors s_x, s_y, the coordinates of the principal point (c_x, c_y), and the distortion coefficient κ. For a more
detailed description see the operator camera_calibration. Only cameras with a distortion coefficient equal
to zero project straight lines in the world onto straight lines in the image. Then, image projection is a linear
mapping and the camera, i.e. the set of internal parameters, can be described by the camera matrix CamMat:

CamMat = \begin{pmatrix} f/s_x & 0 & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
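As a purely illustrative numeric instance (hypothetical values, not defaults): for f = 0.008 m,
s_x = s_y = 8.3 µm, and (c_x, c_y) = (320, 240), the ratio f/s_x = f/s_y = 0.008 / (8.3 · 10^{-6}) ≈ 963.9, so that

CamMat ≈ \begin{pmatrix} 963.9 & 0 & 320 \\ 0 & 963.9 & 240 \\ 0 & 0 & 1 \end{pmatrix}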

Going from a nonlinear model to a linear model is an approximation of the real underlying camera. For a variety of
camera lenses, especially lenses with long focal length, the error induced by this approximation can be neglected.
Following the formula E = ([t]_\times R)^T, the essential matrix E is derived from the translation t and the rotation
R of the relative pose RelPose (see also operator vector_to_rel_pose). In the linearized framework the
fundamental matrix can be calculated from the relative pose and the camera matrices according to the formula
presented under essential_to_fundamental_matrix:

FMatrix = CamMat2^{-T} \cdot ([t]_\times R)^T \cdot CamMat1^{-1}

The transformation from a relative pose to a fundamental matrix goes along with the propagation of the covariance
matrix CovRelPose to CovFMat. If CovRelPose is empty, CovFMat will be empty, too.
The conversion operator rel_pose_to_fundamental_matrix is used especially for a subsequent
visualization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
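A minimal HALCON/C sketch of this conversion (variable names are placeholders): RelPose and CovRelPose are
assumed to come from a preceding call to vector_to_rel_pose, and CamPar1, CamPar2 are the tuples of
internal camera parameters obtained from the calibration.

/* Sketch: RelPose, CovRelPose, CamPar1, CamPar2 are Htuples assumed */
/* to have been filled beforehand, e.g. by T_vector_to_rel_pose.     */
Htuple FMatrix, CovFMat;
Herror err;

err = T_rel_pose_to_fundamental_matrix(RelPose, CovRelPose,
                                       CamPar1, CamPar2,
                                       &FMatrix, &CovFMat);
/* FMatrix can now be used to visualize the epipolar line structure. */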

Parameter
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Relative orientation of the cameras (3D pose).
. CovRelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
6 × 6 covariance matrix of relative pose.
Default Value : []
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the first camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the second camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
rel_pose_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_rel_pose
Alternatives
essential_to_fundamental_matrix
See also
camera_calibration
Module
3D Metrology

T_vector_to_essential_matrix ( const Htuple Rows1,
const Htuple Cols1, const Htuple Rows2, const Htuple Cols2,
const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1,
const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2,
const Htuple CamMat1, const Htuple CamMat2, const Htuple Method,
Htuple *EMatrix, Htuple *CovEMat, Htuple *Error, Htuple *X, Htuple *Y,
Htuple *Z, Htuple *CovXYZ )

Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D
points.
For a stereo configuration with known camera matrices, the geometric relation between the two images is
defined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix
EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint:

\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0

The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is
in contrast to the operator vector_to_rel_pose, which encompasses lens distortions, too. The internal camera
parameters are passed by the arguments CamMat1 and CamMat2, which are 3 × 3 upper triangular matrices
describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the camera
to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is:

\begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
\quad \text{where} \quad
CamMat = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix}

The focal length is denoted by f, s_x and s_y are scaling factors, s describes a skew factor, and (c_x, c_y) indicates
the principal point. Mainly, these are the elements known from the camera parameters as used for example in
camera_calibration. Alternatively, the elements of the camera matrix can be described in a different way,
see e.g. stationary_camera_self_calibration.
The point correspondences (Rows1,Cols1) and (Rows2,Cols2) are typically found by applying the operator
match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera ma-
trices results in the 3D direction vectors, which can then be inserted in the epipolar constraint.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here,
’normalized_dlt’ and ’gold_standard’ stand for the direct linear transformation and the gold standard algorithm,
respectively. All methods return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also
return the covariances of the 3D points in CovXYZ. Let n be the number of points; then the 3 × 3 covariance
matrices are concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the
covariance CovEMat of the essential matrix.
If an optimal gold standard algorithm is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated into the computation. They can be provided, for example, by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists:
if all 3D points lie in a single plane and, additionally, are all closer to one of the two cameras, the solution for
the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by
the operator. This means that all output parameters are of double length, and the values of the second solution are
simply concatenated behind the values of the first one.
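A minimal HALCON/C sketch of a typical call (variable names are placeholders): the point tuples are assumed to
come from match_essential_matrix_ransac, the camera matrices CamMat1 and CamMat2 are assumed to
be known, and the HALCON/C tuple helpers create_tuple, set_s, and destroy_tuple are used; empty tuples
stand in for the unknown point covariances.

/* Sketch: Rows1, Cols1, Rows2, Cols2, CamMat1, CamMat2 are Htuples  */
/* assumed to have been filled by preceding operators.               */
Htuple Empty, Method, EMatrix, CovEMat, Error, X, Y, Z, CovXYZ;
Herror err;

create_tuple(&Empty, 0);              /* [] = unknown point covariances */
create_tuple(&Method, 1);
set_s(Method, "gold_standard", 0);    /* statistically optimal method   */
err = T_vector_to_essential_matrix(Rows1, Cols1, Rows2, Cols2,
          Empty, Empty, Empty,        /* CovRR1, CovRC1, CovCC1 */
          Empty, Empty, Empty,        /* CovRR2, CovRC2, CovCC2 */
          CamMat1, CamMat2, Method,
          &EMatrix, &CovEMat, &Error, &X, &Y, &Z, &CovXYZ);
destroy_tuple(Empty);
destroy_tuple(Method);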
Parameter

. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points in image 1 (row coordinate).
Restriction : (length(Rows1) ≥ 6) ∨ (length(Rows1) ≥ 2)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points in image 1 (column coordinate).
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points in image 2 (row coordinate).
Restriction : length(Rows2) = length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points in image 2 (column coordinate).
Restriction : length(Cols2) = length(Rows1)
. CovR
