HALCON/C
Reference Manual
This manual describes the operators of HALCON, version 8.0.2, in C syntax. It was generated on May 13, 2008.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written
permission of the publisher.
Copyright © 1997–2008 by MVTec Software GmbH, München, Germany
1 Classification 1
1.1 Gaussian-Mixture-Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
add_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
classify_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
clear_all_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
clear_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
create_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
evaluate_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
get_params_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
get_prep_info_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
get_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
get_sample_num_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
read_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
train_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
write_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
write_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
clear_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
close_all_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
close_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
create_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
descript_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
enquire_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
enquire_reject_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
get_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
learn_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
learn_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
read_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
read_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
test_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
write_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
add_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
classify_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
clear_all_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
clear_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
clear_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
create_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
evaluate_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
get_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
get_prep_info_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
get_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
get_sample_num_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
read_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
read_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
train_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
write_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
write_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
add_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
classify_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
clear_all_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
clear_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
clear_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
create_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
get_params_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
get_prep_info_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
get_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
get_sample_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_support_vector_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
get_support_vector_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
read_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
read_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
reduce_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
train_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
write_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
write_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2 File 61
2.1 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
read_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
read_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
write_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
delete_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
file_exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
list_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
read_world_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.3 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
read_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
write_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.4 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
close_all_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
close_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
fnew_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
fread_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
fread_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
fread_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
fwrite_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.5 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
read_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
write_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.6 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
read_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
read_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
read_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
read_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
write_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
write_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
write_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
write_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3 Filter 87
3.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
abs_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
add_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
div_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
invert_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
max_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
min_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
mult_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
scale_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
sqrt_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
sub_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_lshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
bit_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
bit_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
bit_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
bit_rshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
bit_slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
bit_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
cfa_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
gen_principal_comp_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
linear_trans_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
principal_comp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
rgb1_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
rgb3_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
trans_from_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
trans_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
close_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
close_edges_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
derivate_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
diff_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
edges_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
edges_color_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
edges_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
edges_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
frei_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
frei_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
highpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
info_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
kirsch_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
kirsch_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
laplace_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
prewitt_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
prewitt_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
robinson_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
robinson_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
sobel_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
sobel_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
adjust_mosaic_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
coherence_enhancing_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
equ_histo_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
mean_curvature_flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
scale_image_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
shock_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
3.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
convol_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
convol_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
energy_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
fft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
fft_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
fft_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
gen_bandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
gen_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
gen_derivative_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
gen_filter_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
gen_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
gen_gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
gen_highpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
gen_lowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
gen_sin_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
gen_std_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
optimize_fft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
optimize_rft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
phase_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
phase_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
power_byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
power_ln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
power_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
read_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
rft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
write_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
3.7 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
affine_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
affine_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
gen_bundle_adjusted_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
gen_cube_map_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
gen_projective_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
gen_spherical_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
map_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
mirror_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
polar_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
polar_trans_image_ext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
polar_trans_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
projective_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
projective_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
rotate_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
zoom_image_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
zoom_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
3.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
harmonic_interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
inpainting_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
inpainting_ced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
inpainting_ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
inpainting_mcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
inpainting_texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
3.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
bandpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
lines_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
lines_facet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
lines_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
exhaustive_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
exhaustive_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
gen_gauss_pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
convol_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
expand_domain_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
gray_inside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
gray_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
lut_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
topographic_sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
3.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
add_noise_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
add_noise_white . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
gauss_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
noise_distribution_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
sp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
3.13 Optical-Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
optical_flow_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
unwarp_image_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
vector_field_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
3.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
corner_response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
dots_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
points_foerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
points_harris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
points_sojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
3.15 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
anisotrope_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
anisotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
binomial_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
eliminate_min_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
eliminate_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
fill_interlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
gauss_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
info_smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
isotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
mean_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
mean_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
mean_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
median_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
median_separate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
median_weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
midrange_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
rank_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
sigma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
smooth_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
trimmed_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
3.16 Texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
deviation_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
entropy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
texture_laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
3.17 Wiener-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
gen_psf_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
gen_psf_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
simulate_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
simulate_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
wiener_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
wiener_filter_ni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
4 Graphics 301
4.1 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
drag_region1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
drag_region2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
drag_region3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
draw_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
draw_circle_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
draw_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
draw_ellipse_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
draw_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
draw_line_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
draw_nurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
draw_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
draw_nurbs_interp_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
draw_nurbs_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
draw_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
draw_point_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
draw_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
draw_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
draw_rectangle1_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
draw_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
draw_rectangle2_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
draw_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
draw_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
draw_xld_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
4.2 Gnuplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
gnuplot_open_pipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
gnuplot_plot_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
gnuplot_plot_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
gnuplot_plot_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
4.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
disp_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
draw_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
get_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
get_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
get_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
query_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
set_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
set_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
write_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
4.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
get_mbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
get_mposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
get_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
query_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
set_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
4.5 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
disp_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
disp_arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
disp_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
disp_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
disp_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
disp_cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
disp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
disp_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
disp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
disp_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
disp_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
disp_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
disp_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
disp_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
disp_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
disp_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
4.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
get_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
get_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
get_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
get_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
get_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
get_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
get_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
get_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
get_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
get_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
get_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
get_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
get_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
get_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
get_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
query_all_colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
query_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
query_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
query_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
query_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
query_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
query_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
query_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
set_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
set_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
set_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
set_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
set_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
set_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
set_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
set_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
set_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
set_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
set_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
4.7 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_string_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
get_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
get_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
query_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
query_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
read_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
read_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
set_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
set_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
set_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
write_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
4.8 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
clear_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
copy_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
dump_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
dump_window_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
get_os_window_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
get_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
get_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
get_window_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
get_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
move_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
new_extern_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
open_textwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
query_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
set_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
set_window_dc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
set_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
slide_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
5 Image 433
5.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
get_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
get_image_pointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
get_image_pointer1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
get_image_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
get_image_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
5.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
close_all_framegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
close_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
get_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
get_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
grab_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
grab_data_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
grab_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
grab_image_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
grab_image_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
info_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
open_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
set_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
set_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
5.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
access_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
append_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
channels_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
count_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
image_to_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
5.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
copy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
gen_image1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
gen_image1_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
gen_image1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
gen_image3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
gen_image_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
gen_image_gray_ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
gen_image_interleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
gen_image_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
gen_image_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
gen_image_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
region_to_bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
region_to_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
region_to_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
5.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
add_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
change_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
full_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
get_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
rectangle1_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
reduce_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
5.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
area_center_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
cooc_feature_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
cooc_feature_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
elliptic_axis_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
entropy_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
estimate_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
fit_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
fit_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
fuzzy_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
fuzzy_perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
gen_cooc_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
gray_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
gray_histo_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
gray_projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
histo_2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
min_max_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
moments_gray_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
plane_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
select_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
shape_histo_all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
shape_histo_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
5.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
change_format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
crop_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
crop_domain_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
crop_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
crop_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
tile_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
tile_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
tile_images_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
5.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
overpaint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
overpaint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
paint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
paint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
paint_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
set_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
5.9 Type-Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
complex_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
convert_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
real_to_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
real_to_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
vector_field_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
6 Lines 529
6.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
approx_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
approx_chain_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
6.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
line_position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
partition_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
select_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
select_lines_longest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
7 Matching 541
7.1 Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_all_component_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_all_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
clear_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
clear_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
cluster_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
create_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
create_trained_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
find_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
gen_initial_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
get_component_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_component_model_tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
get_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
get_found_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
get_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
inspect_clustered_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
modify_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
read_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
read_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
train_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
write_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
write_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
7.2 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
clear_all_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
clear_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
create_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
find_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
get_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
get_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
read_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
set_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
write_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
7.3 Gray-Value-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
adapt_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
best_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
best_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
best_match_pre_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
best_match_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
best_match_rot_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
clear_all_templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
clear_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
create_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
create_template_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
fast_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
fast_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
read_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
set_offset_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
set_reference_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
write_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
7.4 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
clear_all_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
clear_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
create_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
create_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
create_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
determine_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
find_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
find_aniso_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
find_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
find_scaled_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
find_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
find_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
get_shape_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
get_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
get_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
inspect_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
read_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
set_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
write_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
8 Matching-3D 647
affine_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
clear_all_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_all_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
clear_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
convert_point_3d_cart_to_spher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
convert_point_3d_spher_to_cart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
create_cam_pose_look_at_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
create_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
find_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
get_object_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
get_shape_model_3d_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
get_shape_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
project_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
project_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
read_object_model_3d_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
read_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
trans_pose_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
write_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
9 Morphology 673
9.1 Gray-Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
dual_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
gen_disc_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
gray_bothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
gray_closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
gray_closing_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
gray_closing_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
gray_dilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
gray_dilation_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
gray_dilation_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
gray_erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
gray_erosion_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
gray_erosion_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
gray_opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
gray_opening_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
gray_opening_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
gray_range_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
gray_tophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
read_gray_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
9.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
bottom_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
closing_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
closing_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
closing_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
dilation_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
dilation_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
dilation_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
dilation_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
erosion_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
erosion_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
erosion_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
erosion_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
gen_struct_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
golay_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
hit_or_miss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
hit_or_miss_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
hit_or_miss_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
minkowski_add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
minkowski_add2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
minkowski_sub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
minkowski_sub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
morph_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
morph_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
morph_skiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
opening_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
opening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
opening_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
opening_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
thickening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
thickening_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
thinning_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
thinning_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
top_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
10 OCR 743
10.1 Hyperboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_all_ocrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
close_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
create_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
do_ocr_multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
do_ocr_single . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
info_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
ocr_change_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
ocr_get_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
read_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
testd_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
traind_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
trainf_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
write_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
10.2 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_all_lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
clear_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
create_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
import_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
inspect_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
lookup_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
suggest_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
10.3 Neural-Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
clear_all_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
clear_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
create_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
do_ocr_multi_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
do_ocr_single_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
do_ocr_word_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
get_features_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
get_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
get_prep_info_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
read_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
trainf_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
write_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
10.4 Support-Vector-Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
clear_all_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
clear_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
create_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
do_ocr_multi_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
do_ocr_single_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
do_ocr_word_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
get_features_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
get_params_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
get_prep_info_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
get_support_vector_num_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
get_support_vector_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
read_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
reduce_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
trainf_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
write_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.5 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
segment_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
select_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
text_line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
text_line_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
10.6 Training-Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
append_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
concat_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
read_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
read_ocr_trainf_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
read_ocr_trainf_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
write_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
write_ocr_trainf_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
11 Object 801
11.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
count_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
get_channel_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
get_obj_class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
test_equal_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
test_obj_def . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
11.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
concat_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
copy_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
gen_empty_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
integer_to_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
obj_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
select_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
12 Regions 811
12.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
get_region_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
get_region_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
get_region_convex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
get_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
get_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
get_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
12.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
gen_checker_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
gen_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
gen_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
gen_empty_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
gen_grid_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
gen_random_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
gen_random_regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
gen_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
gen_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
gen_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
gen_region_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
gen_region_hline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
gen_region_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
gen_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
gen_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
gen_region_polygon_filled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
gen_region_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
gen_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
label_to_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
12.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
area_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
connect_and_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
diameter_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
elliptic_axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
euler_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
find_neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
get_region_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
get_region_thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
hamming_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
hamming_distance_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
inner_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
inner_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
moments_region_2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
moments_region_2nd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
moments_region_2nd_rel_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
moments_region_3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
moments_region_3rd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
moments_region_central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
moments_region_central_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
orientation_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
runlength_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
runlength_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
select_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
select_region_spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 865
select_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
select_shape_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
select_shape_std . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
smallest_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
smallest_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
smallest_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
spatial_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
12.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
affine_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
mirror_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
move_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
polar_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
polar_trans_region_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
projective_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
transpose_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
zoom_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
12.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
symm_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
12.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
test_equal_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890
test_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
test_subset_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
12.7 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
background_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
clip_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
clip_region_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
distance_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
eliminate_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
expand_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
fill_up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
fill_up_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
hamming_change_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
junctions_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
merge_regions_line_scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
partition_dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
partition_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
rank_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
remove_noise_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
shape_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
sort_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
split_skeleton_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
split_skeleton_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
13 Segmentation 917
13.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
add_samples_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
add_samples_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
add_samples_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
class_2dim_sup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
class_2dim_unsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
class_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
class_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
classify_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
classify_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
classify_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
learn_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
learn_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
13.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
detect_edge_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
hysteresis_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
nonmax_suppression_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 934
nonmax_suppression_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
13.3 Regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
expand_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
expand_gray_ref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
expand_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
regiongrowing_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
regiongrowing_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 943
13.4 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
auto_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
bin_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
char_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
check_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
dual_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
dyn_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
fast_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
histo_to_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
threshold_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
var_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
zero_crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
zero_crossing_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
13.5 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
critical_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
local_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
local_max_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
local_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
local_min_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
lowlands_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
plateaus_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
saddle_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
watersheds_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
14 System 975
14.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
count_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
get_modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
reset_obj_db . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
14.2 Error-Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_error_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
get_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
query_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
set_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
14.3 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_chapter_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 984
get_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
get_operator_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
get_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
get_param_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
get_param_num . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
get_param_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
query_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
query_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
search_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
14.4 Operating-System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
count_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
system_call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
wait_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
14.5 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
check_par_hw_potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
load_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
store_par_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
14.6 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
14.7 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
clear_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
close_all_serials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
close_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
get_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
open_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1008
read_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
set_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
write_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
14.8 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
close_socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
get_next_socket_data_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
get_socket_descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
get_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
open_socket_accept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
open_socket_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
receive_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1015
receive_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
receive_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
receive_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
send_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
send_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
send_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
send_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
set_socket_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
socket_accept_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
15 Tools 1023
15.1 2D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
affine_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
affine_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1024
bundle_adjust_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
hom_mat2d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
hom_mat2d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
hom_mat2d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
hom_mat2d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
hom_mat2d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
hom_mat2d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031
hom_mat2d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
hom_mat2d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033
hom_mat2d_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
hom_mat2d_slant_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036
hom_mat2d_to_affine_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
hom_mat2d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
hom_mat2d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
hom_mat2d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
hom_mat3d_project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
hom_vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
proj_match_points_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
projective_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
projective_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
vector_angle_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
vector_field_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
vector_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1051
vector_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
vector_to_similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1054
15.2 3D-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
affine_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
convert_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
create_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
get_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
hom_mat3d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
hom_mat3d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
hom_mat3d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
hom_mat3d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
hom_mat3d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
hom_mat3d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
hom_mat3d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1068
hom_mat3d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
hom_mat3d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
hom_mat3d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072
read_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
set_origin_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1074
write_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
15.3 Background-Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
close_all_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
close_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
create_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
get_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
give_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
run_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
set_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
update_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
15.4 Barcode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
clear_all_bar_code_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
clear_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
create_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088
find_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
get_bar_code_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090
get_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
get_bar_code_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
set_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
15.5 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
caltab_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
cam_mat_to_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
cam_par_to_cam_mat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
camera_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099
change_radial_distortion_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
change_radial_distortion_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
change_radial_distortion_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108
contour_to_world_plane_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
create_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1110
disp_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
find_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
find_marks_and_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
gen_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117
gen_image_to_world_plane_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
gen_radial_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
get_circle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124
get_line_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
get_rectangle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
hand_eye_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
image_points_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
image_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138
project_3d_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1140
radiometric_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
read_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
sim_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
stationary_camera_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
write_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
15.6 Datacode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
clear_all_data_code_2d_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
clear_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
create_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
find_data_code_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
get_data_code_2d_objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164
get_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
get_data_code_2d_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
query_data_code_2d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
read_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
set_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
write_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
15.7 Fourier-Descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
abs_invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
fourier_1dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
fourier_1dim_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
invar_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
match_fourier_coeff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
move_contour_orig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
prep_contour_fourier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1188
15.8 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
abs_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
compose_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
create_funct_1d_array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
create_funct_1d_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
derivate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
distance_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
funct_1d_to_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
get_pair_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
get_y_value_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
integrate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
invert_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
local_min_max_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
match_funct_1d_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
negate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
num_points_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
read_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
sample_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
scale_y_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
smooth_funct_1d_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
smooth_funct_1d_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
transform_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
write_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
x_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
y_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
zero_crossings_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
15.9 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
angle_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
angle_lx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
distance_cc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
distance_cc_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
distance_lc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
distance_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
distance_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
distance_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
distance_pp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
distance_pr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
distance_ps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
distance_rr_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
distance_rr_min_dil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
distance_sc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
distance_sl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
distance_sr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
distance_ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
get_points_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
intersection_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
projection_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
15.10 Grid-Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
connect_grid_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
create_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
find_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
gen_arbitrary_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
gen_grid_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
15.11 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_circle_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
hough_line_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
hough_line_trans_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
hough_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
hough_lines_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
select_matching_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
15.12 Image-Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
clear_all_variation_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
clear_train_data_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
clear_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
compare_ext_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
compare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
create_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
get_thresh_images_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239
get_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
prepare_direct_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
prepare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
read_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
train_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
write_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
15.13 Kalman-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
filter_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
read_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
sensor_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
update_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
15.14 Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
close_all_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
close_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
fuzzy_measure_pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
fuzzy_measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
fuzzy_measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1260
gen_measure_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
gen_measure_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268
measure_projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1269
measure_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1270
reset_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
set_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
set_fuzzy_measure_norm_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
translate_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
15.15 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
close_all_ocvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
close_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
create_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
do_ocv_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
read_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
traind_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
write_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
15.16 Shape-from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
depth_from_focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
estimate_al_am . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
estimate_sl_al_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
estimate_sl_al_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
estimate_tilt_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
estimate_tilt_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
phot_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
select_grayvalues_from_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
sfs_mod_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
sfs_orig_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
sfs_pentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
shade_height_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
15.17 Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
binocular_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
binocular_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
binocular_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
disparity_to_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
disparity_to_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
distance_to_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
essential_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
gen_binocular_proj_rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
gen_binocular_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
intersect_lines_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1310
match_essential_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1311
match_fundamental_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
match_rel_pose_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
reconst3d_from_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
rel_pose_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
vector_to_essential_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1323
vector_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1325
vector_to_rel_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1328
15.18 Tools-Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
decode_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
decode_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1331
discrete_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
find_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
find_1d_bar_code_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
find_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
find_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
gen_1d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1344
gen_1d_bar_code_descr_gen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1346
gen_2d_bar_code_descr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
get_1d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
get_1d_bar_code_scanline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
get_2d_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
get_2d_bar_code_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
16 Tuple 1359
16.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_acos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1359
tuple_add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_asin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
tuple_atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_atan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
tuple_ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
tuple_cumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1363
tuple_div . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
tuple_exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364
tuple_fabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_fmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
tuple_ldexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
tuple_log10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_max2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1367
tuple_min2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
tuple_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
tuple_mult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_neg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
tuple_pow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
tuple_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
tuple_sgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
tuple_sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_sub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1372
tuple_tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
tuple_tanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1373
16.2 Bit-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_bnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1374
tuple_bor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_bxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
tuple_lsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376
tuple_rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376
16.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1377
tuple_greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
tuple_less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
tuple_not_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1379
16.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_chr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_chrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
tuple_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
tuple_ord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_ords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
tuple_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
tuple_round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
tuple_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
16.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
tuple_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
tuple_gen_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
tuple_rand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
16.6 Element-Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
tuple_sort_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
16.7 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1388
tuple_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389
tuple_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
tuple_sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
16.8 Logical-Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1391
tuple_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
tuple_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
16.9 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
tuple_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
tuple_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
tuple_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1395
tuple_select_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
tuple_select_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1396
tuple_str_bit_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
tuple_uniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
16.10 String-Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_regexp_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
tuple_regexp_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1400
tuple_regexp_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
tuple_regexp_test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
tuple_split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
tuple_str_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
tuple_str_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
tuple_strchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1404
tuple_strlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
tuple_strrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
tuple_strrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
tuple_strstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
17 XLD 1409
17.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_lines_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1409
get_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
get_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
17.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
gen_contour_nurbs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1411
gen_contour_polygon_rounded_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1413
gen_contour_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1414
gen_contour_region_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
gen_contours_skeleton_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416
gen_cross_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
gen_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
gen_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
gen_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
gen_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
mod_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
17.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
area_center_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
area_center_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
circularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
compactness_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
contour_point_num_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
convexity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
diameter_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
dist_ellipse_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
dist_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
dist_rectangle2_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
eccentricity_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
eccentricity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1433
elliptic_axis_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1434
elliptic_axis_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
fit_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
fit_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1439
fit_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
fit_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
get_contour_angle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
get_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_contour_global_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_regress_params_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
info_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
length_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
local_max_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
max_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
moments_any_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
moments_any_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
moments_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
moments_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
orientation_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
orientation_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
query_contour_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
query_contour_global_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
select_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
select_shape_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1460
select_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1462
smallest_circle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
smallest_rectangle1_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1464
smallest_rectangle2_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1465
test_self_intersection_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1466
test_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
17.4 Geometric-Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
affine_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
affine_trans_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
gen_parallel_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
polar_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
polar_trans_contour_xld_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
projective_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
17.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
intersection_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
intersection_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478
symm_difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
symm_difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1480
union2_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1481
union2_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
17.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
add_noise_white_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
clip_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
close_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
combine_roads_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
crop_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
merge_cont_line_scan_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1487
regress_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
segment_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1489
shape_trans_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
smooth_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
sort_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493
split_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
union_adjacent_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
union_cocircular_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1496
union_collinear_contours_ext_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
union_collinear_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499
union_straight_contours_histo_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
union_straight_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
Index 1505
Chapter 1
Classification
1.1 Gaussian-Mixture-Models
T_add_sample_class_gmm ( const Htuple GMMHandle,
const Htuple Features, const Htuple ClassID, const Htuple Randomize )
clear_all_class_gmm ( )
T_clear_all_class_gmm ( )
Parameter
exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: Alternatingly, every first parameter determines the minimum number of centers and every second parameter determines the maximum number of centers for the respective class.
When upper and lower bounds are specified, the optimum number of centers is determined with the help of the Minimum Message Length criterion (MML). In general, we recommend starting the training with (too) many centers as the maximum and the expected number of centers as the minimum.
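The three accepted forms of the NumCenters specification can be normalized into per-class (min, max) pairs. The following helper is purely illustrative (it is not a HALCON function; the name and signature are invented for this sketch):

```c
/* Expand a NumCenters specification of length 1, 2, or 2*num_classes
 * into per-class (min, max) pairs. Returns 0 on success, -1 if the
 * length matches none of the three accepted forms. */
static int expand_num_centers(const int *spec, int len, int num_classes,
                              int *min_out, int *max_out)
{
    for (int c = 0; c < num_classes; ++c) {
        if (len == 1) {                      /* exact number for all classes */
            min_out[c] = max_out[c] = spec[0];
        } else if (len == 2) {               /* one (min, max) for all classes */
            min_out[c] = spec[0];
            max_out[c] = spec[1];
        } else if (len == 2 * num_classes) { /* (min, max) per class */
            min_out[c] = spec[2 * c];
            max_out[c] = spec[2 * c + 1];
        } else {
            return -1;
        }
    }
    return 0;
}
```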
Each center is described by the parameters center $m_j$, covariance matrix $C_j$, and mixing coefficient $P_j$. These parameters are calculated from the training data by means of the expectation maximization (EM) algorithm. A GMM can approximate an arbitrary probability density, provided that enough centers are being used. The covariance matrices $C_j$ have the dimensions NumDim × NumDim (NumComponents × NumComponents if preprocessing is used) and are symmetric. Further constraints can be given by CovarType:
For CovarType = ’spherical’, $C_j$ is a scalar multiple of the identity matrix, $C_j = s_j^2 I$. The center density function $p(x|j)$ is

$$p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left(-\frac{\|x - m_j\|^2}{2 s_j^2}\right)$$
For CovarType = ’diag’, $C_j$ is a diagonal matrix, $C_j = \mathrm{diag}(s_{j,1}^2, \ldots, s_{j,d}^2)$. The center density function $p(x|j)$ is

$$p(x|j) = \frac{1}{(2\pi)^{d/2} \left(\prod_{i=1}^{d} s_{j,i}^2\right)^{1/2}} \exp\left(-\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2}\right)$$
For CovarType = ’full’, $C_j$ is a positive definite matrix. The center density function $p(x|j)$ is

$$p(x|j) = \frac{1}{(2\pi)^{d/2} |C_j|^{1/2}} \exp\left(-\frac{1}{2}(x - m_j)^T C_j^{-1} (x - m_j)\right)$$
The complexity of the calculations increases from CovarType = ’spherical’ through ’diag’ to ’full’. At the same time, the flexibility of the centers increases. In general, ’spherical’ therefore needs higher values of NumCenters than ’full’.
The procedure for using a GMM is as follows: First, a GMM is created with create_class_gmm. Then, training vectors are added with add_sample_class_gmm; afterwards, they can be written to disk with write_samples_class_gmm. With train_class_gmm, the classifier center parameters (defined above) are determined. Furthermore, they can be saved with write_class_gmm for later classification.
From the mixing probabilities $P_j$ and the center density functions $p(x|j)$, the probability density function $p(x)$ can be calculated as

$$p(x) = \sum_{j=1}^{ncomp} P(j)\, p(x|j)$$
The probability density function $p(x)$ can be evaluated with evaluate_class_gmm for a feature vector $x$. classify_class_gmm sorts the resulting class probabilities and thereby determines the most probable class of the feature vector.
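The two steps just described — summing the weighted center densities and picking the class with the highest probability — can be sketched in plain C. These are illustrative helpers, not HALCON operators:

```c
/* Evaluate the mixture density p(x) = sum_j P(j) * p(x|j) from
 * precomputed center densities p(x|j). */
static double mixture_density(const double *prior,
                              const double *center_density, int ncomp)
{
    double p = 0.0;
    for (int j = 0; j < ncomp; ++j)
        p += prior[j] * center_density[j];
    return p;
}

/* Pick the most probable class as the argmax over the per-class
 * probabilities (what a classification step ultimately does). */
static int most_probable_class(const double *class_prob, int nclasses)
{
    int best = 0;
    for (int i = 1; i < nclasses; ++i)
        if (class_prob[i] > class_prob[best])
            best = i;
    return best;
}
```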
The parameters Preprocessing and NumComponents can be used to preprocess the training data and reduce its dimensionality. These parameters are explained in the description of the operator create_class_mlp.
create_class_gmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with train_class_gmm are reproducible, the seed value of the random number generator
is passed in RandSeed.
Example

* Add the training samples (fragment; loop bounds are illustrative)
for J := 0 to NumSamples - 1 by 1
    Class := [...]
    add_sample_class_gmm (GMMHandle, Features, Class)
endfor
* Train the GMM
train_class_gmm (GMMHandle, 100, 0.001, 0, Centers, Iter)
* Classify unknown data in 'Features'
classify_class_gmm (GMMHandle, Features, 1, ClassProb, Density, KSigmaProb)
clear_class_gmm (GMMHandle)
Result
If the parameters are valid, the operator create_class_gmm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
create_class_gmm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_gmm, add_samples_image_class_gmm
Alternatives
create_class_mlp, create_class_svm, create_class_box
See also
clear_class_gmm, train_class_gmm, classify_class_gmm, evaluate_class_gmm,
classify_image_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
$$p(i|x) = \sum_{j=1}^{ncomp} P(j)\, p(x|j)$$

and returned for each class in ClassProb. The formulas for the calculation of the center density function $p(x|j)$ are described with create_class_gmm.
The probability density of the feature vector is computed as a sum of the posterior class probabilities

p(x) = sum_{i=1}^{nclasses} Pr(i) p(i|x)

and is returned in Density. Here, Pr(i) are the prior class probabilities as computed by
train_class_gmm. Density can be used for novelty detection, i.e., to reject feature vectors that do not
belong to any of the trained classes. However, since Density depends on the scaling of the feature vectors
and since Density is a probability density, and consequently does not need to lie between 0 and 1, the novelty
detection can typically be performed more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as a locus of points for which
(x − µ)^T C^(−1) (x − µ) = k^2

In the one-dimensional case this is the interval [µ − kσ, µ + kσ]. For any 1D Gaussian distribution, approximately 68% of the occurrences of the random variable lie within this range for k = 1, approximately 95% for k = 2, and approximately 99% for k = 3. Hence, the probability that a Gaussian distribution will generate a random variable outside this range is approximately 32%, 5%, and 1%, respectively. This probability is called the k-sigma probability and is denoted by P[k]. P[k] can be computed numerically for univariate as well as for multivariate Gaussian distributions, where it should be noted that for the same value of k, P^(N)[k] > P^(N+1)[k] (here N and N+1 denote dimensions). For Gaussian mixture models the k-sigma probability is computed as:
P_GMM[x] = sum_{j=1}^{ncomp} P(j) P_j[k_j],  where  k_j^2 = (x − µ_j)^T C_j^(−1) (x − µ_j)

These are then weighted with the class priors, normalized, and returned for each class in KSigmaProb, such that

KSigmaProb[i] = (Pr(i) / Pr_max) · P_GMM[x]
KSigmaProb can be used for novelty detection. Typically, feature vectors having values below 0.0001 should
be rejected. The parameter RejectionThreshold in classify_image_class_gmm is based on the
KSigmaProb values of the features.
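For the one-dimensional case, the percentages quoted above can be checked directly with the error function. The sketch below is illustrative only (the function names are not HALCON API); it uses the tail probability outside the k-sigma interval, which is the quantity that becomes small for outliers and is therefore the one useful for novelty detection:

```python
import math

def p_inside_1d(k):
    """Fraction of a 1D Gaussian within [mu - k*sigma, mu + k*sigma]."""
    return math.erf(k / math.sqrt(2.0))

def p_outside_1d(k):
    """Tail probability outside the k-sigma interval (small for outliers)."""
    return 1.0 - p_inside_1d(k)

def ksigma_prob_gmm_1d(x, weights, mus, sigmas):
    """Sketch of the mixture-weighted k-sigma tail probability,
    with k_j = |x - mu_j| / sigma_j in the 1D case."""
    return sum(w * p_outside_1d(abs(x - m) / s)
               for w, m, s in zip(weights, mus, sigmas))
```

A feature vector far from all centers yields a value near 0, which is why small KSigmaProb values (e.g., below 0.0001) indicate novelty.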
Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.
The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the
corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead
of evaluate_class_gmm, because classify_class_gmm directly returns the class and corresponding
probability.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Probability density of the feature vector.
. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double *
Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator evaluate_class_gmm returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
evaluate_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
classify_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_gmm. The call to get_prep_info_class_gmm al-
ready requires the creation of a GMM, and hence the setting of NumComponents in create_class_gmm
to an initial value. However, if get_prep_info_class_gmm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, a GMM with the maximum number for NumComponents is created (NumDim for ’principal_components’ and min(NumClasses − 1, NumDim) for ’canonical_variates’).
Then, the training samples are added to the GMM and are saved in a file using write_samples_class_gmm.
Subsequently, get_prep_info_class_gmm is used to determine the information content of the compo-
nents, and with this NumComponents. After this, a new GMM with the desired number of components is
created, and the training samples are read with read_samples_class_gmm. Finally, the GMM is trained
with train_class_gmm.
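The selection rule described above (take the smallest number of components whose cumulative information content reaches the desired fraction) can be sketched as follows; choose_num_components is a hypothetical helper, not a HALCON operator:

```python
def choose_num_components(cum_information_cont, fraction=0.9):
    """Return the smallest number of components whose cumulative
    information content (as in CumInformationCont) reaches the
    requested fraction, e.g., 0.9 for 90% of the data."""
    for n, cum in enumerate(cum_information_cont, start=1):
        if cum >= fraction:
            return n
    return len(cum_information_cont)

# Example values such as might be returned in CumInformationCont
cum = [0.55, 0.80, 0.92, 0.97, 1.0]
n = choose_num_components(cum, 0.9)
```

The number n obtained this way would then be passed as NumComponents to a new call of create_class_gmm.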
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator get_prep_info_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
get_prep_info_class_gmm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
clear_class_gmm, create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
get_sample_class_gmm reads out a training sample from the Gaussian Mixture Model (GMM) given by
GMMHandle that was stored with add_sample_class_gmm or add_samples_image_class_gmm.
The index of the sample is specified with NumSample. The index is counted from 0, i.e., NumSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_gmm. The training sample is returned in Features and ClassID. Features
is a feature vector of length NumDim, while ClassID is its class (see add_sample_class_gmm and
create_class_gmm).
get_sample_class_gmm can, for example, be used to reclassify the training data with
classify_class_gmm in order to determine which training samples, if any, are classified incorrectly.
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Htuple . Hlong
GMM handle.
. NumSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Index of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong *
Class of the training sample.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator get_sample_class_gmm returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
get_sample_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm,
get_sample_num_class_gmm
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm
Module
Foundation
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
get_sample_num_class_gmm returns in NumSamples the number of training samples that are stored in the
Gaussian Mixture Model (GMM) given by GMMHandle. get_sample_num_class_gmm should be called
before the individual training samples are read out with get_sample_class_gmm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_gmm).
Parameter
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stored training samples.
Result
If the parameters are valid, the operator get_sample_num_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
get_sample_num_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm
Possible Successors
get_sample_class_gmm
See also
create_class_gmm
Module
Foundation
read_class_gmm and subsequently used for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm.
Parameter
Alternatives
add_sample_class_gmm
See also
write_samples_class_gmm, write_samples_class_mlp, clear_samples_class_gmm
Module
Foundation
Result
If the parameters are valid, the operator train_class_gmm returns the value H_MSG_TRUE. If necessary an
exception handling is raised.
Parallelization Information
train_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
evaluate_class_gmm, classify_class_gmm, write_class_gmm
Alternatives
read_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
write_class_gmm writes the Gaussian Mixture Model (GMM) GMMHandle to the file given by FileName.
write_class_gmm is typically called after the GMM has been trained with train_class_gmm. The GMM
can be read with read_class_gmm. write_class_gmm does not write any training samples that possibly
have been stored in the GMM. For this purpose, write_samples_class_gmm should be used.
Parameter
Possible Successors
clear_samples_class_gmm
See also
create_class_gmm, read_samples_class_gmm, read_samples_class_mlp,
write_samples_class_mlp
Module
Foundation
1.2 Hyperboxes
close_all_class_box ( )
T_close_all_class_box ( )
The classifier uses a set of hypercuboids for every class. With these hypercuboids it attempts to enclose the attribute vectors of the class. descript_class_box returns for every class the expansion of every appropriate cuboid from dimension 1 up to Dimensions (to ’standard_output’).
Parameter
Alternatives
enquire_reject_class_box
See also
test_sampset_box, learn_class_box, learn_sampset_box
Module
Foundation
get_class_box_param gets a parameter of the classifier. The meaning of the parameter is explained with set_class_box_param.
Default values:
’min_samples_for_split’ = 80,
’split_error’ = 0.1,
’prop_constant’ = 0.25
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classifier handle.
. Flag (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the system parameter.
Default Value : "split_error"
List of values : Flag ∈ {"split_error", "prop_constant", "used_memory", "min_samples_for_split"}
. Value (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double * / Hlong *
Value of the system parameter.
Result
get_class_box_param returns H_MSG_TRUE. An exception handling is raised if Flag has been set with
wrong values.
Parallelization Information
get_class_box_param is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box, learn_class_box, write_class_box
Possible Successors
set_class_box_param, learn_class_box, enquire_class_box, write_class_box,
close_class_box, clear_sampset
See also
create_class_box, set_class_box_param
Module
Foundation
Result
learn_sampset_box returns H_MSG_TRUE. An exception handling is raised if key SampKey does not exist
or there are problems while opening the file.
Parallelization Information
learn_sampset_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box
Possible Successors
test_sampset_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
test_sampset_box, enquire_class_box, learn_class_box, read_sampset
Module
Foundation
The training examples are accessible with the key SampKey by calling the procedures clear_sampset and
learn_sampset_box. You may edit the file using an editor. Every row contains an array of attributes with the
corresponding class. An example of the format is:
(1.0, 25.3, *, 17 | 3)
This row specifies an array of attributes that belongs to class 3. In this array the third attribute is unknown.
Attributes from the fifth onwards are assumed to be unknown, too. You may insert comments like /* .. */ in any place.
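As an illustration of this row format, the following sketch parses one such line. It is not HALCON's actual reader (for instance, it does not handle /* .. */ comments) and the function name is hypothetical:

```python
def parse_sampset_row(row, num_attributes):
    """Parse a row like '(1.0, 25.3, *, 17 | 3)' into (attributes, class_id).
    '*' marks an unknown attribute (represented as None); attributes
    beyond those listed are treated as unknown as well."""
    body = row.strip().lstrip('(').rstrip(')')
    attr_part, class_part = body.split('|')
    values = []
    for tok in attr_part.split(','):
        tok = tok.strip()
        values.append(None if tok == '*' else float(tok))
    # attributes not listed in the row are unknown, too
    values.extend([None] * (num_attributes - len(values)))
    return values, int(class_part.strip())

attrs, cls = parse_sampset_row('(1.0, 25.3, *, 17 | 3)', 6)
```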
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Filename of the data set to train.
Default Value : "sampset1"
. SampKey (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . feature_set ; Hlong *
Identification of the data set to train.
Result
read_sampset returns H_MSG_TRUE. An exception handling is raised if it is not possible to open the file or
it contains syntax errors or there is not enough memory.
Parallelization Information
read_sampset is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box
Possible Successors
test_sampset_box, enquire_class_box, write_class_box, close_class_box,
clear_sampset
See also
test_sampset_box, clear_sampset, learn_sampset_box
Module
Foundation
Parameter
. ClassifHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_box ; Hlong
Classifier handle.
. Flag (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the wanted parameter.
Default Value : "split_error"
Suggested values : Flag ∈ {"min_samples_for_split", "split_error", "prop_constant"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value of the parameter.
Default Value : 0.1
Result
set_class_box_param returns H_MSG_TRUE.
Parallelization Information
set_class_box_param is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, enquire_class_box
Possible Successors
learn_class_box, test_sampset_box, write_class_box, close_class_box,
clear_sampset
See also
enquire_class_box, get_class_box_param, learn_class_box
Module
Foundation
Module
Foundation
1.3 Neural-Nets
T_add_sample_class_mlp ( const Htuple MLPHandle,
const Htuple Features, const Htuple Target )
class of the sample, which is counted from 0, i.e., the class must be an integer between 0 and NumOutput − 1.
The class is converted to a target vector of length NumOutput internally.
Before the MLP can be trained with train_class_mlp, all training samples must be added to the MLP with
add_sample_class_mlp.
The number of currently stored training samples can be queried with get_sample_num_class_mlp. Stored
training samples can be read out again with get_sample_class_mlp.
Normally, it is useful to save the training samples in a file with write_samples_class_mlp so that the samples can be reused, new training samples can be added to the data set if necessary, and a newly created MLP can be trained anew with the extended data set.
Parameter
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Feature vector.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of best classes to determine.
Default Value : 1
Suggested values : Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; Htuple . Hlong *
Result of classifying the feature vector with the MLP.
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Confidence(s) of the class(es) of the feature vector.
Result
If the parameters are valid, the operator classify_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
classify_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
evaluate_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
clear_all_class_mlp ( )
T_clear_all_class_mlp ( )
a_j^(1) = sum_{i=1}^{ni} w_ji^(1) x_i + b_j^(1),   j = 1, ..., nh

z_j = tanh(a_j^(1)),   j = 1, ..., nh

Here, the matrix w_ji^(1) and the vector b_j^(1) are the weights of the input layer (first layer) of the MLP. In the hidden layer (second layer), the activations z_j are transformed in a first step by using linear combinations of the variables in an analogous manner as above:

a_k^(2) = sum_{j=1}^{nh} w_kj^(2) z_j + b_k^(2),   k = 1, ..., no

Here, the matrix w_kj^(2) and the vector b_k^(2) are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting OutputFunction. For
OutputFunction = ’linear’, the data are simply copied:
y_k = a_k^(2),   k = 1, ..., no
This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For OutputFunction = ’logistic’, the activations are computed as follows:
y_k = 1 / (1 + exp(−a_k^(2))),   k = 1, ..., no
This type of activation function should be used for classification problems with multiple (NumOutput) indepen-
dent logical attributes as output. This kind of classification problem is relatively rare in practice.
For OutputFunction = ’softmax’, the activations are computed as follows:
y_k = exp(a_k^(2)) / sum_{l=1}^{no} exp(a_l^(2)),   k = 1, ..., no
This type of activation function should be used for common classification problems with multiple (NumOutput)
mutually exclusive classes as output. In particular, OutputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with classify_image_class_mlp.
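The two layers and the softmax output described above can be summarized in a short forward-pass sketch. This is plain, illustrative Python under the stated formulas, not HALCON's internal implementation:

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a two-layer MLP: tanh hidden units, softmax output."""
    # hidden layer: a_j = sum_i w1[j][i] * x[i] + b1[j]; z_j = tanh(a_j)
    z = [math.tanh(sum(wji * xi for wji, xi in zip(row, x)) + bj)
         for row, bj in zip(w1, b1)]
    # output layer activations: a_k = sum_j w2[k][j] * z[j] + b2[k]
    a2 = [sum(wkj * zj for wkj, zj in zip(row, z)) + bk
          for row, bk in zip(w2, b2)]
    # softmax: y_k = exp(a_k) / sum_l exp(a_l)
    mx = max(a2)                      # subtract the maximum for numerical stability
    e = [math.exp(a - mx) for a in a2]
    s = sum(e)
    return [v / s for v in e]

y = mlp_forward([0.5, -1.0],
                w1=[[0.2, -0.3], [0.7, 0.1]], b1=[0.0, 0.1],
                w2=[[1.0, -1.0], [-1.0, 1.0]], b2=[0.0, 0.0])
```

The softmax outputs sum to 1 and can therefore be read as class probabilities, which is why this output function is required for classification with classify_image_class_mlp.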
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For Preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. NumComponents is ignored in this case. This transformation
can be used if the mean and standard deviation of the feature vectors differs substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the data
are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’ (unit: scalar)
and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer iterations than
without normalization.
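The ’normalization’ preprocessing can be sketched as follows; normalize_features is an illustrative helper, not part of the HALCON API:

```python
import math

def normalize_features(vectors):
    """Transform feature vectors so that each component has mean 0 and
    standard deviation 1 over the training set."""
    n = len(vectors)
    dim = len(vectors[0])
    means = [sum(v[d] for v in vectors) / n for d in range(dim)]
    stds = [math.sqrt(sum((v[d] - means[d]) ** 2 for v in vectors) / n)
            for d in range(dim)]
    return [[(v[d] - means[d]) / stds[d] for d in range(dim)] for v in vectors]

# Components measured in very different units (e.g., a shape feature
# and an area in pixels squared) end up on a comparable scale:
data = [[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]]
norm = normalize_features(data)
```

In classification, the same means and standard deviations computed from the training set would be applied to each new feature vector.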
For Preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that
decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and
the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the
transformed features with the most variation are contained in the first components of the transformed feature
vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differs substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated.
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by Preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier with
OutputFunction = ’softmax’. The computation of the canonical variates is also called linear discriminant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the
training vectors on average over all classes is computed. At the same time, the transformation maximally sepa-
rates the mean values of the individual classes. As for Preprocessing = ’principal_components’, the trans-
formed components are sorted by information content, and hence transformed components with little informa-
tion content can be omitted. For canonical variates, up to min(NumOutput − 1, NumInput) components can
be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many
cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
create_class_mlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with train_class_mlp are reproducible, the seed value of the random number generator
is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve
a smaller error by selecting a different value for RandSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained us-
ing train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the
MLP can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is
used as a classifier (i.e., for OutputFunction = ’softmax’), to classify data using classify_class_mlp.
A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
* X = [...]
* Compute the output of the MLP on the test data
for J := 0 to N-1 by 1
evaluate_class_mlp (MLPHandle, X[J], Y)
endfor
clear_class_mlp (MLPHandle)
Result
If the parameters are valid, the operator create_class_mlp returns the value H_MSG_TRUE. If necessary
an exception handling is raised.
Parallelization Information
create_class_mlp is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_mlp
Alternatives
create_class_svm, create_class_gmm, create_class_box
See also
clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
the values in Result can be interpreted as probabilities. Hence, for OutputFunction = ’logistic’ the ele-
ments of Result represent the probabilities of the presence of the respective independent attributes. Typically,
a threshold of 0.5 is used to decide whether the attribute is present or not. Depending on the application, other
thresholds may be used as well. For OutputFunction = ’softmax’ usually the position of the maximum value
of Result is interpreted as the class of the feature vector, and the corresponding value as the probability of the
class. In this case, classify_class_mlp should be used instead of evaluate_class_mlp because
classify_class_mlp directly returns the class and corresponding probability.
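Both decision rules described above (argmax for ’softmax’, a 0.5 threshold for ’logistic’) can be sketched as follows; the helper names are hypothetical, not HALCON operators:

```python
def classify_from_probs(result):
    """For 'softmax' outputs: return (class, confidence) as the position and
    value of the maximum output, mirroring the best class returned by
    classify_class_mlp."""
    best = max(range(len(result)), key=result.__getitem__)
    return best, result[best]

def threshold_attributes(result, thresh=0.5):
    """For 'logistic' outputs: decide the presence of each independent
    attribute by comparing against a threshold (typically 0.5)."""
    return [v >= thresh for v in result]
```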
Parameter
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
get_prep_info_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e.,
it is computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput−1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains
the sums of the first n elements of InformationCont. To use get_prep_info_class_mlp, a suffi-
cient number of samples must be added to the multilayer perceptron (MLP) given by MLPHandle by using
add_sample_class_mlp or read_samples_class_mlp.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_mlp. The call to get_prep_info_class_mlp al-
ready requires the creation of an MLP, and hence the setting of NumComponents in create_class_mlp to
an initial value. However, if get_prep_info_class_mlp is called it is typically not known how many com-
ponents are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step ap-
proach should typically be used to select NumComponents: In a first step, an MLP with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are added to the MLP and are saved in a file using
write_samples_class_mlp. Subsequently, get_prep_info_class_mlp is used to determine the
information content of the components, and with this NumComponents. After this, a new MLP with the de-
sired number of components is created, and the training samples are read with read_samples_class_mlp.
Finally, the MLP is trained with train_class_mlp.
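In HDevelop syntax, this two-step approach might be sketched as follows. The file name, the 90% threshold, and the determination of NComp are application-specific placeholders; training samples are assumed to have been added beforehand with add_sample_class_mlp:

```hdevelop
* Step 1: create an MLP with the maximum number of components
* (for 'canonical_variates': min(NOut-1, NIn))
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'canonical_variates',
                  min([NOut-1,NIn]), 42, MLPHandle)
* ... add the training samples with add_sample_class_mlp ...
write_samples_class_mlp (MLPHandle, 'samples.mtf')
get_prep_info_class_mlp (MLPHandle, 'canonical_variates',
                         InformationCont, CumInformationCont)
* Determine NComp from the first value of CumInformationCont
* that lies above the desired threshold, e.g., 0.9
clear_class_mlp (MLPHandle)
* Step 2: create the actual MLP and train it
create_class_mlp (NIn, NHidden, NOut, 'softmax', 'canonical_variates',
                  NComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
```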
HALCON 8.0.2
36 CHAPTER 1. CLASSIFICATION
Parameter
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Htuple . Hlong
MLP handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "principal_components"
List of values : Preprocessing ∈ {"principal_components", "canonical_variates"}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Cumulative information content of the transformed feature vectors.
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator get_prep_info_class_mlp returns the value H_MSG_TRUE. If necessary, an exception is raised.
get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
clear_class_mlp, create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’canonical_variates’,
NComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Reclassify the training samples
get_sample_num_class_mlp (MLPHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_mlp (MLPHandle, I, Data, Target)
classify_class_mlp (MLPHandle, Data, 1, Class, Confidence)
Result := gen_tuple_const(NOut,0)
Result[Class] := 1
Diffs := Target-Result
if (sum(fabs(Diffs)) > 0)
* Sample has been classified incorrectly
endif
endfor
clear_class_mlp (MLPHandle)
Result
If the parameters are valid, the operator get_sample_class_mlp returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
get_sample_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp, get_sample_num_class_mlp
Possible Successors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp
Module
Foundation
Return the number of training samples stored in the training data of a multilayer perceptron.
get_sample_num_class_mlp returns in NumSamples the number of training samples that are stored in
the multilayer perceptron (MLP) given by MLPHandle. get_sample_num_class_mlp should be called
before the individual training samples are accessed with get_sample_class_mlp, e.g., for the purpose of
reclassifying the training data (see get_sample_class_mlp).
Parameter
See also
create_class_mlp, write_class_mlp
Module
Foundation
Example (Syntax: HDevelop)
* Train an MLP
create_class_mlp (NIn, NHidden, NOut, ’softmax’, ’normalization’, 1,
42, MLPHandle)
read_samples_class_mlp (MLPHandle, ’samples.mtf’)
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, ’classifier.mlp’)
clear_class_mlp (MLPHandle)
Result
If the parameters are valid, the operator train_class_mlp returns the value H_MSG_TRUE. If necessary, an exception is raised.
train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = ’canon-
ical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.
Parallelization Information
train_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
evaluate_class_mlp, classify_class_mlp, write_class_mlp
Alternatives
read_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Possible Predecessors
train_class_mlp
Possible Successors
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_samples_class_mlp
Module
Foundation
1.4 Support-Vector-Machines
Class is the target of the sample, which must be in the range of 0 to NumClasses-1 (see create_class_svm).
Before the SVM can be trained with train_class_svm, training samples must be added to the SVM with
add_sample_class_svm. The usage of support vectors of an already trained SVM as training samples is
described in train_class_svm.
The number of currently stored training samples can be queried with get_sample_num_class_svm. Stored
training samples can be read out again with get_sample_class_svm.
Normally, it is useful to save the training samples in a file with write_samples_class_svm so that the samples can be reused and, if necessary, new training samples can be added to the data set, with which a newly created SVM can then be trained.
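A typical sample-collection sequence might be sketched as follows in HDevelop. FeatureVector, Class, NumTrainingData, and the file name are application-specific placeholders; an SVM created with create_class_svm is assumed:

```hdevelop
* Add the training samples (FeatureVector and Class are
* application-specific placeholders)
for I := 0 to NumTrainingData-1 by 1
    * ... compute FeatureVector and Class for sample I ...
    add_sample_class_svm (SVMHandle, FeatureVector, Class)
endfor
* Save the samples for later reuse
write_samples_class_svm (SVMHandle, 'samples.mtf')
* Train the SVM
train_class_svm (SVMHandle, 0.001, 'default')
```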
Parameter
clear_all_class_svm ( )
T_clear_all_class_svm ( )
clear_class_svm clears the support vector machine (SVM) given by SVMHandle and frees all memory
required for the SVM. After calling clear_class_svm, the SVM can no longer be used. The handle
SVMHandle becomes invalid.
Parameter
f(z) = sign( Σ_{i=1}^{nsv} αi yi <xi , z> + b )
Here, xi are the support vectors, yi encodes their class membership (±1) and αi the weight coefficients. The dis-
tance of the hyperplane to the origin is b. The α and b are determined during training with train_class_svm.
Note that only a subset of the original training set (nsv : number of support vectors) is necessary for the definition
of the decision boundary and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number nsv of support vectors.
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α are upper bounded by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter Nu is an
asymptotic upper bound on the number of training errors and an asymptotic lower bound on the number of support
vectors. As a rule of thumb, the parameter Nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that a too big value for Nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see train_class_svm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with Nu chosen smaller must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter KernelType. For KernelType = ’linear’ the
dot product, as specified in the above formula, is calculated. This kernel should solely be used for linearly or nearly
linearly separable classification tasks. The parameter KernelParam is ignored here.
The radial basis function (RBF) KernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:
K(x, z) = exp(−γ · ||x − z||²)
Here, the parameter KernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A big value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training/classification times grow significantly. A too small
value for γ (big influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-Nu pair and consecutively increase the values as long as
the recognition rate increases.
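Such a search might be sketched as follows in HDevelop. GammaCandidates, Nu, the sample file, and the computation of the recognition rate Rate on a separate test set are application-specific placeholders:

```hdevelop
* Sketch of a simple parameter search: start with small gamma/Nu
* values and keep the pair with the best recognition rate
BestRate := 0
for J := 0 to |GammaCandidates|-1 by 1
    create_class_svm (NumFeatures, 'rbf', GammaCandidates[J], Nu,
                      NumClasses, 'one-versus-all', 'normalization',
                      NumFeatures, SVMHandle)
    read_samples_class_svm (SVMHandle, 'samples.mtf')
    train_class_svm (SVMHandle, 0.001, 'default')
    * ... determine Rate on a separate test data set
    * with classify_class_svm ...
    if (Rate > BestRate)
        BestRate := Rate
        BestGamma := GammaCandidates[J]
    endif
    clear_class_svm (SVMHandle)
endfor
```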
With KernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be selected. They are defined in the following way:

K(x, z) = <x, z>^d (’polynomial_homogeneous’)
K(x, z) = (<x, z> + 1)^d (’polynomial_inhomogeneous’)

The degree d of the polynomial kernel must be set with KernelParam. Please note that a too high degree polynomial (d > 10) might result in numerical problems.
As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited
for certain applications and can be tested for comparison. Please note that the novelty-detection Mode and the
reduce_class_svm operator are provided only for the RBF kernel.
Mode specifies the general classification task, which is either how to break down a multi-class decision problem into binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. Mode = ’one-versus-all’ creates a classifier where each class is compared to the rest of the training data. During testing the class with the
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. Mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal Mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (3-10) ’one-versus-one’ is faster for training and testing, because the sub-classifier all
consist of fewer training data and result in overall fewer support vectors. In case of many classes ’one-versus-all’
is preferable, because ’one-versus-one’ generates a prohibitively large amount of sub-classifiers, as their number
grows quadratically with the number of classes.
A special case of classification is Mode = ’novelty-detection’, where the test data is classified with regard to
membership to the training data. The separating hyperplane lies around the training data and thereby implicitly
divides the training data from the rejection class. The advantage is that the rejection class is not defined explicitly,
which is difficult to do in certain applications like texture classification. The resulting support vectors are all lying
at the border. With the parameter Nu, the ratio of outliers in the training data set is specified.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For Preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differs substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or if
region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The normalization
transformation should be performed in general, because it increases the numerical stability during training/testing.
For Preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
the transformed features with the most variation are contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumFeatures components can be selected. The operator get_prep_info_class_svm can
be used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differs substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by Preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation maxi-
mally separates the mean values of the individual classes. As for Preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little infor-
mation content can be omitted. For canonical variates, up to min(NumClasses−1, NumFeatures) components
can be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_svm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by NumComponents, whereas NumFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with create_class_svm, typically training samples are added to the SVM
by repeatedly calling add_sample_class_svm or read_samples_class_svm. After this, the SVM is
typically trained using train_class_svm. Hereafter, the SVM can be saved using write_class_svm.
Alternatively, the SVM can be used immediately after training to classify data using classify_class_svm.
A comparison of the SVM and the multi-layer perceptron (MLP) (see create_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameter
Result
If the parameters are valid, the operator create_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
create_class_svm is processed completely exclusively without parallelization.
Possible Successors
add_sample_class_svm
Alternatives
create_class_mlp, create_class_gmm, create_class_box
See also
clear_class_svm, train_class_svm, classify_class_svm
References
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Module
Foundation
Compute the information content of the preprocessed feature vectors of a support vector machine.
get_prep_info_class_svm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_class_svm, a sufficient number of samples must be added to the support vector machine
(SVM) given by SVMHandle by using add_sample_class_svm or read_samples_class_svm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_svm. The call to get_prep_info_class_svm al-
ready requires the creation of an SVM, and hence the setting of NumComponents in create_class_svm
to an initial value. However, when get_prep_info_class_svm is called, it is typically not known how
many components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-
step approach should typically be used to select NumComponents: In a first step, an SVM with the maximum
number for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses−
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using write_samples_class_svm. Subsequently, get_prep_info_class_svm is used to deter-
mine the information content of the components, and with this NumComponents. After this, a new SVM with the
desired number of components is created, and the training samples are read with read_samples_class_svm.
Finally, the SVM is trained with train_class_svm.
Parameter
endfor
write_samples_class_svm (SVMHandle, ’samples.mtf’)
* Compute the information content of the transformed features
get_prep_info_class_svm (SVMHandle, ’principal_components’,
InformationCont, CumInformationCont)
* Determine NComp by inspecting InformationCont and CumInformationCont
* NComp = [...]
clear_class_svm (SVMHandle)
* Create the actual SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’principal_components’, NComp, SVMHandle)
* Train the SVM
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator get_prep_info_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
get_prep_info_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
get_prep_info_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
clear_class_svm, create_class_svm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Return a training sample from the training data of a support vector machine.
get_sample_class_svm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with add_sample_class_svm or read_samples_class_svm. The
index of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_svm. The training sample is returned in Features and Target. Features
is a feature vector of length NumFeatures (see create_class_svm), while Target is the index of the
class, ranging between 0 and NumClasses-1 (see add_sample_class_svm).
get_sample_class_svm can, for example, be used to reclassify the training data with
classify_class_svm in order to determine which training samples, if any, are classified incorrectly.
Parameter
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Reclassify the training samples
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_svm (SVMHandle, I, Data, Target)
classify_class_svm (SVMHandle, Data, 1, Class)
if (Class # Target)
* Sample has been classified incorrectly
endif
endfor
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator get_sample_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
get_sample_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm, get_sample_num_class_svm,
get_support_vector_class_svm
Possible Successors
classify_class_svm
See also
create_class_svm
Module
Foundation
Return the number of training samples stored in the training data of a support vector machine.
get_sample_num_class_svm returns in NumSamples the number of training samples that are stored in
the support vector machine (SVM) given by SVMHandle. get_sample_num_class_svm should be called
before the individual training samples are accessed with get_sample_class_svm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_svm).
Parameter
Result
If SVMHandle is valid, the operator get_sample_num_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
get_sample_num_class_svm is reentrant and processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation
Return the index of a support vector from a trained support vector machine.
The operator get_support_vector_class_svm maps support vectors of a trained SVM (given
in SVMHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a number
between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with
get_support_vector_num_class_svm. The index of this SV in the training data is returned in Index.
This Index can be used for a query with get_sample_class_svm to obtain the feature vectors that become
support vectors. get_sample_class_svm can, for example, be used to visualize the support vectors.
Note that when using train_class_svm with a mode different from ’default’ or reducing the SVM with
reduce_class_svm, the returned Index will always be -1, i.e., it will be invalid. The reason for this is that a
consistent mapping between SV and training data becomes impossible.
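For example, the feature vectors that became support vectors might be inspected as follows. An SVM trained with TrainMode = ’default’ is assumed, and the output names are placeholders:

```hdevelop
* Inspect the feature vectors that became support vectors
get_support_vector_num_class_svm (SVMHandle, NumSV, NumSVPerSVM)
for I := 0 to NumSV-1 by 1
    get_support_vector_class_svm (SVMHandle, I, Index)
    if (Index >= 0)
        get_sample_class_svm (SVMHandle, Index, Features, Target)
        * ... visualize Features, e.g., as a point in feature space ...
    endif
endfor
```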
Parameter
Possible Successors
classify_class_svm
See also
create_class_svm, write_class_svm
Module
Foundation
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
As described in create_class_svm, the classification time of an SVM depends on the number of kernel
evaluations between the support vectors and the feature vectors. While the length of the data vectors can be
reduced in a preprocessing step like ’principal_components’ or ’canonical_variates’ (see create_class_svm
for details), the number of resulting SV depends on the complexity of the classification problem. The number
of SVs is determined during training. To further reduce classification time, the number of SVs can be reduced
by approximating the original separating hyperplane with fewer SVs than originally required. For this purpose, a
copy of the original SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new
SVM has the same parametrization as the original SVM, but a different SV expansion. The training samples that
are included in SVMHandle are not copied. The original SVM is not modified by reduce_class_svm.
The reduction method is selected with Method. Currently, only a bottom-up approach is supported, which iteratively merges SVs. The algorithm stops if either the minimum number of SVs is reached (MinRemainingSV)
or if the accumulated maximum error exceeds the threshold MaxError. Note that the approximation reduces the
complexity of the hyperplane and thereby leads to a deteriorated classification rate. A common approach is therefore
to start from a small MaxError, e.g., 0.001, and to increase its value step by step. To control the reduction ratio,
at each step the number of remaining SVs is determined with get_support_vector_num_class_svm and
the classification rate is checked on a separate test data set with classify_class_svm.
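This step-by-step increase of MaxError might be sketched as follows. The list of error values and the evaluation of the recognition rate on the test data set are application-specific placeholders:

```hdevelop
* Increase MaxError step by step and monitor the trade-off between
* the number of remaining SVs and the recognition rate
MaxErrors := [0.001, 0.002, 0.005, 0.01]
for J := 0 to |MaxErrors|-1 by 1
    reduce_class_svm (SVMHandle, 'bottom_up', 2, MaxErrors[J],
                      SVMHandleReduced)
    get_support_vector_num_class_svm (SVMHandleReduced, NumSV,
                                      NumSVPerSVM)
    * ... check the recognition rate on a separate test data set
    * with classify_class_svm ...
    clear_class_svm (SVMHandleReduced)
endfor
```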
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
Original SVM handle.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of postprocessing to reduce number of SV.
Default Value : "bottom_up"
List of values : Method ∈ {"bottom_up"}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum number of remaining SVs.
Default Value : 2
Suggested values : MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction : MinRemainingSV ≥ 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximum allowed error of reduction.
Default Value : 0.001
Suggested values : MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction : MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong *
SVMHandle of reduced SVM.
Example (Syntax: HDevelop)
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
* Create a reduced SVM
reduce_class_svm (SVMHandle, ’bottom_up’, 2, 0.01, SVMHandleReduced)
write_class_svm (SVMHandleReduced, ’classifier.svm’)
clear_class_svm (SVMHandleReduced)
clear_class_svm (SVMHandle)
Result
If the parameters are valid, the operator reduce_class_svm returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
reduce_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
classify_class_svm, write_class_svm, get_support_vector_num_class_svm
See also
train_class_svm
Module
Foundation
Parameter
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Stop parameter for training.
Default Value : 0.001
Suggested values : Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; const char * / Hlong
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default Value : "default"
List of values : TrainMode ∈ {"default", "add_sv_to_train_set"}
Example (Syntax: HDevelop)
* Train an SVM
create_class_svm (NumFeatures, ’rbf’, 0.01, 0.01, NumClasses,
’one-versus-all’, ’normalization’, NumFeatures,
SVMHandle)
read_samples_class_svm (SVMHandle, ’samples.mtf’)
train_class_svm (SVMHandle, 0.001, ’default’)
write_class_svm (SVMHandle, ’classifier.svm’)
clear_class_svm (SVMHandle)
Result
If the parameters are valid the operator train_class_svm returns the value H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
train_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
classify_class_svm, write_class_svm
Alternatives
read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cambridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
Parameter
File
2.1 Images
HALCON also searches images in the subdirectory "images" (images for the program examples). The environment variable HALCONROOT is used for the HALCON directory.
Attention
If CMYK or YCCK JPEG files are read, HALCON assumes that these files follow the Adobe Photoshop convention
that the CMYK channels are stored inverted, i.e., 0 represents 100% ink coverage, rather than 0% ink as one would
expect. The images are converted to RGB images using this convention. If the JPEG file does not follow this
convention, but stores the CMYK channels in the usual fashion, invert_image must be called after reading
the image.
If PNG images that contain an alpha channel are read, the alpha channel is returned as the second or fourth channel
of the output image, unless the alpha channel contains exactly two different gray values, in which case a one or
three channel image with a reduced domain is returned, in which the points in the domain correspond to the points
with the higher gray value in the alpha channel.
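The two-value rule above can be sketched in plain C. This is a hypothetical illustration (the helper name and buffer layout are ours, not a HALCON API): if the alpha channel holds exactly two distinct values, a binary domain selecting the pixels with the higher value is derived; otherwise the alpha data stays a regular channel.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical illustration of the rule above, not a HALCON function:
 * returns 1 and fills 'domain' (1 = inside) if the alpha channel holds
 * exactly two distinct values; returns 0 if alpha must stay a channel. */
static int alpha_to_domain(const unsigned char *alpha, size_t n,
                           unsigned char *domain)
{
    int seen[256] = { 0 };
    int distinct = 0;
    unsigned char hi = 0;
    size_t i;
    for (i = 0; i < n; ++i) {
        if (!seen[alpha[i]]) { seen[alpha[i]] = 1; ++distinct; }
        if (alpha[i] > hi) hi = alpha[i];
    }
    if (distinct != 2)
        return 0;                      /* keep alpha as an image channel */
    for (i = 0; i < n; ++i)
        domain[i] = (unsigned char)(alpha[i] == hi);  /* reduced domain */
    return 1;
}
```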
Parameter
. Image (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real
Read image.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read(-array) ; (Htuple .) const char *
Name of the image to be read.
Default Value : "fabrik"
Suggested values : FileName ∈ {"monkey", "fabrik", "mreut"}
Example
/* Reading an image: */
read_image(&Image,"mreut") ;
Result
If the parameters are correct the operator read_image returns the value H_MSG_TRUE. Otherwise an exception
is raised.
Parallelization Information
read_image is reentrant and processed without parallelization.
Possible Successors
disp_image, threshold, regiongrowing, count_channels, decompose3,
class_ndim_norm, gauss_image, fill_interlace, zoom_image_size,
zoom_image_factor, crop_part, write_image, rgb1_to_gray
Alternatives
read_sequence
See also
set_system, write_image
Module
Foundation
Read images.
The operator read_sequence reads unformatted image data from a file and returns a “suitable” image. The
image data must be filled consecutively pixel by pixel and line by line.
Any file headers (with the length HeaderSize bytes) are skipped. The parameters SourceWidth and
SourceHeight indicate the size of the filled image. DestWidth and DestHeight indicate the size of the
image. In the simplest case these parameters are the same. However, areas can also be read. The upper left corner
of the required image area can be determined via StartRow and StartColumn.
The pixel types ’bit’, ’byte’, ’short’ (16 bits, unsigned), ’signed_short’ (16 bits, signed), ’long’ (32 bits, signed),
’swapped_long’ (32 bits, with swapped segments), and ’real’ (32 bit floating point numbers) are supported. Furthermore, the operator read_sequence enables the extraction of the components of an RGB image, if a triple of three bytes (in the sequence “red”, “green”, “blue”) is stored in the image file. For the red component the pixel type ’r_byte’ must be chosen, and correspondingly ’g_byte’ or ’b_byte’ for the green and blue components, respectively.
’MSBFirst’ (most significant bit first) or the inversion thereof (’LSBFirst’) can be chosen for the bit order
(BitOrder). The byte orders (ByteOrder) ’MSBFirst’ (most significant byte first) or ’LSBFirst’, respectively,
are processed analogously. Finally an alignment (Pad) can be set at the end of the line: ’byte’, ’short’ or ’long’. If
a whole image sequence is stored in the file a single image (beginning at Index 1) can be chosen via the parameter
Index.
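The ByteOrder handling described above can be sketched in plain C (the helper names are ours, not HALCON calls): if the byte order in the file (’MSBFirst’ vs. ’LSBFirst’) differs from the host byte order, every 16-bit ’short’ pixel must have its two bytes swapped after reading.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustration only (not a HALCON call): swap the two bytes of a 16-bit
 * pixel value. */
static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v << 8) | (v >> 8));
}

/* Swap all pixels only when the file byte order and the host byte order
 * disagree; otherwise the data is already correct. */
static void fix_byte_order16(uint16_t *pixels, size_t n,
                             int file_msb_first, int host_msb_first)
{
    size_t i;
    if (file_msb_first == host_msb_first)
        return;                        /* orders agree, nothing to do */
    for (i = 0; i < n; ++i)
        pixels[i] = swap16(pixels[i]);
}
```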
Image files are searched in the current directory (determined by the environment variable) and in the image directory of HALCON. The image directory of HALCON is preset to ’.’ and ’/usr/local/halcon/images’ in a UNIX environment and can be set via the operator set_system. More than one image directory can be indicated. This is done by separating the individual directories by a colon.
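Splitting such a colon-separated directory list can be sketched as follows; the helper function and buffer sizes are our own illustration, not part of HALCON.

```c
#include <assert.h>
#include <string.h>

#define MAX_DIR_LEN 64

/* Sketch (not a HALCON API): split a colon-separated directory list such
 * as the one accepted for 'image_dir'/HALCONIMAGES into single entries. */
static int split_image_dirs(const char *list, char dirs[][MAX_DIR_LEN],
                            int max_dirs)
{
    char buf[256];
    char *tok;
    int n = 0;
    strncpy(buf, list, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (tok = strtok(buf, ":"); tok != NULL && n < max_dirs;
         tok = strtok(NULL, ":")) {
        memset(dirs[n], 0, MAX_DIR_LEN);       /* ensure termination */
        strncpy(dirs[n], tok, MAX_DIR_LEN - 1);
        ++n;
    }
    return n;                                  /* number of directories */
}
```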
Furthermore the search path can be set via the environment variable HALCONIMAGES (same structure as ’image_dir’). Example:
HALCON also searches images in the subdirectory "images" (images for the program examples). The environment variable HALCONROOT is used for the HALCON directory.
Attention
If files of pixel type ’real’ are read and the byte order is chosen incorrectly (i.e., differently from the byte order in
which the data is stored in the file), program errors and even crashes due to floating point exceptions may result.
Parameter
See also
read_image
Module
Foundation
’tiff’ TIFF format, 3-channel images (RGB): 3 samples per pixel; other images (gray value images): 1 sample per
pixel, 8 bits per sample, uncompressed, 72 dpi; file extension: *.tif
’bmp’ Windows BMP format, 3-channel images (RGB): 3 bytes per pixel; other images (gray value images): 1
byte per pixel; file extension: *.bmp
’jpeg’ JPEG format, with loss of information; together with the format string the quality value determining the
compression rate can be provided, e.g., ’jpeg 30’. Attention: images that are to be processed further should
not be compressed with the JPEG format because of the loss of information.
’jp2’ JPEG-2000 format (lossless and lossy compression); together with the format string the quality value
determining the compression rate can be provided (e.g., ’jp2 40’). This value corresponds to the ratio of the
size of the compressed image and the size of the uncompressed image (in percent). Since lossless JPEG-2000
compression already reduces the file size significantly, only smaller values (typically smaller than 50)
influence the file size. If no value is provided for the compression (and only then), the image is compressed
losslessly. The image can contain an arbitrary number of channels. Possible types are byte, cyclic, direction,
int1, uint2, int2, and int4. In the case of int4 it is only possible to store images with at most 24 bits of
precision (otherwise an exception is raised). If an image with a reduced domain is written, the
region is stored as a 1 bit alpha channel.
’png’ PNG format (lossless compression); together with the format string, a compression level between 0 and 9 can
be specified, where 0 corresponds to no compression and 9 to the best possible compression. Alternatively,
the compression can be selected with the following strings: ’best’, ’fastest’, and ’none’. Hence, examples for
correct parameters are ’png’, ’png 7’, and ’png none’. Images of type byte and uint2 can be stored in PNG
files. If an image with a reduced domain is written, the region is stored as the alpha channel, where the points
within the domain are stored as the maximum gray value of the image type and the points outside the domain
are stored as the gray value 0. If an image with a full domain is written, no alpha channel is stored.
’ima’ The data is written in binary form, line by line (without header or carriage return). The size of the image and the
pixel type are stored in the description file ’FileName.exp’. All HALCON pixel types except complex
and vector_field can be written. Only the first channel of the image is stored in the file. The file extension
is: ’.ima’
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Output image(s).
. Format (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Graphic format.
Default Value : "tiff"
List of values : Format ∈ {"tiff", "bmp", "jpeg", "ima", "jpeg 100", "jpeg 80", "jpeg 60", "jpeg 40", "jpeg 20", "jp2", "jp2 50", "jp2 40", "jp2 30", "jp2 20", "png", "png best", "png fastest", "png none"}
2.2 Misc
delete_file ( const char *FileName )
T_delete_file ( const Htuple FileName )
Delete a file.
delete_file deletes the file given by FileName.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename ; const char *
Name of the file to be deleted.
Result
delete_file returns the value H_MSG_TRUE if the file exists and could be deleted. Otherwise, an exception
is raised.
Parallelization Information
delete_file is reentrant and processed without parallelization.
Module
Foundation
Parallelization Information
file_exists is reentrant and processed without parallelization.
Possible Successors
open_file
Alternatives
open_file
Module
Foundation
read_world_file reads a geocoding from an ARC/INFO world file with the file name FileName and returns it as a homogeneous 2D transformation matrix in WorldTransformation. To find the file FileName, all directories contained in the HALCON system variable ’image_dir’ (usually this is the content of the environment variable HALCONIMAGES) are searched (see read_image). This transformation matrix can be used to transform XLD contours to the world coordinate system before writing them with write_contour_xld_arc_info. If the matrix WorldTransformation is inverted by calling hom_mat2d_invert, the resulting matrix can be used to transform contours that have been read with read_contour_xld_arc_info to the image coordinate system.
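What such a homogeneous 2D transformation and its inverse do can be sketched in plain C. The matrix layout and coordinate convention below (row-major [a b tx; c d ty], applied to a point (x, y)) are chosen for illustration only and do not describe HALCON's internal hom_mat2d storage; hom_mat2d_invert computes the analogous inverse in HALCON.

```c
#include <assert.h>
#include <math.h>

/* Apply an affine 2D transform m = [a b tx; c d ty] to a point. */
static void affine_apply(const double m[6], double x, double y,
                         double *xo, double *yo)
{
    *xo = m[0] * x + m[1] * y + m[2];
    *yo = m[3] * x + m[4] * y + m[5];
}

/* Invert the transform; returns 0 if the matrix is singular. */
static int affine_invert(const double m[6], double inv[6])
{
    double det = m[0] * m[4] - m[1] * m[3];
    if (fabs(det) < 1e-12)
        return 0;                                  /* singular matrix */
    inv[0] =  m[4] / det;
    inv[1] = -m[1] / det;
    inv[3] = -m[3] / det;
    inv[4] =  m[0] / det;
    inv[2] = -(inv[0] * m[2] + inv[1] * m[5]);     /* inverse translation */
    inv[5] = -(inv[3] * m[2] + inv[4] * m[5]);
    return 1;
}
```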
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Name of the ARC/INFO world file.
. WorldTransformation (output_control) . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Transformation matrix from image to world coordinates.
Result
If the parameters are correct and the world file could be read, the operator read_world_file returns the value
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_world_file is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_contour_xld, affine_trans_polygon_xld
See also
write_contour_xld_arc_info, read_contour_xld_arc_info,
write_polygon_xld_arc_info, read_polygon_xld_arc_info
Module
Foundation
2.3 Region
Example
Result
If the parameter values are correct the operator read_region returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
read_region is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
reduce_domain, disp_region
See also
write_region, read_image
Module
Foundation
regiongrowing(Img,&Segmente,3,3,5,10) ;
write_region(Segmente,"result1") ;
Result
If the parameter values are correct the operator write_region returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
write_region is reentrant and processed without parallelization.
Possible Predecessors
open_window, read_image, read_region, threshold, regiongrowing
See also
read_region
Module
Foundation
2.4 Text
close_all_files ( )
T_close_all_files ( )
open_file("/tmp/data.txt","input",&FileHandle) ;
/* ... */
close_file(FileHandle) ;
Result
If the file handle is correct close_file returns the value H_MSG_TRUE. Otherwise an exception is
raised.
Parallelization Information
close_file is processed completely exclusively without parallelization.
Possible Predecessors
open_file
See also
open_file
Module
Foundation
fwrite_string(FileHandle,"Good Morning") ;
fnew_line(FileHandle) ;
Result
If an output file is open and it can be written to, the operator fnew_line returns the value H_MSG_TRUE.
Otherwise an exception is raised.
Parallelization Information
fnew_line is reentrant and processed without parallelization.
Possible Predecessors
fwrite_string
See also
fwrite_string
Module
Foundation
/* copy a text file character by character */
do {
  fread_char(FileHandleIn,&Char) ;
  if (!strcmp(Char,"nl"))
    fnew_line(FileHandleOut) ;
  else if (strcmp(Char,"eof"))
    fwrite_string(FileHandleOut,Char) ;
} while(strcmp(Char,"eof")) ;
Result
If an input file is open the operator fread_char returns H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
fread_char is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_string, read_string, fread_line
See also
open_file, close_file, fread_string, fread_line
Module
Foundation
do {
fread_line(FileHandle,&Line,&IsEOF) ;
} while(IsEOF==0) ;
Result
If the file is open and a suitable line is read, fread_line returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
fread_line is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, fread_string
See also
open_file, close_file, fread_char, fread_string
Module
Foundation
Result
If a file is open and a suitable string is read, fread_string returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
fread_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
fread_char, read_string, fread_line
See also
open_file, close_file, fread_char, fread_line
Module
Foundation
Parameter
. FileHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . file ; (Htuple .) Hlong
File handle.
. String (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / Hlong / double
Values to be put out on the text file.
Default Value : "hallo"
Example
/* Tuple version */
int i;
double d;
Htuple Tuple ;
create_tuple(&Tuple,4) ;
i = 5 ;
d = 10.0 ;
set_s(Tuple,"text with numbers: ",0) ;
set_i(Tuple,i,1) ;
set_s(Tuple," and ",2) ;
set_d(Tuple,d,3) ;
T_fwrite_string(FileHandle,Tuple) ;
Result
If the writing procedure was carried out successfully, the operator fwrite_string returns the value
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
fwrite_string is reentrant and processed without parallelization.
Possible Predecessors
open_file
Possible Successors
close_file
Alternatives
write_string
See also
open_file, close_file, set_system
Module
Foundation
(’output’ or ’append’) are created. For terminal input and output the file names ’standard’ (’input’ and ’output’)
and ’error’ (only ’output’) are reserved.
Parameter
Result
If the parameters are correct the operator open_file returns the value H_MSG_TRUE. Otherwise an exception
is raised.
Parallelization Information
open_file is processed completely exclusively without parallelization.
Possible Successors
fwrite_string, fread_char, fread_string, fread_line, close_file
See also
close_file, fwrite_string, fread_char, fread_string, fread_line
Module
Foundation
2.5 Tuple
Parallelization Information
read_tuple is reentrant and processed without parallelization.
Alternatives
fwrite_string
See also
write_tuple, gnuplot_plot_ctrl, write_image, write_region, open_file
Module
Foundation
2.6 XLD
read_contour_xld_arc_info ( Hobject *Contours, const char *FileName )
T_read_contour_xld_arc_info ( Hobject *Contours,
const Htuple FileName )
Parameter
Result
If the parameters are correct and the file could be read, the operator read_contour_xld_arc_info returns
the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_contour_xld
See also
read_world_file, write_contour_xld_arc_info, read_polygon_xld_arc_info
Module
Foundation
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
contours Contours.
If the file has been created with the operator write_contour_xld_dxf, all attributes and global attributes that
were originally defined for the XLD contours are read. This means that read_contour_xld_dxf supports all
the extended data written by the operator write_contour_xld_dxf. The reading of these attributes can be
switched off by setting the generic parameter ’read_attributes’ to ’false’. Generic parameters are set by specifying
the parameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD contours. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). The parameter ’min_num_points’ defines the minimum number of sampling points that are used for the approximation. Note that the parameter ’min_num_points’
always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if ’min_num_points’ is
set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-circle is approximated
by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum deviation of the XLD
contour from the ideal circle or ellipse, respectively (unit: pixel). For the determination of the accuracy of the
approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation is
used.
Internally, the following default values are used for the generic parameters:
’read_attributes’ = ’true’
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Parameter
query_contour_global_attribs_xld, get_contour_attrib_xld,
get_contour_global_attrib_xld
Module
Foundation
Result
If the parameters are correct and the file could be read, the operator read_polygon_xld_arc_info returns
the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
read_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Successors
hom_mat2d_invert, affine_trans_polygon_xld
See also
read_world_file, write_polygon_xld_arc_info, read_contour_xld_arc_info
Module
Foundation
read_polygon_xld_dxf reads the contents of the DXF file FileName (DXF version AC1009, AutoCAD
Release 12) and converts them to the XLD polygons Polygons. If no absolute path is given in FileName the
DXF file is searched in the current directory of the HALCON process.
The output parameter DxfStatus contains information about the number of polygons that were read and, if
necessary, warnings that parts of the DXF file could not be interpreted.
The operator read_polygon_xld_dxf supports the following DXF entities:
• POLYLINE
– 2D curves made up of line segments
– Closed 2D curves made up of line segments
• LWPOLYLINE
• LINE
• POINT
• CIRCLE
• ARC
• ELLIPSE
• SPLINE
• BLOCK
• INSERT
The x and y coordinates of the DXF entities are stored in the column and row coordinates, respectively, of the XLD
polygons Polygons.
DXF entities of the type CIRCLE, ARC, ELLIPSE, and SPLINE are approximated by XLD polygons. The
accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’ (for SPLINE only ’max_approx_error’). Generic parameters are set by specifying the parameter name(s) in GenParamNames and the corresponding value(s) in GenParamValues. The parameter
’min_num_points’ defines the minimum number of sampling points that are used for the approximation. Note that
the parameter ’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical
arcs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle,
this semi-circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the
maximum deviation of the XLD polygon from the ideal circle or ellipse, respectively (unit: pixel). For the determination of the accuracy of the approximation both criteria are evaluated. Then, the criterion that leads to the more
accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
Note that reading a DXF file with read_polygon_xld_dxf results in exactly the same geometric information
as reading the file with read_contour_xld_dxf. However, the resulting data structure is different.
Parameter
. Polygons (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_poly(-array) ; Hobject *
Read XLD polygons.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; (Htuple .) const char *
Name of the DXF file.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the DXF input.
Default Value : []
List of values : GenParamNames ∈ {"min_num_points", "max_approx_error"}
. GenParamValues (input_control) . . . . . .attribute.value(-array) ; (Htuple .) double / Hlong / const char *
Values of the generic parameters that can be adjusted for the DXF input.
Default Value : []
Suggested values : GenParamValues ∈ {0.1, 0.25, 0.5, 1, 2, 5, 10, 20}
Result
If the parameters are correct and the file could be written, the operator write_contour_xld_arc_info
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
write_contour_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_contour_xld
See also
read_world_file, read_contour_xld_arc_info, write_polygon_xld_arc_info
Module
Foundation
The attributes are written in the following format as extended data of each POLYLINE:

DXF group code   Data                  Explanation
1000             contour attributes    Meaning
1002             {                     Beginning of the value list
1070             3                     Number of attributes (here: 3)
1040             5.00434303            Value of the first attribute
1040             126.8638916           Value of the second attribute
1040             4.99164152            Value of the third attribute
1002             }                     End of the value list
The global attributes are written in the following format as extended data of each POLYLINE:
DXF group code   Data                        Explanation
1000             global contour attributes   Meaning
1002             {                           Beginning of the value list
1070             5                           Number of global attributes (here: 5)
1040             0.77951831                  Value of the first global attribute
1040             0.62637949                  Value of the second global attribute
1040             103.94314575                Value of the third global attribute
1040             0.21434096                  Value of the fourth global attribute
1040             0.21921949                  Value of the fifth global attribute
1002             }                           End of the value list
The names of the attributes are written in the following format as extended data of each POLYLINE:
DXF group code   Data                          Explanation
1000             names of contour attributes   Meaning
1002             {                             Beginning of the value list
1070             3                             Number of attribute names (here: 3)
1000             angle                         Name of the first attribute
1000             response                      Name of the second attribute
1000             edge_direction                Name of the third attribute
1002             }                             End of the value list
The names of the global attributes are written in the following format as extended data of each POLYLINE:
DXF group code   Data                                 Explanation
1000             names of global contour attributes   Meaning
1002             {                                    Beginning of the value list
1070             5                                    Number of global attribute names (here: 5)
1000             regr_norm_row                        Name of the first global attribute
1000             regr_norm_col                        Name of the second global attribute
1000             regr_dist                            Name of the third global attribute
1000             regr_mean_dist                       Name of the fourth global attribute
1000             regr_dev_dist                        Name of the fifth global attribute
1002             }                                    End of the value list
Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
XLD contours to be written.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of the DXF file.
Result
If the parameters are correct and the file could be written the operator write_contour_xld_dxf returns the
value H_MSG_TRUE. Otherwise, an exception is raised.
Parallelization Information
write_contour_xld_dxf is reentrant and processed without parallelization.
Possible Predecessors
edges_sub_pix
See also
read_contour_xld_dxf, write_polygon_xld_dxf, query_contour_attribs_xld,
query_contour_global_attribs_xld, get_contour_attrib_xld,
get_contour_global_attrib_xld
Module
Foundation
Result
If the parameters are correct and the file could be written, the operator write_polygon_xld_arc_info
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
write_polygon_xld_arc_info is reentrant and processed without parallelization.
Possible Predecessors
affine_trans_polygon_xld
See also
read_world_file, read_polygon_xld_arc_info, write_contour_xld_arc_info
Module
Foundation
Filter
3.1 Arithmetic
abs_image ( const Hobject Image, Hobject *ImageAbs )
T_abs_image ( const Hobject Image, Hobject *ImageAbs )
Result
The operator abs_image returns the value H_MSG_TRUE. The behavior in case of empty input (no input
images available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
abs_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
convert_image_type, power_byte
Module
Foundation
The operator add_image adds two images. The gray values (g1, g2) of the input images (Image1 and Image2)
are transformed as follows:

g’ := (g1 + g2) * Mult + Add

If an overflow or an underflow occurs the values are clipped. This is not the case with int2 images if Mult is equal
to 1 and Add is equal to 0: here, the underflow and overflow check is skipped to reduce the runtime. The resulting
image is stored in ImageResult.
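For byte images the per-pixel rule g’ = (g1 + g2) * Mult + Add with clipping can be sketched as follows (a helper of our own for illustration, not the operator itself):

```c
#include <assert.h>

/* Sketch of the add_image pixel rule for byte images:
 * g' = (g1 + g2) * Mult + Add, clipped to the 0..255 range. */
static unsigned char add_pixel_byte(unsigned char g1, unsigned char g2,
                                    double mult, double add)
{
    double v = ((double)g1 + (double)g2) * mult + add;
    if (v < 0.0)
        return 0;                     /* underflow: clip to 0 */
    if (v > 255.0)
        return 255;                   /* overflow: clip to 255 */
    return (unsigned char)v;
}
```

With the default Mult = 0.5 and Add = 0 the result stays in range for any pair of byte pixels, which is why that default is a safe averaging choice.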
It is possible to add byte images with int2, uint2 or int4 images and to add int4 to int2 or uint2 images. In this case
the result will be of type int2 or int4 respectively.
Several images can be processed in one call. In this case both input parameters contain the same number of images
which are then processed in pairs. An output image is generated for every pair.
Please note that the runtime of the operator varies with different control parameters. For frequently used combinations special optimizations are used. Additionally, for byte, int2, uint2, and int4 images special optimizations are implemented that use SIMD technology. The actual application of these special optimizations is controlled by the
system parameter ’mmx_enable’ (see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction
set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of add_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4 / real / direction / cyclic / complex
Result image(s) by the addition.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray value adaption.
Default Value : 0.5
Suggested values : Mult ∈ {0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 5.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray value range adaption.
Default Value : 0
Suggested values : Add ∈ {0, 64, 128, 255, 512}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
add_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);
Result
The operator add_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception is raised.
Parallelization Information
add_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
sub_image, mult_image
See also
sub_image, mult_image
Module
Foundation
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2
/ int4 / real / complex
Result image(s) by the division.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray range adaption.
Default Value : 255
Suggested values : Mult ∈ {0.1, 0.2, 0.5, 1.0, 2.0, 3.0, 10, 100, 500, 1000}
Typical range of values : -1000 ≤ Mult ≤ 1000
Minimum Increment : 0.001
Recommended Increment : 1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray range adaption.
Default Value : 0
Suggested values : Add ∈ {0.0, 128.0, 256.0, 1025}
Typical range of values : -1000 ≤ Add ≤ 1000
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
div_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);
Result
The operator div_image returns the value H_MSG_TRUE if the parameters are correct. The be-
havior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception is raised.
Parallelization Information
div_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, mult_image
See also
add_image, sub_image, mult_image
Module
Foundation
Invert an image.
The operator invert_image inverts the gray values of an image. For images of the ’byte’ and ’cyclic’ type the
result is calculated as:
g' = 255 − g
In the case of signed types the values are negated. The resulting image has the same pixel type as the input image.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image(s).
. ImageInvert (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4 / real
Image(s) with inverted gray values.
Example
read_image(&Orig,"fabrik");
invert_image(Orig,&Invert);
disp_image(Invert,WindowHandle);
Parallelization Information
invert_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
watersheds
Alternatives
scale_image
See also
scale_image, add_image, sub_image
Module
Foundation
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMax (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic
Result image(s) by the maximization.
Example
read_image(&Bild1,"affe");
read_image(&Bild2,"fabrik");
max_image(Bild1,Bild2,&Max);
disp_image(Max,WindowHandle);
Result
If the parameter values are correct the operator max_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
max_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
max_image
See also
min_image
Module
Foundation
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic
Image(s) 2.
. ImageMin (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic
Result image(s) by the minimization.
Result
If the parameter values are correct the operator min_image returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
min_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_erosion
See also
max_image, min_image
Module
Foundation
g' := g1 ∗ g2 ∗ Mult + Add
Parameter
. Image1 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 1.
. Image2 (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
/ direction / cyclic / complex
Image(s) 2.
. ImageResult (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4 / real / direction / cyclic / complex
Result image(s) of the product.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Factor for gray range adaption.
Default Value : 0.005
Suggested values : Mult ∈ {0.001, 0.01, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Value for gray range adaption.
Default Value : 0
Suggested values : Add ∈ {0.0, 128.0, 256.0}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
mult_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);
Result
The operator mult_image returns the value H_MSG_TRUE if the parameters are correct. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
mult_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_image, sub_image, div_image
See also
add_image, sub_image, div_image
Module
Foundation
The operator scale_image scales the input images (Image) by the following transformation:
g' := g ∗ Mult + Add
To map the gray value range [GMin, GMax] to [0, 255], the parameters can be chosen as
Mult = 255 / (GMax − GMin),    Add = −Mult ∗ GMin
The values for GMin and GMax can be determined, e.g., with the operator min_max_gray.
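The mapping above can be written out in plain C. The following is an illustrative sketch of the arithmetic only (not HALCON code), assuming GMax > GMin:

```c
#include <assert.h>
#include <math.h>

/* Scale parameters that map the gray value range [gmin, gmax] to [0, 255],
 * as in the formulas above (illustrative sketch, assumes gmax > gmin). */
static void scale_params(double gmin, double gmax, double *mult, double *add)
{
    *mult = 255.0 / (gmax - gmin);
    *add  = -(*mult) * gmin;
}

/* The per-pixel transformation g' = g * Mult + Add. */
static double scale_gray(double g, double mult, double add)
{
    return g * mult + add;
}
```

For example, with GMin = 10 and GMax = 20 this maps gray value 10 to 0 and gray value 20 to 255.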
Please note that the runtime of the operator varies with different control parameters. For frequently used combi-
nations special optimizations are used. Additionally, special optimizations are implemented that use fixed point
arithmetic (for int2 and uint2 images), and further optimizations that use SIMD technology (for byte, int2, and uint2
images). The actual application of these special optimizations is controlled by the system parameters ’int_zooming’
and ’mmx_enable’ (see set_system). If ’int_zooming’ is set to ’true’, the internal calculation is performed us-
ing fixed point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed
gray values is slightly lower in this mode. The difference to the more accurate calculation (using ’int_zooming’
= ’false’) is typically less than two gray levels. If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is
available), the internal calculations are performed using fixed point arithmetic and SIMD technology. In this case
the setting of ’int_zooming’ is ignored.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of scale_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real /
direction / cyclic / complex
Image(s) whose gray values are to be scaled.
. ImageScaled (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4 / real / direction / cyclic / complex
Scaled result image(s).
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Scale factor.
Default Value : 0.01
Suggested values : Mult ∈ {0.001, 0.003, 0.005, 0.008, 0.01, 0.02, 0.03, 0.05, 0.08, 0.1, 0.5, 1.0}
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Offset.
Default Value : 0
Suggested values : Add ∈ {0, 10, 50, 100, 200, 500}
Minimum Increment : 0.01
Recommended Increment : 1.0
Result
The operator scale_image returns the value H_MSG_TRUE if the parameters are correct. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). Otherwise, an exception is raised.
Parallelization Information
scale_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
min_max_gray
Alternatives
mult_image, add_image, sub_image
See also
min_max_gray
Module
Foundation
The operator sub_image subtracts two images pixel by pixel: g' := (g1 − g2) ∗ Mult + Add. The use of SIMD technology is controlled via the system parameter ’mmx_enable’ (see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal calculations are performed using SIMD technology.
Attention
Note that SIMD technology performs best on large, compact input regions. Depending on the input region and
the capabilities of the hardware the execution of sub_image might even take significantly more time with
SIMD technology than without. In this case, the use of SIMD technology can be avoided by set_system
(’mmx_enable’,’false’).
Parameter
. ImageMinuend (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real / direction / cyclic / complex
Minuend(s).
. ImageSubtrahend (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 /
uint2 / int4 / real / direction /
cyclic / complex
Subtrahend(s).
. ImageSub (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4
/ real / direction / cyclic / complex
Result image(s) by the subtraction.
. Mult (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Correction factor.
Default Value : 1.0
Suggested values : Mult ∈ {0.0, 1.0, 2.0, 3.0, 4.0}
Typical range of values : -255.0 ≤ Mult ≤ 255.0
Minimum Increment : 0.001
Recommended Increment : 0.1
. Add (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Correction value.
Default Value : 128.0
Suggested values : Add ∈ {0.0, 128.0, 256.0}
Typical range of values : -512.0 ≤ Add ≤ 512.0
Minimum Increment : 0.01
Recommended Increment : 1.0
Example
read_image(&Image0,"fabrik");
disp_image(Image0,WindowHandle);
read_image(&Image1,"Affe");
disp_image(Image1,WindowHandle);
sub_image(Image0,Image1,&Result,2.0,10.0);
disp_image(Result,WindowHandle);
Result
The operator sub_image returns the value H_MSG_TRUE if the parameters are correct. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
sub_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
dual_threshold
Alternatives
mult_image, add_image, sub_image
See also
add_image, mult_image, dyn_threshold, check_difference
Module
Foundation
3.2 Bit
bit_and ( const Hobject Image1, const Hobject Image2, Hobject *ImageAnd )
T_bit_and ( const Hobject Image1, const Hobject Image2,
Hobject *ImageAnd )
read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_and(Image0,Image1,&ImageBitA);
disp_image(ImageBitA,WindowHandle);
Result
If the images are correct (type and number) the operator bit_and returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_and is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_mask, add_image, max_image
See also
bit_mask, add_image, max_image
Module
Foundation
The operator bit_lshift calculates a “left shift” of all pixels of the input image bit by bit. The semantics of the “left shift” operation corresponds to that of C (“<<”) for the respective types (signed char, unsigned char, short, unsigned short, int/long). If an overflow occurs the result is limited to the maximum value of the respective pixel type. Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
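The clamping behavior described above can be illustrated for a single int2 (16-bit signed) pixel in plain C. This is a sketch of the semantics, not the HALCON implementation:

```c
#include <assert.h>
#include <limits.h>

/* Left shift of one int2 (16-bit signed) pixel with the overflow behavior
 * described above: results above the type maximum are clamped to it. */
static short lshift_int2(short g, int shift)
{
    long long v = (long long)g * (1LL << shift);  /* shift without intermediate overflow */
    if (v > SHRT_MAX)
        v = SHRT_MAX;                             /* clamp to the pixel type maximum */
    return (short)v;
}
```

For example, shifting the gray value 200 by 8 bits would yield 51200, which exceeds the int2 maximum of 32767 and is therefore clamped.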
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageLShift (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4
Result image(s) by shift operation.
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift value.
Default Value : 3
Suggested values : Shift ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20, 24, 30, 31}
Typical range of values : 0 ≤ Shift ≤ 31
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Shift ≥ 1) ∧ (Shift ≤ 31)
Example
read_image(&ByteImage,"fabrik");
convert_image_type(ByteImage,&Int2Image,"int2");
bit_lshift(Int2Image,&FullInt2Image,8);
Result
If the images are correct (type) and if Shift has a valid value the operator bit_lshift returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_lshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_rshift
Module
Foundation
read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
bit_not(Image0,&ImageBitN);
disp_image(ImageBitN,WindowHandle);
Result
If the images are correct (type) the operator bit_not returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_not is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_slice, bit_mask
Module
Foundation
read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_or(Image0,Image1,&ImageBitO);
disp_image(ImageBitO,WindowHandle);
Result
If the images are correct (type and number) the operator bit_or returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_or is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_and, add_image
See also
bit_xor, bit_and
Module
Foundation
The operator bit_rshift calculates a “right shift” of all pixels of the input image bit by bit. The semantics of the “right shift” operation corresponds to that of C (“>>”) for the respective types (signed char, unsigned char, short, unsigned short, int/long). Only the pixels within the definition range of the image are processed.
Several images can be processed in one call. An output image is generated for every input image.
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4
Input image(s).
. ImageRShift (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / direction / cyclic
/ int1 / int2 / uint2 / int4
Result image(s) by shift operation.
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift value.
Default Value : 3
Suggested values : Shift ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20, 24, 30, 31}
Typical range of values : 0 ≤ Shift ≤ 31
Minimum Increment : 1
Recommended Increment : 1
Restriction : (Shift ≥ 1) ∧ (Shift ≤ 31)
Example
bit_rshift(Int2Image,&ReducedInt2Image,8);
convert_image_type(ReducedInt2Image,&ByteImage,"byte");
Result
If the images are correct (type) and Shift has a valid value the operator bit_rshift returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_rshift is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
scale_image
See also
bit_lshift
Module
Foundation
read_image(&ByteImage,"fabrik");
for (bit=1; bit<=8; bit++)
{
bit_slice(ByteImage,&Slice,bit);
threshold(Slice,&Region,0,255);
disp_region(Region,WindowHandle);
clear_obj(Slice); clear_obj(Region);
}
Result
If the images are correct (type) and Bit has a valid value, the operator bit_slice returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_slice is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, bit_or
Alternatives
bit_mask
See also
bit_and, bit_lshift
Module
Foundation
Example
read_image(&Image0,"affe");
disp_image(Image0,WindowHandle);
read_image(&Image1,"fabrik");
disp_image(Image1,WindowHandle);
bit_xor(Image0,Image1,&ImageBitX);
disp_image(ImageBitX,WindowHandle);
Result
If the parameter values are correct the operator bit_xor returns the value H_MSG_TRUE. The behavior in case of empty input (no input images available) can be determined by the operator set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
bit_xor is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
bit_or, bit_and, add_image
See also
bit_or, bit_and
Module
Foundation
3.3 Color
cfa_to_rgb ( const Hobject CFAImage, Hobject *RGBImage,
const char *CFAType, const char *Interpolation )
G B G B G B ...
R G R G R G ...
G B G B G B ...
R G R G R G ...
...
Each gray value of the input image CFAImage corresponds to the brightness of the pixel behind the corresponding
color filter. Hence, in the above layout, the pixel (0,0) corresponds to a green color value, while the pixel (0,1)
corresponds to a blue color value. The layout of the Bayer filter is completely determined by the first two elements
of the first row of the image, and can be chosen with the parameter CFAType. In particular, this enables the correct
conversion of color filter array images that have been cropped out of a larger image (e.g., using crop_part or
crop_rectangle1). The algorithm that is used to interpolate the RGB values is determined by the parameter
Interpolation. Currently, the only possible choice is ’bilinear’.
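The relationship between CFAType and the filter color of a pixel can be sketched in plain C. The helper below is illustrative only (not part of HALCON); it assumes the standard Bayer cell, in which the two green filters lie on a diagonal:

```c
#include <assert.h>

/* Return 'r', 'g', or 'b' for the filter color at pixel (row, col) of a
 * Bayer CFA image. The CFAType string gives the first two filter colors of
 * the first image row (e.g. "bayer_gb": row 0 starts G B, so row 1 starts
 * R G). Assumes the standard Bayer cell with greens on a diagonal. */
static char cfa_color(const char *cfa_type, int row, int col)
{
    char c0 = cfa_type[6];   /* color at (0,0) */
    char c1 = cfa_type[7];   /* color at (0,1) */
    char c2, c3;             /* colors at (1,0) and (1,1) */
    if (c0 == 'g') { c3 = 'g'; c2 = (c1 == 'b') ? 'r' : 'b'; }
    else           { c2 = 'g'; c3 = (c0 == 'b') ? 'r' : 'b'; }
    if (row % 2 == 0)
        return (col % 2 == 0) ? c0 : c1;
    return (col % 2 == 0) ? c2 : c3;
}
```

For the ’bayer_gb’ layout shown above, this reproduces pixel (0,0) as green and pixel (1,0) as red.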
Parameter
. CFAImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input image.
. RGBImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2
Output image.
. CFAType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Color filter array type.
Default Value : "bayer_gb"
List of values : CFAType ∈ {"bayer_gb", "bayer_gr", "bayer_bg", "bayer_rg"}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Interpolation type.
Default Value : "bilinear"
List of values : Interpolation ∈ {"bilinear"}
Result
cfa_to_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
cfa_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_image1_extern, gen_image1, grab_image
Possible Successors
decompose3
See also
trans_from_rgb
Module
Foundation
Compute the transformation matrix of the principal component analysis of multichannel images.
gen_principal_comp_trans computes the transformation matrix of a principal components analysis of
multichannel images. This is useful for images obtained, e.g., with the thematic mapper of the Landsat satellite.
Because the spectral bands are highly correlated, it is desirable to transform them to uncorrelated images. This can
be used to save storage, since the bands containing little information can be discarded, and with respect to a later
classification step.
The operator gen_principal_comp_trans takes one or more multichannel images
MultichannelImage and computes the transformation matrix Trans for the principal components
analysis, as well as its inverse TransInv. All input images must have the same number of channels.
The principal components analysis is performed based on the collection of data of all images. Hence,
gen_principal_comp_trans facilitates using the statistics of multiple images.
If n is the number of channels, Trans and TransInv are matrices of dimension n × (n + 1), which describe
an affine transformation of the multichannel gray values. They can be used to transform a multichannel image
with linear_trans_color. For information purposes, the mean gray value of the channels and the n × n
covariance matrix of the channels are returned in Mean and Cov, respectively. The parameter InfoPerComp
contains the relative information content of each output channel.
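As an illustration of the statistics involved, the following plain-C sketch computes the mean vector and covariance matrix for two-channel data, i.e., the quantities returned in Mean and Cov. This is toy code, not the HALCON implementation; the PCA transformation itself would additionally require an eigendecomposition of Cov:

```c
#include <assert.h>

/* Mean and covariance of 2-channel pixel data (toy illustration).
 * mean has 2 entries; cov is the 2 x 2 covariance matrix, row-major. */
static void mean_cov2(const double ch1[], const double ch2[], int n,
                      double mean[2], double cov[4])
{
    int i;
    mean[0] = mean[1] = 0.0;
    for (i = 0; i < n; i++) { mean[0] += ch1[i]; mean[1] += ch2[i]; }
    mean[0] /= n; mean[1] /= n;
    cov[0] = cov[1] = cov[2] = cov[3] = 0.0;
    for (i = 0; i < n; i++) {
        double d1 = ch1[i] - mean[0], d2 = ch2[i] - mean[1];
        cov[0] += d1 * d1; cov[1] += d1 * d2;
        cov[2] += d2 * d1; cov[3] += d2 * d2;
    }
    for (i = 0; i < 4; i++) cov[i] /= n;
}
```

Highly correlated channels produce large off-diagonal entries in Cov, which is exactly the redundancy the principal components analysis removes.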
Parameter
. MultichannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction /
cyclic / int1 / int2 / uint2 / int4
/ real
Multichannel input image.
. Trans (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Transformation matrix for the computation of the PCA.
. TransInv (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Transformation matrix for the computation of the inverse PCA.
. Mean (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Mean gray value of the channels.
. Cov (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Covariance matrix of the channels.
. InfoPerComp (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Information content of the transformed channels.
Result
The operator gen_principal_comp_trans returns the value H_MSG_TRUE if the parameters are correct.
Otherwise an exception is raised.
Parallelization Information
gen_principal_comp_trans is reentrant and processed without parallelization.
Possible Successors
linear_trans_color
Alternatives
principal_comp
Module
Foundation
For example, the YIQ transformation corresponds to the transformation matrix
[0.299, 0.587, 0.114, 0.0, 0.595, −0.276, −0.333, 128.0, 0.209, −0.522, 0.287, 128.0]
Here, it should be noted that the above transformation is unnormalized, i.e., the resulting color values can lie
outside the range [0, 255]. The transformation ’yiq’ in trans_from_rgb additionally scales the rows of the
matrix (except for the constant offset) appropriately.
To avoid a loss of information, linear_trans_color returns an image of type real. If a different image type
is desired, the image can be transformed with convert_image_type.
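The per-pixel effect of an n × (n + 1) transformation matrix can be sketched in plain C (illustrative only; linear_trans_color applies this to every pixel of the multichannel image). The row-major layout with the constant offset in the last column of each row is an assumption consistent with the flattened matrix shown above:

```c
#include <assert.h>

/* Apply an n x (n+1) affine transformation matrix m (row-major, last column
 * of each row is the constant offset) to one n-channel gray value vector. */
static void affine_color(const double *m, int n, const double *in, double *out)
{
    int r, c;
    for (r = 0; r < n; r++) {
        double acc = m[r * (n + 1) + n];   /* constant offset */
        for (c = 0; c < n; c++)
            acc += m[r * (n + 1) + c] * in[c];
        out[r] = acc;
    }
}
```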
Parameter
. Image (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Multichannel input image.
. ImageTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject * : real
Multichannel output image.
. TransMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Transformation matrix for the color values.
Result
The operator linear_trans_color returns the value H_MSG_TRUE if the parameters are correct. Otherwise
an exception is raised.
Parallelization Information
linear_trans_color is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_principal_comp_trans
Possible Successors
convert_image_type
Alternatives
principal_comp, trans_from_rgb, trans_to_rgb
Module
Foundation
Parameter
. RGBImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Three-channel RGB image.
. GrayImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / int2 / uint2
Gray scale image.
Parallelization Information
rgb1_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose3
Alternatives
trans_from_rgb, rgb3_to_gray
Module
Foundation
Parameter
. ImageRed (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (red channel).
. ImageGreen (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (green channel).
. ImageBlue (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Input image (blue channel).
. ImageGray (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / int2 / uint2
Gray scale image.
Parallelization Information
rgb3_to_gray is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Alternatives
rgb1_to_gray, trans_from_rgb
Module
Foundation
Transform an image from the RGB color space to an arbitrary color space.
trans_from_rgb transforms an image from the RGB color space to an arbitrary color space (ColorSpace).
The three channels of the image are passed as three separate images on input and output.
The operator trans_from_rgb supports the image types byte, uint2, int4, and real. In the case of int4 images, the images should not contain negative values. In the case of real images, all values should lie between 0 and 1. Otherwise, the results of the transformation may not be reasonable.
Certain scalings are performed according to the image type:
• Considering byte and uint2 images, the domain of color space values is generally mapped to the full domain
of [0..255] resp. [0..65535]. Because of this, the origin of signed values (e.g., CIELab or YIQ) may not be at
the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
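The hue coding described above amounts to a simple rescaling of the angle. A plain-C sketch (illustrative, not HALCON code):

```c
#include <assert.h>

/* Encode a hue angle h in radians ([0..2*pi]) as described above:
 * byte images map [0..2*pi] to [0..255], uint2/int4 images use angle
 * minutes [0..21600], real images keep radians unchanged. */
static const double TWO_PI = 6.283185307179586;

static int hue_to_byte(double h)    { return (int)(h / TWO_PI * 255.0 + 0.5); }
static int hue_to_minutes(double h) { return (int)(h / TWO_PI * 21600.0 + 0.5); }
```

Note that 21600 is simply 360 degrees expressed in angle minutes (360 · 60).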
The following transformations are supported:
(All ranges of values are based on RGB values scaled to [0;1]. To obtain the range of values for a certain image type, they must be multiplied by the maximum gray value of the image type, e.g., 255 in the case of a byte image.)
’yiq’
Y 0.299 0.587 0.114 R
I = 0.595 −0.276 −0.333 G
Q 0.209 −0.522 0.287 B
Range of values:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
’yuv’
Range of values:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]
’argyb’
A 0.30 0.59 0.11 R
Rg = 0.50 −0.50 0.00 G
Yb 0.25 0.25 −0.50 B
Range of values:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Y b ∈ [−0.5; 0.5]
’ciexyz’
X 0.412453 0.357580 0.180423 R
Y = 0.212671 0.715160 0.072169 G
Z 0.019334 0.119193 0.950227 B
The primary colors used correspond to sRGB and CIE Rec. 709. D65 is used as the white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000), blue := (0.1500, 0.0600), white65 := (0.3127, 0.3290)
Range of values:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
’hls’
min = min(R,G,B)
max = max(R,G,B)
L = (min + max) / 2
if (max == min)
H = 0
S = 0
else
if (L > 0.5)
S = (max - min) / (2 - max - min)
else
S = (max - min) / (max + min)
fi
if (R == max)
H = ((G - B) / (max - min)) * 60
elif (G == max)
H = (2 + (B - R) / (max - min)) * 60
elif (B == max)
H = (4 + (R - G) / (max - min)) * 60
fi
fi
Range of values:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]
’hsi’
M1     2/√6   −1/√6   −1/√6    R
M2  =   0      1/√2   −1/√2    G
I1     1/√3    1/√3    1/√3    B

H = arctan(M2 / M1)
S = √(M1² + M2²)
I = I1 / √3

Range of values:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
’cielab’
X 0.412453 0.357580 0.180423 R
Y = 0.212671 0.715160 0.072169 G
Z 0.019334 0.119193 0.950227 B
L = 116 ∗ f(Y/Yw) − 16
a = 500 ∗ (f(X/Xw) − f(Y/Yw))
b = 200 ∗ (f(Y/Yw) − f(Z/Zw))
where
f(t) = t^(1/3),                  t > (24/116)³
f(t) = (841/108) ∗ t + 16/116,   otherwise
Black point B:
(Rb , Gb , Bb ) = (0, 0, 0)
White point W = (Rw , Gw , Bw ), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (216 − 1, 216 − 1, 216 − 1),
Wint4 = (231 − 1, 231 − 1, 231 − 1), Wreal = (1.0, 1.0, 1.0)
Range of values:
L ∈ [0; 100], a ∈ [−86.1813; 98.2352], b ∈ [−107.8617; 94.4758]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
’i1i2i3’
I1 0.333 0.333 0.333 R
I2 = 1.0 0.0 −1.0 G
I3 −0.5 1.0 −0.5 B
Range of values:
I1 ∈ [0; 1], I2 ∈ [−1; 1], I3 ∈ [−1; 1]
’ciexyz2’
X 0.620 0.170 0.180 R
Y = 0.310 0.590 0.110 G
Z 0.000 0.066 1.020 B
Range of values:
X ∈ [0; 0.970], Y ∈ [0; 1.010], Z ∈ [0; 1.086]
’ciexyz3’
X 0.618 0.177 0.205 R
Y = 0.299 0.587 0.114 G
Z 0.000 0.056 0.944 B
Range of values:
X ∈ [0; 1], Y ∈ [0; 1], Z ∈ [0; 1]
’ciexyz4’
X 0.476 0.299 0.175 R
Y = 0.262 0.656 0.082 G
Z 0.020 0.161 0.909 B
Used primary colors (x, y, z):
red := (0.628, 0.346, 0.026), green := (0.268, 0.588, 0.144), blue := (0.150, 0.070, 0.780), white65 := (0.313, 0.329, 0.358)
Range of values:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
Result
trans_from_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
trans_from_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3
Alternatives
rgb1_to_gray, rgb3_to_gray
See also
trans_to_rgb
Module
Foundation
Transform an image from an arbitrary color space to the RGB color space.
trans_to_rgb transforms an image from an arbitrary color space (ColorSpace) to the RGB color space.
The three channels of the image are passed as three separate images on input and output.
The operator trans_to_rgb supports the image types byte, uint2, int4, and real. The domain of the input
images must match the domain provided by a corresponding transformation with trans_from_rgb. If not, the
results of the transformation may not be reasonable.
This includes some scalings in the case of certain image types and transformations:
• Considering byte and uint2 images, the domain of color space values is expected to be spread to the full
domain of [0..255] resp. [0..65535]. This includes a shift in the case of signed values, such that the origin of
signed values (e.g. CIELab or YIQ) may not be at the center of the domain.
• Hue values are represented by angles of [0..2π] and are coded for the particular image types differently:
– byte-images map the angle domain on [0..255].
– uint2/int4-images are coded in angle minutes [0..21600].
– real-images are coded in radians [0..2π] .
• Saturation values are represented by percentages of [0..100] and are coded for the particular image type
differently:
– byte-images map the saturation values to [0..255].
– uint2/int4-images map the saturation values to [0..10000].
– real-images map the saturation values to [0..1].
Domain:
Y ∈ [0; 1.03], I ∈ [−0.609; 0.595], Q ∈ [−0.522; 0.496]
Domain:
Y ∈ [0; 1], U ∈ [−0.436; 0.436], V ∈ [−0.615; 0.496]
’argyb’
( R )   ( 1.00   1.29   0.22 ) ( A  )
( G ) = ( 1.00  −0.71   0.22 ) ( Rg )
( B )   ( 1.00   0.29  −1.78 ) ( Yb )
Domain:
A ∈ [0; 1], Rg ∈ [−0.5; 0.5], Y b ∈ [−0.5; 0.5]
’ciexyz’
( R )   (  3.240479  −1.53715   −0.498535 ) ( X )
( G ) = ( −0.969256   1.875991   0.041556 ) ( Y )
( B )   (  0.055648  −0.204043   1.057311 ) ( Z )
The primary colors used correspond to sRGB and CIE Rec. 709, respectively. D65 is used as white point.
Used primary colors (x, y):
red := (0.6400, 0.3300), green := (0.3000, 0.6000),
blue := (0.1500, 0.0600), whiteD65 := (0.3127, 0.3290)
Domain:
X ∈ [0; 0.950456], Y ∈ [0; 1], Z ∈ [0; 1.088754]
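The ’ciexyz’ back-transformation above is a plain linear mapping; as a sketch (xyz_to_rgb_srgb is a hypothetical helper name, not a HALCON operator) it reads:

```c
#include <assert.h>
#include <math.h>

/* XYZ -> RGB using the matrix given above (sRGB primaries, D65 white
 * point). Hypothetical helper for illustration only. */
static void xyz_to_rgb_srgb(double x, double y, double z,
                            double *r, double *g, double *b)
{
    *r =  3.240479 * x - 1.53715  * y - 0.498535 * z;
    *g = -0.969256 * x + 1.875991 * y + 0.041556 * z;
    *b =  0.055648 * x - 0.204043 * y + 1.057311 * z;
}
```

Feeding in the D65 white point (0.950456, 1, 1.088754) yields approximately (1, 1, 1), consistent with the domain stated above.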
’cielab’
fy = (L + 16)/116
fx = a/500 + fy
fz = fy − b/200
X = Xw ∗ fx³ , if fx > 24/116
X = (fx − 16/116) ∗ Xw ∗ 108/841 , otherwise
Y = Yw ∗ fy³ , if fy > 24/116
Y = (fy − 16/116) ∗ Yw ∗ 108/841 , otherwise
Z = Zw ∗ fz³ , if fz > 24/116
Z = (fz − 16/116) ∗ Zw ∗ 108/841 , otherwise
( R )   (  3.240479  −1.53715   −0.498535 ) ( X )
( G ) = ( −0.969256   1.875991   0.041556 ) ( Y )
( B )   (  0.055648  −0.204043   1.057311 ) ( Z )
Black point B:
(Rb , Gb , Bb ) = (0, 0, 0)
White point W = (Rw , Gw , Bw ), according to image type:
Wbyte = (255, 255, 255), Wuint2 = (216 − 1, 216 − 1, 216 − 1),
Wint4 = (231 − 1, 231 − 1, 231 − 1), Wreal = (1.0, 1.0, 1.0)
Domain:
L ∈ [0; 100], a ∈ [−94.3383; 90.4746], b ∈ [−101.3636; 84.4473]
(Scaled to the maximum gray value in the case of byte and uint2. In the case of int4 L and a are scaled
to the maximum gray value, b is scaled to the minimum gray value, such that the origin stays at 0.)
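The Lab -> XYZ step can be sketched in C for real images (white point W = (1.0, 1.0, 1.0)); lab_to_xyz and finv are hypothetical helpers, and the sketch assumes the standard CIELab inverse with fz = fy − b/200:

```c
#include <assert.h>
#include <math.h>

/* Inverse of the CIELab f() function: cubic branch above 24/116,
 * linear branch (slope 108/841) below, scaled by the white point w. */
static double finv(double f, double w)
{
    return (f > 24.0 / 116.0) ? w * f * f * f
                              : (f - 16.0 / 116.0) * w * 108.0 / 841.0;
}

/* Lab -> XYZ for real images, white point (1.0, 1.0, 1.0).
 * Hypothetical helper for illustration; not a HALCON operator. */
static void lab_to_xyz(double L, double a, double b,
                       double *x, double *y, double *z)
{
    double fy = (L + 16.0) / 116.0;
    double fx = a / 500.0 + fy;
    double fz = fy - b / 200.0;
    *x = finv(fx, 1.0);
    *y = finv(fy, 1.0);
    *z = finv(fz, 1.0);
}
```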
’hls’
Hi = integer(H * 6)
Hf = fraction(H * 6)
if (L <= 0.5)
max = L * (S + 1)
else
max = L + S - (L * S)
fi
min = 2 * L - max
if (S == 0)
R = L
G = L
B = L
else
if (Hi == 0)
R = max
G = min + Hf * (max - min)
B = min
elif (Hi == 1)
R = min + (1 - Hf) * (max - min)
G = max
B = min
elif (Hi == 2)
R = min
G = max
B = min + Hf * (max - min)
elif (Hi == 3)
R = min
G = min + (1 - Hf) * (max - min)
B = max
elif (Hi == 4)
R = min + Hf * (max - min)
G = min
B = max
elif (Hi == 5)
R = max
G = min
B = min + (1 - Hf) * (max - min)
fi
fi
Domain:
H ∈ [0; 2π], L ∈ [0; 1], S ∈ [0; 1]
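The pseudocode above translates directly to C; here H is assumed to be normalized to [0, 1) (i.e., the angle already divided by 2π), and hls_to_rgb is an illustrative helper, not the HALCON operator itself:

```c
#include <assert.h>

/* Direct C translation of the 'hls' pseudocode above.
 * h in [0, 1), l and s in [0, 1]. Hypothetical helper. */
static void hls_to_rgb(double h, double l, double s,
                       double *r, double *g, double *b)
{
    int    hi = (int)(h * 6.0);
    double hf = h * 6.0 - hi;
    double max = (l <= 0.5) ? l * (s + 1.0) : l + s - l * s;
    double min = 2.0 * l - max;

    if (s == 0.0) {
        *r = *g = *b = l;                       /* achromatic case */
    } else {
        switch (hi) {
        case 0:  *r = max; *g = min + hf * (max - min); *b = min; break;
        case 1:  *r = min + (1.0 - hf) * (max - min); *g = max; *b = min; break;
        case 2:  *r = min; *g = max; *b = min + hf * (max - min); break;
        case 3:  *r = min; *g = min + (1.0 - hf) * (max - min); *b = max; break;
        case 4:  *r = min + hf * (max - min); *g = min; *b = max; break;
        default: *r = max; *g = min; *b = min + (1.0 - hf) * (max - min); break;
        }
    }
}
```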
’hsi’
M1 = S ∗ sin H
M2 = S ∗ cos H
I1 = √3 ∗ I
( R )   (  2/√6     0      1/√3 ) ( M1 )
( G ) = ( −1/√6    1/√2    1/√3 ) ( M2 )
( B )   ( −1/√6   −1/√2    1/√3 ) ( I1 )
Domain:
H ∈ [0; 2π], S ∈ [0; √(2/3)], I ∈ [0; 1]
’hsv’
if (S == 0)
R = V
G = V
B = V
else
Hi = integer(H)
Hf = fraction(H)
if (Hi == 0)
R = V
G = V * (1 - (S * (1 - Hf)))
B = V * (1 - S)
elif (Hi == 1)
R = V * (1 - (S * Hf))
G = V
B = V * (1 - S)
elif (Hi == 2)
R = V * (1 - S)
G = V
B = V * (1 - (S * (1 - Hf)))
elif (Hi == 3)
R = V * (1 - S)
G = V * (1 - (S * Hf))
B = V
elif (Hi == 4)
R = V * (1 - (S * (1 - Hf)))
G = V * (1 - S)
B = V
elif (Hi == 5)
R = V
G = V * (1 - S)
B = V * (1 - (S * Hf))
fi
fi
Domain:
H ∈ [0; 2π], S ∈ [0; 1], V ∈ [0; 1]
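The pseudocode above can be sketched in C; h is assumed to be pre-scaled to [0, 6) (one unit per 60-degree sector), and hsv_to_rgb is an illustrative helper, not the HALCON operator:

```c
#include <assert.h>

/* C translation of the 'hsv' pseudocode above.
 * h in [0, 6), s and v in [0, 1]. Hypothetical helper. */
static void hsv_to_rgb(double h, double s, double v,
                       double *r, double *g, double *b)
{
    int    hi;
    double hf, p, q, t;

    if (s == 0.0) {
        *r = *g = *b = v;                   /* achromatic case */
        return;
    }
    hi = (int)h;
    hf = h - hi;
    p = v * (1.0 - s);                      /* V * (1 - S)              */
    q = v * (1.0 - s * hf);                 /* V * (1 - S*Hf)           */
    t = v * (1.0 - s * (1.0 - hf));         /* V * (1 - S*(1 - Hf))     */
    switch (hi) {
    case 0:  *r = v; *g = t; *b = p; break;
    case 1:  *r = q; *g = v; *b = p; break;
    case 2:  *r = p; *g = v; *b = t; break;
    case 3:  *r = p; *g = q; *b = v; break;
    case 4:  *r = t; *g = p; *b = v; break;
    default: *r = v; *g = p; *b = q; break;
    }
}
```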
’ciexyz4’
( R )   (  2.750  −1.149  −0.426 ) ( X )
( G ) = ( −1.118   2.026   0.033 ) ( Y )
( B )   (  0.138  −0.333   1.104 ) ( Z )
Domain:
X ∈ [0; 0.951], Y ∈ [0; 1], Z ∈ [0; 1.088]
Parameter
. ImageInput1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 1).
. ImageInput2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 2).
. ImageInput3 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / int4 / real
Input image (channel 3).
. ImageRed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Red channel.
. ImageGreen (output_object) . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Green channel.
. ImageBlue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2 / int4 / real
Blue channel.
. ColorSpace (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Color space of the input image.
Default Value : "hsv"
List of values : ColorSpace ∈ {"hsi", "yiq", "yuv", "argyb", "ciexyz", "ciexyz4", "cielab", "hls", "hsv"}
Example
Result
trans_to_rgb returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
trans_to_rgb is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
decompose3
Possible Successors
compose3, disp_color
See also
decompose3
Module
Foundation
3.4 Edges
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges(ThinEdge,EdgeAmp,&CloseEdges,15);
skeleton(CloseEdges,&ThinCloseEdges);
Result
close_edges returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
close_edges is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
edges_image, sobel_amp, threshold, skeleton
Possible Successors
skeleton
Alternatives
close_edges_length, dilation1, closing
See also
gray_skeleton
Module
Foundation
Example
sobel_amp(Image,&EdgeAmp,"sum_abs",5);
threshold(EdgeAmp,&EdgeRegion,40.0,255.0);
skeleton(EdgeRegion,&ThinEdge);
close_edges_length(ThinEdge,EdgeAmp,&CloseEdges,15,3);
Result
close_edges_length returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
close_edges_length is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
edges_image, sobel_amp, threshold, skeleton
Alternatives
close_edges, dilation1, closing
References
M. Üsbeck: “Untersuchungen zur echtzeitfähigen Segmentierung”; Studienarbeit, Bayerisches Forschungszentrum
für Wissensbasierte Systeme (FORWISS), Erlangen, 1993.
Module
Foundation
A = E ∗ G − F²
E = 1 + (∂g(x, y)/∂x)²
F = (∂g(x, y)/∂x) ∗ (∂g(x, y)/∂y)
G = 1 + (∂g(x, y)/∂y)²
∂²g(x, y)/∂x² + ∂²g(x, y)/∂y²
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real
Input image.
. DerivGauss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : real
Filtered result image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double
Sigma of the Gaussian.
Default Value : 1.0
Suggested values : Sigma ∈ {0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0}
Typical range of values : 0.2 ≤ Sigma ≤ 50.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma > 0.0
read_image(&Image,"mreut");
derivate_gauss(Image,&Gauss,3.0,"x");
zero_crossing(Gauss,&ZeroCrossings);
Parallelization Information
derivate_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, laplace_of_gauss, binomial_filter, gauss_image, smooth_image,
isotropic_diffusion
See also
zero_crossing, dual_threshold
Module
Foundation
sigma1 = Sigma / √( −2 ∗ log(1/SigFactor) / (SigFactor² − 1) )
sigma2 = sigma1 / SigFactor
DiffOfGauss = (Image ∗ gauss(sigma1)) − (Image ∗ gauss(sigma2))
For a SigFactor = 1.6, according to Marr, an approximation to the Mexican-Hat-Operator results. The resulting
image is stored in DiffOfGauss.
Parameter
read_image(&Image,"mreut");
diff_of_gauss(Image,&Laplace,2.0,1.6);
zero_crossing(Laplace,&ZeroCrossings);
Complexity
The execution time depends linearly on the number of pixels and the size of sigma.
Result
diff_of_gauss returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
diff_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, derivate_gauss
References
D. Marr: “Vision (A computational investigation into human representation and processing of visual information)”;
New York, W.H. Freeman and Company; 1982.
Module
Foundation
    ( fx^T fx   fx^T fy )
G = (                   )
    ( fx^T fy   fy^T fy )
with
fx^T fx = Σ_{i=1..n} (∂fi/∂x)² ,  fx^T fy = Σ_{i=1..n} (∂fi/∂x)(∂fi/∂y) ,  fy^T fy = Σ_{i=1..n} (∂fi/∂y)²
The partial derivatives of the images, which are necessary to calculate the metric tensor, are calculated with the
corresponding edge filters, analogously to edges_image. For Filter = ’canny’, the partial derivatives of
the Gaussian smoothing masks are used (see derivate_gauss), for Filter = ’deriche1’ and Filter = ’deriche2’ the
corresponding Deriche filters, for Filter = ’shen’ the corresponding Shen filters, and for Filter = ’sobel_fast’
the Sobel filter. Analogously to single-channel images, the gradient direction is defined by the vector v in which the
rate of change f is maximum. The vector v is given by the eigenvector corresponding to the largest eigenvalue of
G. The square root of the eigenvalue is the equivalent of the gradient magnitude (the amplitude) for single-channel
images, and is returned in ImaAmp. For single-channel images, both definitions are equivalent. Since the gradient
magnitude may be larger than what can be represented in the input image data type (byte or uint2), it is stored in
the next larger data type (uint2 or int4) in ImaAmp. The eigenvector also is used to define the edge direction. In
contrast to single-channel images, the edge direction can only be defined modulo 180 degrees. Like in the output
of edges_image, the edge directions are stored in 2-degree steps, and are returned in ImaDir. Points with
edge amplitude 0 are assigned the edge direction 255 (undefined direction). For speed reasons, the edge directions
are not computed explicitly for Filter = ’sobel_fast’. Therefore, ImaDir is an empty object in this case.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete values
of the parameter Alpha. It decreases for increasing Alpha for the Deriche and Shen filters and increases for
the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide”
filters exhibit a larger invariance to noise, but also a decreased ability to detect small details. Non-recursive filters,
such as the Canny filter, are realized using filter masks, and thus the execution time increases for increasing filter
width. In contrast, the execution time for recursive filters does not depend on the filter width. Thus, arbitrary
filter widths are possible using the Deriche and Shen filters without increasing the run time of the operator. The
resulting advantage in speed compared to the Canny operator naturally increases for larger filter widths. As border
treatment, the recursive operators assume that the images are zero outside of the image, while the Canny operator
mirrors the gray value at the image border. Comparable filter widths can be obtained by the following choices of
Alpha:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,1000,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImaAmp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : uint2 / int4
Edge amplitude (gradient magnitude) image.
. ImaDir (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : direction
Edge direction image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Edge operator to be applied.
Default Value : "canny"
List of values : Filter ∈ {"canny", "deriche1", "deriche2", "shen", "sobel_fast"}
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; pp. 167-187; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; pp. 78-87; 1990.
J. Shen, S. Castan: “An Optimal Linear Operator for Step Edge Detection”; Computer Vision, Graphics, and Image
Processing: Graphical Models and Image Processing, vol. 54, no. 2; pp. 112-133; 1992.
Module
Foundation
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
edges_color_sub_pix extracts subpixel precise color edges from the input image Image. The definition
of color edges is given in the description of edges_color. The same edge filters as in edges_color
can be selected: ’canny’, ’deriche1’, ’deriche2’, and ’shen’. In addition, a fast Sobel filter can be selected with
’sobel_fast’. The filters are specified by the parameter Filter.
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily. For a detailed description of this
parameter see edges_color. This parameter is ignored for Filter = ’sobel_fast’.
The extracted edges are returned as subpixel precise XLD contours in Edges. For all edge operators except for
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
edges_color_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis thresh-
old operation, which is also used in edges_sub_pix and lines_gauss. Points with an amplitude larger
than High are immediately accepted as belonging to an edge, while points with an amplitude smaller than Low
are rejected. All other points are accepted as edges if they are connected to accepted edge points (see also
lines_gauss and hysteresis_threshold).
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in edges_sub_pix and
lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Parameter
Module
2D Metrology
edge direction                   [Ex, Ey]   r
from bottom to top                 0/+      0
from lower right to upper left     +/−      ]0, 90[
from right to left                 +/0      90
from upper right to lower left     +/+      ]90, 180[
from top to bottom                 0/−      180
from upper left to lower right     −/+      ]180, 270[
from left to right                 −/0      270
from lower left to upper right     −/−      ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all filters except ’sobel_fast’ (where
the filter width is 3 × 3 and Alpha is ignored), and can be estimated by calling info_edges for concrete
values of the parameter Alpha. It decreases for increasing Alpha for the Deriche, Lanser and Shen filters and
increases for the Canny filter, where it is the standard deviation of the Gaussian on which the Canny operator
is based. “Wide” filters exhibit a larger invariance to noise, but also a decreased ability to detect small details.
Non-recursive filters, such as the Canny filter, are realized using filter masks, and thus the execution time increases
for increasing filter width. In contrast, the execution time for recursive filters does not depend on the filter width.
Thus, arbitrary filter widths are possible using the Deriche, Lanser and Shen filters without increasing the run time
of the operator. The resulting advantage in speed compared to the Canny operator naturally increases for larger
filter widths. As border treatment, the recursive operators assume that the images are zero outside of the image,
while the Canny operator repeats the gray value at the image’s border. Comparable filter widths can be obtained
by the following choices of Alpha:
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified version of the operators (’lanser1’, ’lanser2’
und ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (up to 11 × 11, i.e., for Alpha (’lanser2’) ≥ 0.5), all filters yield similar results. Only
for “wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators — closely followed by the Deriche operators.
edges_image optionally offers to apply a non-maximum-suppression (NMS = ’nms’/’inms’/’hvnms’; ’none’ if
not desired) and hysteresis threshold operation (Low,High; at least one negative if not desired) to the resulting
edge image. Conceptually, this corresponds to the following calls:
nonmax_suppression_dir(...,NMS,...)
hysteresis_threshold(...,Low,High,999,...)
For ’sobel_fast’, the same non-maximum-suppression is performed for all values of NMS except ’none’. Further-
more, the hysteresis threshold operation is always performed. Additionally, for ’sobel_fast’ the resulting edges are
thinned to a width of one pixel.
Parameter
read_image(&Image,"fabrik");
edges_image(Image,&Amp,&Dir,"lanser2",0.5,"none",-1,-1);
hysteresis_threshold(Amp,&Margin,20,30,30);
Result
edges_image returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
edges_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
info_edges
Possible Successors
threshold, hysteresis_threshold, close_edges_length
Alternatives
sobel_dir, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
info_edges, nonmax_suppression_amp, hysteresis_threshold, bandpass_image
References
S.Lanser, W.Eckstein: “Eine Modifikation des Deriche-Verfahrens zur Kantendetektion”; 13. DAGM-Symposium,
München; Informatik Fachberichte 290; Seite 151 - 158; Springer-Verlag; 1991.
S.Lanser: “Detektion von Stufenkanten mittels rekursiver Filter nach Deriche”; Diplomarbeit; Technische Univer-
sität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J.Canny: “Finding Edges and Lines in Images”; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge;
1983.
J.Canny: “A Computational Approach to Edge Detection”; IEEE Transactions on Pattern Analysis and Machine
Intelligence; PAMI-8, vol. 6; S. 679-698; 1986.
R.Deriche: “Using Canny’s Criteria to Derive a Recursively Implemented Optimal Edge Detector”; International
Journal of Computer Vision; vol. 1, no. 2; S. 167-187; 1987.
R.Deriche: “Optimal Edge Detection Using Recursive Filtering”; Proc. of the First International Conference on
Computer Vision, London; S. 501-505; 1987.
R.Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelli-
gence; PAMI-12, no. 1; S. 78-87; 1990.
S.Castan, J.Zhao und J.Shen: “Optimal Filter for Edge Detection Methods and Results”; Proc. of the First Euro-
pean Conference on Computer Vision, Antibes; Lecture Notes on computer Science; no. 427; S. 12-17; Springer-
Verlag; 1990.
Module
Foundation
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
edges_sub_pix detects step edges using recursively implemented filters (according to Deriche, Lanser and
Shen) or the conventionally implemented “derivative of Gaussian” filter (using filter masks) proposed by Canny.
Thus, the following edge operators are available:
’deriche1’, ’lanser1’, ’deriche2’, ’lanser2’, ’shen’, ’mshen’, ’canny’, ’sobel’, and ’sobel_fast’
(parameter Filter).
The extracted edges are returned as sub-pixel precise XLD contours in Edges. For all edge operators except
’sobel_fast’, the following attributes are defined for each edge point (see get_contour_attrib_xld):
’edge_direction’ Edge direction
’angle’ Direction of the normal vectors to the contour (oriented such that the normal vectors point to
the right side of the contour as the contour is traversed from start to end point; the angles are
given with respect to the row axis of the image.)
’response’ Edge amplitude (gradient magnitude)
The “filter width” (i.e., the amount of smoothing) can be chosen arbitrarily for all edge operators except ’sobel’
and ’sobel_fast’, and can be estimated by calling info_edges for concrete values of the parameter Alpha. It
decreases for increasing Alpha for the Deriche, Lanser and Shen filters and increases for the Canny filter, where
it is the standard deviation of the Gaussian on which the Canny operator is based. “Wide” filters exhibit a larger
invariance to noise, but also a decreased ability to detect small details. Non-recursive filters, such as the Canny
filter, are realized using filter masks, and thus the execution time increases for increasing filter width. In contrast,
the execution time for recursive filters does not depend on the filter width. Thus, arbitrary filter widths are possible
using the Deriche, Lanser and Shen filters without increasing the run time of the operator. The resulting advantage
in speed compared to the Canny operator naturally increases for larger filter widths. As border treatment, the
recursive operators assume that the images are zero outside of the image, while the Canny operator repeats the
gray value at the image’s border. Comparable filter widths can be obtained by the following choices of Alpha:
The originally proposed recursive filters (’deriche1’, ’deriche2’, ’shen’) return a biased estimate of the amplitude
of diagonal edges. This bias is removed in the corresponding modified version of the operators (’lanser1’, ’lanser2’
und ’mshen’), while maintaining the same execution speed.
For relatively small filter widths (up to 11 × 11, i.e., for Alpha (’lanser2’) ≥ 0.5), all filters yield similar results. Only
for “wider” filters do differences begin to appear: the Shen filters begin to yield qualitatively inferior results. However,
they are the fastest of the implemented operators that support arbitrary mask sizes, closely followed by the Deriche
operators. The two Sobel filters, which use a fixed mask size of (3 × 3), are faster than the other filters. Of these
two, the filter ’sobel_fast’ is significantly faster than ’sobel’.
edges_sub_pix links the edge points into edges by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss. Points with an amplitude larger than High are immediately
accepted as belonging to an edge, while points with an amplitude smaller than Low are rejected. All other
points are accepted as edges if they are connected to accepted edge points (see also lines_gauss and
hysteresis_threshold).
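The linking rule described above can be illustrated on a 1-D amplitude profile (hysteresis_1d is a hypothetical sketch, not the HALCON operator, which works on images and tracks connectivity in 2-D):

```c
#include <assert.h>

/* 1-D sketch of hysteresis thresholding: amplitudes > high are accepted,
 * amplitudes < low are rejected, and the remaining points are accepted
 * only if connected to an accepted point. Hypothetical helper. */
static void hysteresis_1d(const double *amp, int n,
                          double low, double high, int *edge)
{
    int i, changed = 1;

    for (i = 0; i < n; i++)
        edge[i] = (amp[i] > high) ? 1 : 0;   /* seed: strong points */
    while (changed) {                        /* propagate to neighbors */
        changed = 0;
        for (i = 0; i < n; i++) {
            if (!edge[i] && amp[i] >= low &&
                ((i > 0 && edge[i - 1]) || (i + 1 < n && edge[i + 1]))) {
                edge[i] = 1;
                changed = 1;
            }
        }
    }
}
```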
Because edge extractors are often unable to extract certain junctions, a mode that tries to extract these missing
junctions by different means can be selected by appending ’_junctions’ to the values of Filter that are described
above. This mode is analogous to the mode for completing junctions that is available in lines_gauss.
The edge operator ’sobel_fast’ has the same semantics as all the other edge operators. Internally, however, it is
based on significantly simplified variants of the individual processing steps (hysteresis thresholding, edge point
linking, and extraction of the subpixel edge positions). Therefore, ’sobel_fast’ in some cases may return slightly
less accurate edge positions and may select different edge parts.
Parameter
read_image(&Image,"fabrik");
edges_sub_pix(Image,&Edges,"lanser2",0.5,20,40);
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma) for the
Canny filter and O(A) for the recursive Lanser, Deriche, and Shen filters.
Let S = Width ∗ Height be the number of pixels of Image. Then edges_sub_pix requires at least 60 ∗ S bytes
of temporary memory during execution for all edge operators except ’sobel_fast’. For ’sobel_fast’, at least 9 ∗ S
bytes of temporary memory are required.
Result
edges_sub_pix returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.
    (  1   √2   1 )
A = (  0    0   0 )
    ( −1  −√2  −1 )

    ( 1   0   −1 )
B = ( √2  0  −√2 )
    ( 1   0   −1 )
The result image contains the maximum response of the masks A and B.
Parameter
Example
read_image(&Image,"fabrik");
frei_amp(Image,&Frei_amp);
threshold(Frei_amp,&Edges,128,255);
Result
frei_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, kirsch_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
    ( 1   0   −1 )
B = ( √2  0  −√2 )
    ( 1   0   −1 )
The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction                   [Ex, Ey]   r
from bottom to top                 0/+      0
from lower right to upper left     +/−      ]0, 90[
from right to left                 +/0      90
from upper right to lower left     +/+      ]90, 180[
from top to bottom                 0/−      180
from upper left to lower right     −/+      ]180, 270[
from left to right                 −/0      270
from lower left to upper right     −/−      ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
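The 2-degree coding described above can be sketched as follows (encode_dir is an illustrative helper, not part of the HALCON API):

```c
#include <assert.h>

/* Store an edge direction of deg degrees (deg in [0, 360)) as deg/2,
 * i.e., in 2-degree steps 0..179; amplitude 0 yields the code 255
 * (undefined direction). Hypothetical helper for illustration. */
static unsigned char encode_dir(double amp, double deg)
{
    if (amp == 0.0)
        return 255;                      /* undefined direction */
    return (unsigned char)(deg / 2.0);   /* 2-degree steps */
}
```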
Parameter
read_image(&Image,"fabrik");
frei_dir(Image,&Frei_dirA,&Frei_dirD);
threshold(Frei_dirA,&Res,128,255);
Result
frei_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
frei_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 −35 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
This corresponds to applying a mean operator ( mean_image), and then subtracting the original gray value. A
value of 128 is added to the result, i.e., zero crossings occur for 128.
This filter emphasizes high frequency components (edges and corners). The cutoff frequency is determined by the
size (Height × Width) of the filter matrix: the larger the matrix, the smaller the cutoff frequency is.
At the image borders the pixels’ gray values are mirrored. In case of over- or underflow the gray values are clipped
(255 and 0, resp.).
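The response at a single pixel can be sketched as follows; highpass_pixel is a hypothetical helper operating on one Height × Width window of a byte image (the border mirroring described above is omitted):

```c
#include <assert.h>

/* Highpass response at one pixel of a byte image: mean of the h x w
 * window minus the center value, plus 128, clipped to [0, 255].
 * Hypothetical helper for illustration; not the HALCON operator. */
static unsigned char highpass_pixel(const unsigned char *win, int h, int w)
{
    int i, sum = 0, n = h * w;
    double v;

    for (i = 0; i < n; i++)
        sum += win[i];
    v = (double)sum / n - win[(h / 2) * w + w / 2] + 128.0;
    if (v < 0.0)   v = 0.0;     /* clip underflow */
    if (v > 255.0) v = 255.0;   /* clip overflow  */
    return (unsigned char)v;
}
```

On a homogeneous window the mean equals the center value, so the response is exactly 128 (the zero-crossing level mentioned above).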
Attention
If even values are passed for Height or Width, the operator uses the next larger odd value instead. Thus, the
center of the filter mask is always uniquely determined.
Parameter
highpass_image(Image,&Highpass,7,5);
threshold(Highpass,&Region,60.0,255.0);
skeleton(Region,&Skeleton);
Result
highpass_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
highpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
mean_image, sub_image, convol_image, bandpass_image
See also
dyn_threshold
Module
Foundation
read_image(&Image,"fabrik");
info_edges("lanser2","edge",0.5,Size,Coeffs) ;
edges_image(Image,&Amp,&Dir,"lanser2",0.5,"none",-1,-1);
hysteresis_threshold(Amp,&Margin,20,30,30);
Result
info_edges returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
info_edges is reentrant and processed without parallelization.
Possible Successors
edges_image, threshold, skeleton
See also
edges_image
Module
Foundation
−3 −3 5
−3 0 5
−3 −3 5
−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5
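The maximum-response rule can be sketched for one 3 × 3 neighborhood (kirsch_pixel is an illustrative helper, not the HALCON operator):

```c
#include <assert.h>

/* Kirsch response at one pixel: convolve the 3x3 neighborhood with all
 * eight masks listed above and keep the maximum response.
 * Hypothetical helper for illustration only. */
static int kirsch_pixel(int nb[3][3])
{
    static const int masks[8][3][3] = {
        {{-3,-3, 5},{-3, 0, 5},{-3,-3, 5}},
        {{-3, 5, 5},{-3, 0, 5},{-3,-3,-3}},
        {{ 5, 5, 5},{-3, 0,-3},{-3,-3,-3}},
        {{ 5, 5,-3},{ 5, 0,-3},{-3,-3,-3}},
        {{ 5,-3,-3},{ 5, 0,-3},{ 5,-3,-3}},
        {{-3,-3,-3},{ 5, 0,-3},{ 5, 5,-3}},
        {{-3,-3,-3},{-3, 0,-3},{ 5, 5, 5}},
        {{-3,-3,-3},{-3, 0, 5},{-3, 5, 5}}
    };
    int m, r, c, best = 0;

    for (m = 0; m < 8; m++) {
        int resp = 0;
        for (r = 0; r < 3; r++)
            for (c = 0; c < 3; c++)
                resp += masks[m][r][c] * nb[r][c];
        if (resp > best)
            best = resp;
    }
    return best;
}
```

Because every mask's coefficients sum to zero, a homogeneous neighborhood gives response 0.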
read_image(&Image,"fabrik");
kirsch_amp(Image,&Kirsch_amp);
threshold(Kirsch_amp,&Edges,128,255);
Result
kirsch_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, prewitt_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
−3 −3 5
−3 0 5
−3 −3 5
−3 5 5
−3 0 5
−3 −3 −3
5 5 5
−3 0 −3
−3 −3 −3
5 5 −3
5 0 −3
−3 −3 −3
5 −3 −3
5 0 −3
5 −3 −3
−3 −3 −3
5 0 −3
5 5 −3
−3 −3 −3
−3 0 −3
5 5 5
−3 −3 −3
−3 0 5
−3 5 5
The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
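The maximum-response scheme described above can be sketched in plain C for a single 3x3 neighborhood. This is a simplified illustration, not the HALCON implementation; the helper name kirsch_pixel and the neighborhood-array interface are assumptions for this sketch.

```c
#include <stdio.h>

/* The eight Kirsch compass masks listed above, in order of increasing
   direction angle (0, 45, ..., 315 degrees). */
static const int kirsch[8][3][3] = {
    {{-3,-3, 5},{-3, 0, 5},{-3,-3, 5}},
    {{-3, 5, 5},{-3, 0, 5},{-3,-3,-3}},
    {{ 5, 5, 5},{-3, 0,-3},{-3,-3,-3}},
    {{ 5, 5,-3},{ 5, 0,-3},{-3,-3,-3}},
    {{ 5,-3,-3},{ 5, 0,-3},{ 5,-3,-3}},
    {{-3,-3,-3},{ 5, 0,-3},{ 5, 5,-3}},
    {{-3,-3,-3},{-3, 0,-3},{ 5, 5, 5}},
    {{-3,-3,-3},{-3, 0, 5},{-3, 5, 5}}
};

/* For one 3x3 neighborhood, return the maximum mask response and store
   the direction code (angle of the winning mask divided by 2) in *dir.
   Negative responses are treated as amplitude 0 in this sketch. */
int kirsch_pixel(const int nb[3][3], int *dir)
{
    int best = 0;
    *dir = 0;
    for (int m = 0; m < 8; ++m) {
        int r = 0;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r += kirsch[m][i][j] * nb[i][j];
        if (r > best) { best = r; *dir = m * 45 / 2; }
    }
    return best;
}
```

For a vertical step edge (bright column on the right), the first mask wins and the direction code is 0.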
Parameter
read_image(&Image,"fabrik");
kirsch_dir(Image,&Kirsch_dirA,&Kirsch_dirD);
threshold(Kirsch_dirA,&Res,128,255);
Result
kirsch_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
kirsch_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, prewitt_dir, frei_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
’n_4’
0 1 0
1 −4 1
0 1 0
’n_8’
1 1 1
1 −8 1
1 1 1
’n_8_isotropic’
10 22 10
22 −128 22
10 22 10
For the three filter masks, the following normalizations of the resulting gray values are applied (i.e., the result is
divided by the given divisor): ’n_4’: normalization by 1, ’n_8’: normalization by 2, and ’n_8_isotropic’:
normalization by 32.
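The application of one such mask with its divisor can be sketched in plain C for a single 3x3 neighborhood. This is an illustrative sketch, not the HALCON implementation; the helper name laplace_n8_isotropic is an assumption.

```c
/* Apply the 'n_8_isotropic' Laplace mask to one 3x3 neighborhood.
   The raw response is normalized by dividing by 32, as described above. */
int laplace_n8_isotropic(const int nb[3][3])
{
    static const int mask[3][3] = {
        {10,  22, 10},
        {22,-128, 22},
        {10,  22, 10}
    };
    int sum = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            sum += mask[i][j] * nb[i][j];
    return sum / 32;   /* normalization for 'n_8_isotropic' */
}
```

On a constant neighborhood the response is 0, as expected for a Laplace filter.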
For a Laplace operator with size 3 × 3, the corresponding filter is applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter
(see binomial_filter) of size MaskSize-2. The Gaussian filter is selected for the above values of
ResultType. Here, MaskSize = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending
’_binomial’ to the above values of ResultType. Here, MaskSize can be selected between 5 and 39. Fur-
thermore, it is possible to select different amounts of smoothing for the column and row direction by passing two
values in MaskSize. Here, the first value of MaskSize corresponds to the mask width (smoothing in the column
direction), while the second value corresponds to the mask height (smoothing in the row direction) of the binomial
filter. Therefore,
laplace(O:R:’absolute’,MaskSize,N:)
for MaskSize > 3 is conceptually equivalent to
gauss_image(O:G:MaskSize-2:)
laplace(G:R:’absolute’,3,N:)
and
laplace(O:R:’absolute_binomial’,MaskSize,N:)
is equivalent to
binomial_filter(O:B:MaskSize-2,MaskSize-2:)
laplace(B:R:’absolute’,3,N:)
laplace either returns the absolute value of the Laplace-filtered image (ResultType = ’absolute’) in a byte
or uint2 image, or the signed result (ResultType = ’signed’ or ’signed_clipped’). Here, the output image has
the same number of bytes per pixel as the input image (i.e., int1 or int2) for ’signed_clipped’, while it has the
next larger number of bytes per pixel (i.e., int2 or int4) for ’signed’.
Parameter
read_image(&Image,"mreut");
laplace(Image,&Laplace,"signed",3,"n_8_isotropic");
zero_crossing(Laplace,&ZeroCrossings);
Result
laplace returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
laplace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold, threshold
Alternatives
diff_of_gauss, laplace_of_gauss, derivate_gauss
See also
highpass_image, edges_image
Module
Foundation
∆g(x, y) = ∂²g(x, y)/∂x² + ∂²g(x, y)/∂y²
The derivatives in laplace_of_gauss are calculated by appropriate derivatives of the Gaussian, resulting in
the following formula for the convolution mask:
∆G_σ(x, y) = 1/(2πσ⁴) · ((x² + y²)/(2σ²) − 1) · exp(−(x² + y²)/(2σ²))
Parameter
. Image (input_object) . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. ImageLaplace (output_object) . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int2
Laplace filtered image.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Smoothing parameter of the Gaussian.
Default Value : 2.0
Suggested values : Sigma ∈ {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 7.0}
Typical range of values : 0.7 ≤ Sigma ≤ 5.0
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : (Sigma > 0.7) ∧ (Sigma ≤ 25.0)
Example
read_image(&Image,"mreut");
laplace_of_gauss(Image,&Laplace,2.0);
zero_crossing(Laplace,&ZeroCrossings);
Parallelization Information
laplace_of_gauss is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
zero_crossing, dual_threshold
Alternatives
laplace, diff_of_gauss, derivate_gauss
See also
derivate_gauss
Module
Foundation
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
Example
read_image(&Image,"fabrik");
prewitt_amp(Image,&Prewitt);
threshold(Prewitt,&Edges,128,255);
Result
prewitt_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
threshold, gray_skeleton, nonmax_suppression_amp, close_edges,
close_edges_length
Alternatives
sobel_amp, kirsch_amp, frei_amp, robinson_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 1 1
A= 0 0 0
−1 −1 −1
1 0 −1
B= 1 0 −1
1 0 −1
The result image contains the maximum response of the masks A and B. The edge directions are returned in
ImageEdgeDir, and are stored in 2-degree steps, i.e., an edge direction of x degrees with respect to the horizontal
axis is stored as x/2 in the edge direction image. Furthermore, the direction of the change of intensity is taken into
account. Let [Ex , Ey ] denote the image gradient. Then the following edge directions are returned as r/2:
edge direction                    [Ex, Ey]   r
from bottom to top                0/−        0
from lower right to upper left    +/−        ]0, 90[
from right to left                +/0        90
from upper right to lower left    +/+        ]90, 180[
from top to bottom                0/+        180
from upper left to lower right    −/+        ]180, 270[
from left to right                −/0        270
from lower left to upper right    −/−        ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. ImageEdgeAmp (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. ImageEdgeDir (output_object) . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
Example
read_image(&Image,"fabrik");
prewitt_dir(Image,&PrewittA,&PrewittD);
threshold(PrewittA,&Edges,128,255);
Result
prewitt_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
prewitt_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, robinson_dir, frei_dir, kirsch_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
A B
C D
If an overflow occurs the result is clipped. The result of the operator is stored at the pixel with the coordinates of
“D”.
Parameter
read_image(&Image,"fabrik");
roberts(Image,&Roberts,"roberts_max");
threshold(Roberts,&Margin,128.0,255.0);
Result
roberts returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
roberts is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image
Possible Successors
threshold, skeleton
Alternatives
edges_image, sobel_amp, frei_amp, kirsch_amp, prewitt_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation
−1 0 1
−2 0 2
−1 0 1
2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1
read_image(&Image,"fabrik");
robinson_amp(Image,&Robinson_amp);
threshold(Robinson_amp,&Edges,128,255);
Result
robinson_amp always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Alternatives
sobel_amp, frei_amp, prewitt_amp, kirsch_amp, roberts
See also
bandpass_image, laplace_of_gauss
Module
Foundation
−1 0 1
−2 0 2
−1 0 1
2 1 0
1 0 −1
0 −1 −2
0 1 2
−1 0 1
−2 −1 0
1 2 1
0 0 0
−1 −2 −1
The result image contains the maximum response of all masks. The edge directions are returned in
ImageEdgeDir, and are stored as x/2. They correspond to the direction of the mask yielding the maximum
response.
Parameter
read_image(&Image,"fabrik");
robinson_dir(Image,&Robinson_dirA,&Robinson_dirD);
threshold(Robinson_dirA,&Res,128,255);
Result
robinson_dir always returns H_MSG_TRUE. If the input is empty the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
robinson_dir is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, sigma_image, median_image, smooth_image
Possible Successors
hysteresis_threshold, threshold, gray_skeleton, nonmax_suppression_dir,
close_edges, close_edges_length
Alternatives
edges_image, sobel_dir, kirsch_dir, prewitt_dir, frei_dir
See also
bandpass_image, laplace_of_gauss
Module
Foundation
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B, respectively, for one particular pixel.)
’sum_sqrt’      sqrt(a² + b²)/4
’sum_abs’       (|a| + |b|)/4
’thin_sum_abs’  (thin(|a|) + thin(|b|))/4
’thin_max_abs’  max(thin(|a|), thin(|b|))/4
’x’             b/4
’y’             a/4
Here, thin(x) is equal to x for a vertical maximum (mask A) and a horizontal maximum (mask B), respectively,
and 0 otherwise. Thus, for ’thin_sum_abs’ and ’thin_max_abs’ the gradient image is thinned. For the filter types ’x’
and ’y’, the output image is of type int1 if the input image is of type byte, and of type int2 otherwise. For a Sobel operator
with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter sizes the input image
is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see binomial_filter) of
size Size-2. The Gaussian filter is selected for the above values of FilterType. Here, Size = 5, 7, 9, 11, or
13 must be used. The binomial filter is selected by appending ’_binomial’ to the above values of FilterType.
Here, Size can be selected between 5 and 39. Furthermore, it is possible to select different amounts of smoothing
in the column and row direction by passing two values in Size. Here, the first value of Size corresponds
to the mask width (smoothing in the column direction), while the second value corresponds to the mask height
(smoothing in the row direction) of the binomial filter. The binomial filter can only be used for images of type
byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge amplitudes are multiplied by a
factor of 2 to prevent information loss. Therefore,
sobel_amp(I,E,FilterType,S)
for Size > 3 is conceptually equivalent to
scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_amp(G,E,FilterType,3)
or to
scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_amp(G,E,FilterType,3).
For sobel_amp, special optimizations that use SIMD technology are implemented for FilterType = ’sum_abs’.
The actual application of these special optimizations is controlled by the system parameter ’mmx_enable’
(see set_system). If ’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal
calculations are performed using SIMD technology. Note that SIMD technology performs best on large, compact
input regions. Depending on the input region and the capabilities of the hardware the execution of sobel_amp
might even take significantly more time with SIMD technology than without.
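The ’sum_abs’ response for a single 3x3 neighborhood can be sketched in plain C using the masks A and B given above. This is a simplified illustration, not the HALCON implementation; the helper name sobel_sum_abs is an assumption.

```c
#include <stdlib.h>

/* 'sum_abs' Sobel response (|a| + |b|)/4 for one 3x3 neighborhood. */
int sobel_sum_abs(const int nb[3][3])
{
    static const int A[3][3] = {{ 1, 2, 1},{ 0, 0, 0},{-1,-2,-1}};
    static const int B[3][3] = {{ 1, 0,-1},{ 2, 0,-2},{ 1, 0,-1}};
    int a = 0, b = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            a += A[i][j] * nb[i][j];   /* horizontal-edge response */
            b += B[i][j] * nb[i][j];   /* vertical-edge response */
        }
    return (abs(a) + abs(b)) / 4;      /* normalization by 4 */
}
```

For an ideal vertical step edge from 0 to 255, the normalized response is exactly 255, so the division by 4 keeps the result in the byte range.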
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : int1 / int2 / uint2
Edge amplitude (gradient magnitude) image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Filter type.
Default Value : "sum_abs"
List of values : FilterType ∈ {"sum_abs", "thin_sum_abs", "thin_max_abs", "sum_sqrt", "x", "y",
"sum_abs_binomial", "thin_sum_abs_binomial", "thin_max_abs_binomial", "sum_sqrt_binomial",
"x_binomial", "y_binomial"}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example
read_image(&Image,"fabrik");
sobel_amp(Image,&Amp,"sum_abs",3);
threshold(Amp,&Edg,128.0,255.0);
Result
sobel_amp returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_amp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
threshold, nonmax_suppression_amp, gray_skeleton
Alternatives
frei_amp, roberts, kirsch_amp, prewitt_amp, robinson_amp
See also
laplace, highpass_image, bandpass_image
Module
Foundation
sobel_dir calculates the first derivatives of an image and is used as an edge detector. The filter is based on the
following filter masks:
1 2 1
A= 0 0 0
−1 −2 −1
1 0 −1
B= 2 0 −2
1 0 −1
These masks are used differently, according to the selected filter type. (In the following, a and b denote the results
of convolving an image with A and B, respectively, for one particular pixel.)
’sum_sqrt’  sqrt(a² + b²)/4
’sum_abs’   (|a| + |b|)/4
For a Sobel operator with size 3 × 3, the corresponding filters A and B are applied directly, while for larger filter
sizes the input image is first smoothed using a Gaussian filter (see gauss_image) or a binomial filter (see
binomial_filter) of size Size-2. The Gaussian filter is selected for the above values of FilterType.
Here, Size = 5, 7, 9, 11, or 13 must be used. The binomial filter is selected by appending ’_binomial’ to the
above values of FilterType. Here, Size can be selected between 5 and 39. Furthermore, it is possible to
select different amounts of smoothing in the column and row direction by passing two values in Size. Here, the
first value of Size corresponds to the mask width (smoothing in the column direction), while the second value
corresponds to the mask height (smoothing in the row direction) of the binomial filter. The binomial filter can only
be used for images of type byte and uint2. Since smoothing reduces the edge amplitudes, in this case the edge
amplitudes are multiplied by a factor of 2 to prevent information loss. Therefore,
sobel_dir(I:Amp,Dir:FilterType,S:)
for Size > 3 is conceptually equivalent to
scale_image(I,F,2,0)
gauss_image(F,G,S-2)
sobel_dir(G,Amp,Dir,FilterType,3:)
or to
scale_image(I,F,2,0)
binomial_filter(F,G,S[0]-2,S[1]-2)
sobel_dir(G,Amp,Dir,FilterType,3:).
The edge directions are returned in EdgeDirection, and are stored in 2-degree steps, i.e., an edge direction of x
degrees with respect to the horizontal axis is stored as x/2 in the edge direction image. Furthermore, the direction
of the change of intensity is taken into account. Let [Ex , Ey ] denote the image gradient. Then the following edge
directions are returned as r/2:
edge direction                    [Ex, Ey]   r
from bottom to top                0/−        0
from lower right to upper left    +/−        ]0, 90[
from right to left                +/0        90
from upper right to lower left    +/+        ]90, 180[
from top to bottom                0/+        180
from upper left to lower right    −/+        ]180, 270[
from left to right                −/0        270
from lower left to upper right    −/−        ]270, 360[
Points with edge amplitude 0 are assigned the edge direction 255 (undefined direction).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. EdgeAmplitude (output_object) . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Edge amplitude (gradient magnitude) image.
. EdgeDirection (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : direction
Edge direction image.
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Filter type.
Default Value : "sum_abs"
List of values : FilterType ∈ {"sum_abs", "sum_sqrt", "sum_abs_binomial", "sum_sqrt_binomial"}
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Size of filter mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39}
Example
read_image(&Image,"fabrik");
sobel_dir(Image,&Amp,&Dir,"sum_abs",3);
threshold(Amp,&Edg,128.0,255.0);
Result
sobel_dir returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
sobel_dir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, mean_image, anisotropic_diffusion, sigma_image
Possible Successors
nonmax_suppression_dir, hysteresis_threshold, threshold
Alternatives
edges_image, frei_dir, kirsch_dir, prewitt_dir, robinson_dir
See also
roberts, laplace, highpass_image, bandpass_image
Module
Foundation
3.5 Enhancement
T_adjust_mosaic_images ( const Hobject Images,
Hobject *CorrectedImages, const Htuple From, const Htuple To,
const Htuple ReferenceImage, const Htuple HomMatrices2D,
const Htuple EstimationMethod, const Htuple EstimateParameters,
const Htuple OECFModel )
’gold_standard’. The availability of the individual methods depends on the selected EstimateParameters,
which determines the model to be used for estimating the radiometric adjustment terms. It is always pos-
sible to determine the amount of vignetting in the images by selecting ’vignetting’. However, if selected,
EstimationMethod must be set to ’gold_standard’. For the remainder of the radiometric adjustment three
different options are available:
1. Image adjustment with the additive model. This should only be used to adjust images with very small differences
in exposure or white balance. To choose this method, EstimateParameters must be set to ’add_gray’. This
model can be selected either exclusively and only with EstimationMethod = ’standard’ or in combination
with EstimateParameters = ’vignetting’ and only with EstimationMethod = ’gold_standard’.
2. Image adjustment with the linear model. In this model, images are expected to be taken with a camera using
a linear transfer function. The adjustment terms are consequently represented as multiplication factors. To select
this model, EstimateParameters must be set to ’mult_gray’. It can be called with EstimationMethod
= ’standard’ or EstimationMethod = ’gold_standard’. A combined call with EstimateParameters =
’vignetting’ is also possible, EstimationMethod must be set to ’gold_standard’ in that case.
3. Image adjustment with the calibrated model. In this model, images are assumed to be taken with a camera using
a nonlinear transfer function. A function of the OECF class selected with OECFModel is used to approximate
the actually used OECF in the process of image acquisition. As with the linear model, the correction terms
are represented as multiplication factors. This model can be selected by choosing EstimateParameters =
[’mult_gray’,’response’] and must be called with EstimationMethod = ’gold_standard’. It is possible to
determine the amount of vignetting as well in this case by choosing EstimateParameters = ’vignetting’.
This model is similar to the linear model. However, in this case the camera may have a nonlinear response. This
means that before the gray values of the images can be multiplied by their respective correction factor, the gray
values must be backprojected to a linear response. To do so, the camera’s response must be determined. Since the
response usually does not change over an image sequence, this parameter is assumed to be constant throughout the
whole image sequence.
In principle, any kind of function could be used as an OECF. As in the operator
radiometric_self_calibration, a polynomial fit might be used, but for typical images in a
mosaicking application this would not work very well, because polynomial fitting has too many parameters
that need to be determined. Instead, only simpler types of response functions can be estimated.
Currently, only so-called Laguerre-functions are available.
The response of a Laguerre-type OECF is determined by only one parameter called Phi. In a first step, the whole
gray value spectrum (in case of 8bit images the values 0 to 255) is converted to floating point numbers in the
interval [0:1]. Then, the OECF backprojection is calculated based on this and the resulting gray values are once
again converted to the original interval.
The inverse transform of the gray values back to linear values based on a Laguerre-type OECF is described by the
following equation:
I_l = I_nl + (2/π) · arctan( Phi · sin(π · I_nl) / (1 − Phi · cos(π · I_nl)) )
with I_l the linear gray value and I_nl the (nonlinear) gray value.
The parameter OECFModel is only used if the calibrated model has been chosen. Otherwise, any input for
OECFModel will be ignored.
The parameter EstimateParameters can also be used to influence the performance and memory consumption
of the operator. With ’no_cache’ the internal caching mechanism can be disabled. This switch only has an
influence if EstimationMethod is set to ’gold_standard’; otherwise it is ignored. When the internal caching
is disabled, the operator uses far less memory, but recalculates the corresponding gray value pairs in each
iteration of the minimization algorithm. Therefore, disabling caching is only advisable if all physical
memory is used up at some point of the calculation and the operating system starts using swap space.
A second option to influence the performance is subsampling. When setting EstimateParameters to
’subsampling_2’, images are internally zoomed down by a factor of 2. Despite the suggested value list, not only
factors of 2 and 4 are available; any integer factor can be specified by appending it to ’subsampling_’ in
EstimateParameters. With this, the amount of image data is tremendously reduced, which leads to a much
faster computation of the internal minimization. In fact, using moderate subsampling might even lead to better
results since it also decreases the influence of slightly misaligned pixels. Although subsampling also influences
the minimization if EstimationMethod is set to ’standard’, it is mostly useful for ’gold_standard’.
Some more general remarks on using adjust_mosaic_images in applications:
• Estimation of vignetting will only work well if significant vignetting is visible in the images. Otherwise, the
operator may lead to erratic results.
• Estimation of the response is rather slow because the problem is quite complex. Therefore, it is advisable not
to determine the response in time critical applications. Apart from this, the response can only be determined
correctly if there are relatively large brightness differences between the images.
• It is not possible to correct saturation. If there are saturated areas in an image, they will remain saturated.
• adjust_mosaic_images can only be used to correct different brightness in images, which is caused by different
exposure (shutter time, aperture) or different light intensity. It cannot be used to correct brightness differences
based on inhomogeneous illumination within each image.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject : byte
Input images.
. CorrectedImages (output_object) . . . . . . . . . . . . . . . . . . . . (multichannel-)image-array ; Hobject * : byte
Output images.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
List of source images.
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
List of destination images.
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Reference image.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Projective matrices.
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm for the correction.
Default Value : "standard"
List of values : EstimationMethod ∈ {"standard", "gold_standard"}
. EstimateParameters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Parameters to be estimated.
Default Value : ["mult_gray"]
List of values : EstimateParameters ∈ {"add_gray", "mult_gray", "response", "vignetting",
"subsampling_2", "subsampling_4", "no_cache"}
. OECFModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Model of OECF to be used.
Default Value : ["laguerre"]
List of values : OECFModel ∈ {"laguerre"}
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator adjust_mosaic_images returns the value H_MSG_TRUE. If nec-
essary an exception handling is raised.
Parallelization Information
adjust_mosaic_images is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Possible Successors
gen_spherical_mosaic
References
David Hasler, Sabine Süsstrunk: Mapping colour in image stitching applications. Journal of Visual Communication
and Image Representation, 15(1):65-90, 2004.
Module
Foundation
u_t = div(G(u) ∇u)
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing operator
mean_curvature_flow is a direct application of the mean curvature flow equation. The discrete diffusion
equation is solved in Iterations time steps of length Theta, so that the output image ImageCED contains
the gray value function at the time Iterations · Theta.
To detect the edge direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
While the matrix G is given by
G_MCF(u) = I − (1/|∇u|²) · ∇u (∇u)^T ,
in the case of the operator mean_curvature_flow, where I denotes the unit matrix, GMCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix
is constructed from the eigenvalues λ1 , λ2 and eigenvectors w1 , w2 of the resulting intermediate matrix, where the
functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)
First the procedure carries out a filtering with the low pass (mean_image). The resulting gray values (res) are
calculated from the obtained gray values (mean) and the original gray values (orig) as follows:
res := round((orig − mean) · Factor) + orig
Factor serves as a measure of the increase in contrast. The cut-off frequency is determined by the size of
the filter matrix: the larger the matrix, the lower the cut-off frequency.
As an edge treatment the gray values are mirrored at the edges of the image. Overflow and/or underflow of gray
values is clipped.
Parameter
read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
draw_region(&Region,WindowHandle);
reduce_domain(Image,Region,&Mask);
emphasize(Mask,&Sharp,7,7,2.0);
disp_image(Sharp,WindowHandle);
Result
If the parameter values are correct the operator emphasize returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
emphasize is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, sub_image, laplace, add_image
See also
mean_image, highpass_image
Module
Foundation
h(x) describes the relative frequency of the occurrence of the gray value x. For uint2 images, the only difference
is that the value 255 is replaced with a different maximum value. The maximum value is computed from the
number of significant bits stored with the input image, provided that this value is set. If not, the value of the system
parameter ’int2_bits’ is used (see set_system), if this value is set (i.e., different from -1). If neither of the two
values is set, the number of significant bits is set to 16.
This transformation linearizes the cumulative histogram. Maxima in the original histogram are "spread" and
thus the contrast in image regions with these frequently occurring gray values is increased. Supposedly homogeneous
regions receive more easily visible structures. On the other hand, of course, the noise in the image increases
correspondingly. Minima in the original histogram are dually "compressed". The transformed histogram contains
gaps, but the remaining gray values occur at approximately the same frequency ("histogram equalization").
Attention
The operator equ_histo_image primarily serves for optical processing of images for a human viewer. For
example, the (local) contrast spreading can lead to a detection of fictitious edges.
Parameter
Illuminate image.
The operator illuminate enhances contrast. Very dark parts of the image are "illuminated" more strongly,
very light ones are "darkened". If orig is the original gray value and mean is the corresponding gray value of the
image low pass filtered via the operator mean_image with filter size MaskHeight x MaskWidth, then,
with val equal to 127 for byte images and to the median value for int2 and uint2 images, the resulting gray
value new is calculated as follows:
The low pass should have rather large dimensions (30 x 30 to 200 x 200). Reasonable parameter combinations
might be:
i.e. the larger the low pass mask is chosen, the larger Factor should be as well.
The following "spotlight effect" should be noted: if, for example, a dark object is in front of a light wall, the object
as well as the wall, which is already light in the immediate proximity of the object contours, are lightened by the
operator illuminate. This corresponds roughly to the effect produced when the object is illuminated
by a strong spotlight. The same applies to light objects in front of a darker background. In this case, however, the
fictitious "spotlight" darkens the objects.
Parameter
Example
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
illuminate(Image,&Better,40,40,0.55);
disp_image(Better,WindowHandle);
Result
If the parameter values are correct the operator illuminate returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
illuminate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
scale_image_max, equ_histo_image, mean_image, sub_image
See also
emphasize, gray_histo
Module
Foundation
u_t = div(∇u / |∇u|) · |∇u| = curv(u) · |∇u|
to the gray value function u defined by the input image Image at a time t0 = 0. The discretized equation is solved
in Iterations time steps of length Theta, so that the output image contains the gray value function at the time
Iterations · Theta.
The mean curvature flow causes a smoothing of Image in the direction of the edges in the image, i.e. along the
contour lines of u, while perpendicular to the edge direction no smoothing is performed and hence the boundaries
of image objects are not smoothed. To detect the image direction more robustly, in particular on noisy input data,
an additional isotropic smoothing step can precede the computation of the gray value gradients. The parameter
Sigma determines the magnitude of the smoothing by means of the standard deviation of a corresponding Gaussian
convolution kernel, as used in the operator isotropic_diffusion for isotropic image smoothing.
Parameter
u_t = s · |∇u|
on the function u defined by the gray values in Image at a time t0 = 0. The discretized equation is solved in
Iterations time steps of length Theta, so that the output image SharpenedImage contains the gray value
function at the time Iterations · Theta.
The decision between dilation and erosion is made using the sign function s ∈ {−1, 0, +1} of a conventional edge
detector. The detector of Canny

s = −sgn( D²u( ∇u/|∇u| , ∇u/|∇u| ) )

is available with Mode = ’canny’, and the detector of Marr/Hildreth (the Laplace operator)

s = −sgn(∆u)

is available with Mode = ’laplace’.
Parallelization Information
shock_filter is reentrant and automatically parallelized (on tuple level).
References
F. Guichard, J. Morel; “A Note on Two Classical Shock Filters and Their Asymptotics”; Michael Kerckhove (Ed.):
Scale-Space and Morphology in Computer Vision, LNCS 2106, pp. 75-84; Springer, New York; 2001.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
3.6 FFT
convol_fft ( const Hobject ImageFFT, const Hobject ImageFilter,
Hobject *ImageConvol )
gen_highpass(Highpass,0.2,’n’,’dc_edge’,Width,Height)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_fft(ImageFFT,Highpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
Result
convol_fft returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
convol_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, rft_generic, gen_highpass, gen_lowpass, gen_bandpass,
gen_bandfilter
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic, rft_generic
Alternatives
convol_gabor
See also
gen_gabor, gen_highpass, gen_lowpass, gen_bandpass, convol_gabor, fft_image_inv
Module
Foundation
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
Result
convol_gabor returns H_MSG_TRUE if all images are of correct type. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
convol_gabor is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_image, fft_generic, gen_gabor
Possible Successors
power_byte, power_real, power_ln, fft_image_inv, fft_generic
Alternatives
convol_fft
See also
convol_image
Module
Foundation
correlation_fft calculates the correlation of the Fourier-transformed input images in the frequency do-
main. The correlation is calculated by multiplying ImageFFT1 with the complex conjugate of ImageFFT2.
It should be noted that in order to achieve a correct scaling of the correlation in the spatial domain, the oper-
ators fft_generic or rft_generic with Norm = ’none’ must be used for the forward transform and
fft_generic or rft_generic with Norm = ’n’ for the reverse transform. If ImageFFT1 and ImageFFT2
contain the same number of images, the corresponding images are correlated pairwise. Otherwise, ImageFFT2
must contain only one single image. In this case, the correlation is performed for each image of ImageFFT1 with
ImageFFT2.
Attention
The filtering is always performed on the entire image, i.e., the domain of the image is ignored.
Parameter
Result
convol_fft returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
correlation_fft is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
fft_generic, fft_image, rft_generic
Possible Successors
fft_generic, fft_image_inv, rft_generic
Module
Foundation
Often the calculation of the energy is preceded by the convolution of an image with a Gabor filter and the Hilbert
transform of the Gabor filter (see convol_gabor). In this case, the first channel of the image passed to
energy_gabor is the Gabor-filtered image, transformed back into the spatial domain (see fft_image_inv),
and the second channel the result of the convolution with the Hilbert transform, also transformed back into the
spatial domain. The local energy is a measure for the local contrast of structures (e.g., edges and lines) in the
image.
Parameter
. ImageGabor (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
1st channel of input image (usually: Gabor image).
. ImageHilbert (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
2nd channel of input image (usually: Hilbert image).
. Energy (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Image containing the local energy.
Example
fft_image(Image,&FFT);
gen_gabor(&Filter,1.4,0.4,1.0,1.5,512);
convol_gabor(FFT,Filter,&Gabor,&Hilbert);
fft_image_inv(Gabor,&GaborInv);
fft_image_inv(Hilbert,&HilbertInv);
energy_gabor(GaborInv,HilbertInv,&Energy);
Result
energy_gabor returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
energy_gabor is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
gen_gabor, convol_gabor, fft_image_inv
Module
Foundation
F(m, n) = (1/c) · Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} e^{s·2πi(km/M + ln/N)} · f(k, l)

Opinions vary on whether the sign s in the exponent should be set to 1 or -1 for the forward transform, i.e., the
transform for going to the frequency domain. There is also disagreement on the magnitude of the normalizing
factor c. This is sometimes set to 1 for the forward transform, sometimes to M·N, and sometimes (in the case of the
unitary FFT) to √(M·N). Especially in image processing applications the DC term is shifted to the center of the
image.
fft_generic allows these choices to be selected individually. The parameter Direction selects the
logical direction of the FFT. (This parameter is not superfluous; it is needed to determine how to shift the image if
the DC term should rest in the center of the image.) Possible values are ’to_freq’ and ’from_freq’. The parameter
Exponent is used to determine the sign of the exponent. It can be set to 1 or -1. The normalizing factor can be
set with Norm, and can take on the values ’none’, ’sqrt’ and ’n’. The parameter Mode determines the location of
the DC term of the FFT. It can be set to ’dc_center’ or ’dc_edge’.
In any case, the user must ensure the consistent use of the parameters. This means that the normalizing factors
used for the forward and backward transform must yield M N when multiplied, the exponents must be of opposite
sign, and Mode must be equal for both transforms.
A consistent combination is, for example (’to_freq’,-1,’n’,’dc_edge’) for the forward transform and
(’from_freq’,1,’none’,’dc_edge’) for the reverse transform. In this case, the FFT can be interpreted as interpo-
lation with trigonometric basis functions. Another possible combination is (’to_freq’,-1,’sqrt’,’dc_center’) and
(’from_freq’,1,’sqrt’,’dc_center’).
The parameter ResultType can be used to specify the result image type of the reverse transform (Direction
= ’from_freq’). In the forward transform (Direction = ’to_freq’), ResultType must be set to ’complex’.
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex
Input image.
. ImageFFT (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex
Fourier-transformed image.
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Calculate forward or reverse transform.
Default Value : "to_freq"
List of values : Direction ∈ {"to_freq", "from_freq"}
. Exponent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Sign of the exponent.
Default Value : -1
List of values : Exponent ∈ {-1, 1}
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Normalizing factor of the transform.
Default Value : "sqrt"
List of values : Norm ∈ {"none", "sqrt", "n"}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Location of the DC term in the frequency domain.
Default Value : "dc_center"
List of values : Mode ∈ {"dc_center", "dc_edge"}
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Image type of the output image.
Default Value : "complex"
List of values : ResultType ∈ {"complex", "byte", "int1", "int2", "uint2", "int4", "real", "direction",
"cyclic"}
Example
/* simulation of fft */
my_fft(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"to_freq",-1,"sqrt","dc_center","complex");
}
/* simulation of fft_image_inv */
my_fft_image_inv(Hobject In, Hobject *Out)
{
fft_generic(In,Out,"from_freq",1,"sqrt","dc_center","byte");
}
Result
fft_generic returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
fft_generic is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
optimize_fft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convol_gabor, convert_image_type, power_byte, power_real, power_ln,
phase_deg, phase_rad, energy_gabor
Alternatives
fft_image, fft_image_inv, rft_generic
Module
Foundation
fft_generic(Image,ImageFFT,’to_freq’,-1,’sqrt’,’dc_center’,’complex’)
.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
fft_generic(Image,ImageFFT,’from_freq’,1,’sqrt’,’dc_center’,’byte’)
.
Attention
The filtering is always done on the entire image, i.e., the region of the image is ignored.
Parameter
= ’dc_center’ must be used. If rft_generic is used, Mode = ’rft’ must be used. The resulting image contains
an annulus with the value 0, and a value determined by the normalization outside of this annulus.
Parameter
Result
gen_bandfilter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_bandfilter is reentrant and processed without parallelization.
Possible Successors
convol_fft
Alternatives
gen_circle, paint_region
See also
gen_highpass, gen_lowpass, gen_bandpass, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
Result
gen_bandpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_bandpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_lowpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
Result
gen_derivative_filter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception
handling is raised.
Parallelization Information
gen_derivative_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_gauss_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation
Parallelization Information
gen_filter_mask is reentrant and processed without parallelization.
Possible Successors
fft_image, fft_generic
See also
convol_image
Module
Foundation
cent” gets narrower). The larger Bandwidth is, the smaller the frequency band being passed gets (because the
“crescent” gets thinner).
To achieve a maximum efficiency of the filtering operation, the parameter Norm can be used to specify the normal-
ization factor of the filter. If fft_generic and Norm = ’n’ is used the normalization in the FFT can be avoided.
Mode can be used to determine where the DC term of the filter lies. If fft_generic is used, ’dc_edge’ can be
used to gain efficiency. If fft_image and fft_image_inv are used for filtering, Norm = ’none’ and Mode
= ’dc_center’ must be used. Note that gen_gabor cannot create a filter that can be used with rft_generic.
The resulting image is a two-channel real-image, containing the Gabor filter in the first channel and the corre-
sponding Hilbert filter in the second channel.
Parameter
gen_gabor(Filter,1.4,0.4,1.0,1.5,’n’,’dc_edge’,512,512)
fft_generic(Image,ImageFFT,’to_freq’,-1,’none’,’dc_edge’,’complex’)
convol_gabor(ImageFFT,Filter,Gabor,Hilbert,’dc_edge’)
fft_generic(Gabor,GaborInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
fft_generic(Hilbert,HilbertInv,’from_freq’,1,’none’,’dc_edge’,’byte’)
energy_gabor(GaborInv,HilbertInv,Energy)
Result
gen_gabor returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_gabor is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic
Possible Successors
convol_gabor
Alternatives
gen_bandpass, gen_bandfilter, gen_highpass, gen_lowpass
See also
fft_image_inv, energy_gabor
Module
Foundation
Result
gen_gauss_filter returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling
is raised.
Parallelization Information
gen_gauss_filter is reentrant and processed without parallelization.
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
convol_fft
See also
fft_image_inv, gen_gauss_filter, gen_lowpass, gen_bandpass, gen_bandfilter,
gen_highpass
Module
Foundation
Result
gen_highpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is
raised.
Parallelization Information
gen_highpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
convol_fft, gen_lowpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
convol_fft(ImageFFT,Lowpass,ImageConvol)
fft_generic(ImageConvol,ImageResult,’from_freq’,1,’none’,’dc_edge’,’byte’)
Result
gen_lowpass returns H_MSG_TRUE if all parameters are correct. If necessary, an exception handling is raised.
Parallelization Information
gen_lowpass is reentrant and processed without parallelization.
Possible Successors
convol_fft
See also
gen_highpass, gen_bandpass, gen_bandfilter, gen_gauss_filter,
gen_derivative_filter
Module
Foundation
optimize_fft_speed influences the runtime of the following operators, which use the FFT: fft_generic,
fft_image, fft_image_inv, wiener_filter, wiener_filter_ni, phot_stereo,
sfs_pentland, sfs_mod_lr, sfs_orig_lr.
Parameter
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768, 1024, 2048}
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of the image for which the runtime should be optimized.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576, 1024, 2048}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Thoroughness of the search for the optimum runtime.
Default Value : "standard"
List of values : Mode ∈ {"standard", "patient", "exhaustive"}
Result
optimize_rft_speed returns H_MSG_TRUE if all parameters are correct. If necessary, an exception han-
dling is raised.
Parallelization Information
optimize_rft_speed is reentrant and processed without parallelization.
Possible Successors
rft_generic, write_fft_optimization_data
Alternatives
read_fft_optimization_data
See also
optimize_fft_speed
Module
Foundation
phase = (90/π) · atan2(imaginary part, real part)

Hence, ImagePhase contains half the phase angle. For negative phase angles, 180 is added.
Parameter
. ImageComplex (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImagePhase (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : direction
Phase of the image in degrees.
Example
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_deg(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
phase_deg returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
phase_deg is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_rad
See also
fft_image_inv
Module
Foundation
Parameter
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
phase_rad(FFT,&Phase);
disp_image(Phase,WindowHandle);
Result
phase_rad returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
phase_rad is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic
Possible Successors
disp_image
Alternatives
phase_deg
See also
fft_image_inv, fft_generic, rft_generic
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. PowerByte (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Power spectrum of the input image.
Example
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_byte(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_byte returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_byte is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image
Alternatives
abs_image, convert_image_type, power_real, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Power spectrum of the input image.
Example
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_ln(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_ln returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_ln is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_real, power_byte
See also
fft_image, fft_generic, rft_generic
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : complex
Input image in frequency domain.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Power spectrum of the input image.
Example
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
fft_image(Image,&FFT);
power_real(FFT,&Power);
disp_image(Power,WindowHandle);
Result
power_real returns H_MSG_TRUE if the image is of correct type. If the input is empty the behavior can be set
via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
power_real is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
fft_image, fft_generic, rft_generic, convol_fft, convol_gabor
Possible Successors
disp_image, convert_image_type, scale_image
Alternatives
abs_image, convert_image_type, power_byte, power_ln
See also
fft_image, fft_generic, rft_generic
Module
Foundation
Possible Predecessors
optimize_rft_speed, read_fft_optimization_data
Possible Successors
convol_fft, convert_image_type, power_byte, power_real, power_ln, phase_deg,
phase_rad
Alternatives
fft_generic, fft_image, fft_image_inv
Module
Foundation
3.7 Geometric-Transformations
T_affine_trans_image ( const Hobject Image, Hobject *ImageAffinTrans,
const Htuple HomMat2D, const Htuple Interpolation,
const Htuple AdaptImageSize )
The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter Interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see set_system) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is lower in this case. For byte images, the differences from the more accurate calculation (using ’int_zooming’ =
’false’) are typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I/215 , where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image can be controlled by the parameter AdaptImageSize: With value ’true’ the size
will be adapted so that no clipping occurs at the right or lower edge. With value ’false’ the target image has the
same size as the input image. Note that, independent of AdaptImageSize, the image is always clipped at the
left and upper edge, i.e., all image parts that have negative coordinates after the transformation are clipped.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image corresponds to the following
chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output pixels as
homogeneous vectors):
⎛ RowTrans_i ⎞   ⎛ 1 0 −0.5 ⎞              ⎛ 1 0 +0.5 ⎞   ⎛ Row_i ⎞
⎜ ColTrans_i ⎟ = ⎜ 0 1 −0.5 ⎟ · HomMat2D · ⎜ 0 1 +0.5 ⎟ · ⎜ Col_i ⎟
⎝     1      ⎠   ⎝ 0 0   1  ⎠              ⎝ 0 0   1  ⎠   ⎝   1   ⎠
As an effect, you might get unexpected results when creating affine transformations based on coordinates that are
derived from the image, e.g., by operators like area_center_gray. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric image and then rotate the image around this point using
hom_mat2d_rotate, the resulting image will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_image:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image(Image, ImageAffinTrans, HomMat2DAdapted, ’constant’,
’false’)
Parameter
hom_mat2d_identity(Matrix1)
hom_mat2d_scale(Matrix1,0.5,0.5,256.0,256.0,Matrix2)
hom_mat2d_rotate(Matrix2,3.14,256.0,256.0,Matrix3)
hom_mat2d_translate(Matrix3,-128.0,-128.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,1)
draw_rectangle2(WindowHandle,L,C,Phi,L1,L2)
hom_mat2d_identity(Matrix1)
get_system(width,Width)
get_system(height,Height)
hom_mat2d_translate(Matrix1,Height/2.0-L,Width/2.0-C,Matrix2)
hom_mat2d_rotate(Matrix2,3.14-Phi,Height/2.0,Width/2.0,Matrix3)
hom_mat2d_scale(Matrix3,Height/(2.0*L2),Width/(2.0*L1),
Height/2.0,Width/2.0,Matrix4)
affine_trans_image(Image,TransImage,Matrix4,1)
Result
If the matrix HomMat2D represents an affine transformation (i.e., not a projective transformation),
affine_trans_image returns H_MSG_TRUE. If the input is empty the behavior can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
affine_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_rotate, hom_mat2d_scale
Alternatives
affine_trans_image_size, zoom_image_size, zoom_image_factor, mirror_image,
rotate_image, affine_trans_region
See also
set_part_style
Module
Foundation
HALCON 8.0.2
192 CHAPTER 3. FILTER
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
affine_trans_image_size applies an arbitrary affine 2D transformation, i.e., scaling, rotation, translation,
and slant (skewing), to the images given in Image and returns the transformed images in ImageAffinTrans.
The affine transformation is described by the homogeneous transformation matrix given in HomMat2D, which
can be created using the operators hom_mat2d_identity, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_translate, etc., or be the result of operators like vector_angle_to_rigid.
The components of the homogeneous transformation matrix are interpreted as follows: The row coordinate of the
image corresponds to x and the column coordinate corresponds to y of the coordinate system in which the
transformation matrix was defined. This is necessary to obtain a right-handed coordinate system for the image. In particular,
this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite
naturally corresponds to the usual (row,column) order for coordinates in the image.
The region of the input image is ignored, i.e., assumed to be the full rectangle of the image. The region of the
resulting image is set to the transformed rectangle of the input image. If necessary, the resulting image is filled
with zero (black) outside of the region of the original image.
Generally, transformed points will lie between pixel coordinates. Therefore, an appropriate interpolation scheme
has to be used. The interpolation can also be used to avoid aliasing effects for scaled images. The quality and
speed of the interpolation can be set by the parameter Interpolation:
none Nearest-neighbor interpolation: The gray value is determined from the nearest pixel’s gray value (pos-
sibly low quality, very fast).
constant Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of mean
filter is used to prevent aliasing effects (medium quality and run time).
weighted Bilinear interpolation. The gray value is determined from the four nearest pixels through bilinear
interpolation. If the affine transformation contains a scaling with a scale factor < 1, a kind of Gaussian
filter is used to prevent aliasing effects (best quality, slow).
In addition, the system parameter ’int_zooming’ (see set_system) affects the accuracy of the transformation. If
’int_zooming’ is set to ’true’, the transformation for byte, int2 and uint2 images is carried out internally using fixed
point arithmetic, leading to much shorter execution times. However, the accuracy of the transformed gray values
is smaller in this case. For byte images, the differences to the more accurate calculation (using ’int_zooming’ =
’false’) is typically less than two gray levels. Correspondingly, for int2 and uint2 images, the gray value differences
are less than 1/128 times the dynamic gray value range of the image, i.e., they can be as large as 512 gray levels if
the entire dynamic range of 16 bit is used. Additionally, if a large scale factor is applied and a large output image
is obtained, then undefined gray values at the lower and at the right image border may result. The maximum width
Bmax of this border of undefined gray values can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale
factor in one dimension and I is the size of the output image in the corresponding dimension. For real images, the
parameter ’int_zooming’ does not affect the accuracy, since the internal calculations are always done using floating
point arithmetic.
The size of the target image is specified by the parameters Width and Height. Note that the image is always
clipped at the left and upper edge, i.e., all image parts that have negative coordinates after the transformation are
clipped. If the affine transformation (in particular, the translation) is chosen appropriately, a part of the image
can be transformed as well as cropped in one call. This is useful, for example, when using the variation model
(see compare_variation_model), because with this mechanism only the parts of the image that should be
examined are transformed.
Attention
The region of the input image is ignored.
The used coordinate system is the same as in affine_trans_pixel. This means that in fact not HomMat2D
is applied but a modified version. Therefore, applying affine_trans_image_size corresponds to the
following chain of transformations, which is applied to each point (Row_i, Col_i) of the image (input and output
pixels as homogeneous vectors):
( RowTrans_i )   ( 1 0 -0.5 )              ( 1 0 +0.5 )   ( Row_i )
( ColTrans_i ) = ( 0 1 -0.5 ) · HomMat2D · ( 0 1 +0.5 ) · ( Col_i )
(     1      )   ( 0 0    1 )              ( 0 0    1 )   (   1   )
As an effect, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the image, e.g., by operators like area_center_gray. For example, if you use this op-
erator to calculate the center of gravity of a rotationally symmetric image and then rotate the image around
this point using hom_mat2d_rotate, the resulting image will not lie on the original one. In such a
case, you can compensate this effect by applying the following translations to HomMat2D before using it in
affine_trans_image_size:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_image_size(Image, ImageAffinTrans, HomMat2DAdapted,
’constant’, Width, Height)
Parameter
Module
Matching
Result
If the parameters are valid, the operator gen_cube_map_mosaic returns the value H_MSG_TRUE. If
necessary, an exception is raised.
Parallelization Information
gen_cube_map_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_spherical_mosaic, gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
gen_empty_obj (Images)
for J := 1 to 6 by 1
read_image (Image, ’mosaic/pcb_’+J$’02’)
concat_obj (Images, Image, Images)
endfor
From := [1,2,3,4,5]
To := [2,3,4,5,6]
Num := |From|
ProjMatrices := []
for J := 0 to Num-1 by 1
F := From[J]
T := To[J]
select_obj (Images, F, ImageF)
select_obj (Images, T, ImageT)
points_foerstner (ImageF, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsF, ColJunctionsF, CoRRJunctionsF,
CoRCJunctionsF, CoCCJunctionsF, RowAreaF,
ColAreaF, CoRRAreaF, CoRCAreaF, CoCCAreaF)
points_foerstner (ImageT, 1, 2, 3, 200, 0.3, ’gauss’, ’false’,
RowJunctionsT, ColJunctionsT, CoRRJunctionsT,
CoRCJunctionsT, CoCCJunctionsT, RowAreaT,
ColAreaT, CoRRAreaT, CoRCAreaT, CoCCAreaT)
proj_match_points_ransac (ImageF, ImageT, RowJunctionsF,
ColJunctionsF, RowJunctionsT,
ColJunctionsT, ’ncc’, 21, 0, 0, 480, 640,
0, 0.5, ’gold_standard’, 1, 4364537,
ProjMatrix, Points1, Points2)
ProjMatrices := [ProjMatrices,ProjMatrix]
endfor
gen_projective_mosaic (Images, MosaicImage, 2, From, To, ProjMatrices,
’default’, ’false’, MosaicMatrices2D)
Parallelization Information
gen_projective_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, vector_to_proj_hom_mat2d,
hom_vector_to_proj_hom_mat2d
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
Parameter
Result
If the parameters are valid, the operator gen_spherical_mosaic returns the value H_MSG_TRUE. If
necessary, an exception is raised.
Parallelization Information
gen_spherical_mosaic is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
Alternatives
gen_cube_map_mosaic, gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Matching
weights [0...1] are scaled to the range of values of the ’uint2’ image and therefore hold integer values from 0 to
65535.
Furthermore, the weights must be chosen in a way that the range of values of the output image ImageMapped is
not exceeded. The geometric relation between the four channels 2-5 is illustrated in the following sketch:
2 3
4 5
The reference point of the four pixels is the upper left pixel. The linearized coordinate of the reference point is
stored in the first channel.
Attention
The weights must be chosen in a way that the range of values of the output image ImageMapped is not exceeded.
For runtime reasons during the mapping process, it is not checked whether the linearized coordinates which are
stored in the first channel of Map, lie inside the input image. Thus, it must be ensured by the user that this constraint
is fulfilled. Otherwise, the program may crash!
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Image to be mapped.
. Map (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : int4 / uint2
Image containing the mapping data.
. ImageMapped (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Mapped image.
Result
map_image returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
map_image is reentrant and processed without parallelization.
Possible Predecessors
gen_image_to_world_plane_map, gen_radial_distortion_map
See also
affine_trans_image, rotate_image
Module
Foundation
Mirror an image.
mirror_image reflects an image Image about one of three possible axes. If Mode is set to ’row’, it is reflected
about the horizontal axis, if Mode is set to ’column’, about the vertical axis, and if Mode is set to ’main’, about
the main diagonal x = y.
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Input image.
. ImageMirror (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4
/ real
Reflected image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Axis of reflection.
Default Value : "row"
List of values : Mode ∈ {"row", "column", "main"}
Example
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
mirror_image(Image,&MirImage,"row");
disp_image(MirImage,WindowHandle);
Parallelization Information
mirror_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image, rotate_image
See also
rotate_image, hom_mat2d_rotate
Module
Foundation
Parameter
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
polar_trans_image(Image,&PolarImage,100,100,314,200);
disp_image(PolarImage,WindowHandle);
Parallelization Information
polar_trans_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
polar_trans_image_ext
See also
polar_trans_image_inv, polar_trans_region, polar_trans_region_inv,
polar_trans_contour_xld, polar_trans_contour_xld_inv, affine_trans_image
Module
Foundation
pixel in the input image. With ’bilinear’, the gray value of a pixel in the output image is determined by bilinear
interpolation of the gray values of the four closest pixels in the input image. The mode ’bilinear’ results in images
of better quality, but is slower than the mode ’nearest_neighbor’.
The angles can be chosen from all real numbers. Center point and radii can be real as well. However, if they are
both integers and the difference of RadiusEnd and RadiusStart equals the height Height of the destination
image, calculation will be sped up through an optimized routine.
The radii and angles are inclusive, which means that the first row of the target image contains the circle with radius
RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles, where the
difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the first column
of the target image will be the same as the last.
To avoid this, do not make this difference exactly 2π, but 2π · (1 − 1/Width) instead.
The call:
polar_trans_image(Image, PolarTransImage, Row, Column, Width, Height)
produces the same result as the call:
polar_trans_image_ext(Image, PolarTransImage, Row-0.5, Column-0.5,
6.2831853, 6.2831853/Width, 0, Height-1, Width, Height, ’nearest_neighbor’)
The offset of 0.5 is necessary because polar_trans_image does not perform exact nearest-neighbor
interpolation. The radii and angles can be calculated using the information in the paragraph above, keeping in
mind that polar_trans_image does not handle its arguments inclusively. The start angle is bigger than the end
angle to make polar_trans_image_ext go clockwise, just like polar_trans_image does.
Attention
For speed reasons, the domain of the input image is ignored. The output image always has a complete rectangle as
its domain.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Input image.
. PolarTransImage (output_object) . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Output image.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Row coordinate of the center of the arc.
Default Value : 256
Suggested values : Row ∈ {0, 16, 32, 64, 128, 240, 256, 480, 512}
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Column coordinate of the center of the arc.
Default Value : 256
Suggested values : Column ∈ {0, 16, 32, 64, 128, 256, 320, 512, 640}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to the first column of the output image.
Default Value : 0.0
Suggested values : AngleStart ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853,
12.566370616}
. AngleEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Angle of the ray to be mapped to the last column of the output image.
Default Value : 6.2831853
Suggested values : AngleEnd ∈ {0.0, 0.78539816, 1.57079632, 3.141592654, 6.2831853, 12.566370616}
. RadiusStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to the first row of the output image.
Default Value : 0
Suggested values : RadiusStart ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusStart
. RadiusEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Radius of the circle to be mapped to the last row of the output image.
Default Value : 100
Suggested values : RadiusEnd ∈ {0, 16, 32, 64, 100, 128, 256, 512}
Typical range of values : 0 ≤ RadiusEnd
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image_size, projective_trans_contour_xld,
projective_trans_region, projective_trans_point_2d, projective_trans_pixel
Module
Foundation
Apply a projective transformation to an image and specify the output image size.
projective_trans_image_size applies the projective transformation (homography) determined by the
homogeneous transformation matrix HomMat2D on the input image Image and stores the result into the output
image TransImage.
TransImage will be clipped at the output dimensions Height×Width. Apart from this,
projective_trans_image_size is identical to its alternative version projective_trans_image.
Parameter
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
rotate_image(Image,&RotImage,270);
disp_image(RotImage,WindowHandle);
Parallelization Information
rotate_image is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
hom_mat2d_rotate, affine_trans_image
See also
mirror_image
Module
Foundation
zoom_image_factor scales the image Image by a factor of ScaleWidth in width and a factor
ScaleHeight in height. The parameter Interpolation determines the type of interpolation used (see
affine_trans_image).
Attention
If the system parameter ’int_zooming’ is set to ’true’, the internally used integer arithmetic may lead to errors in
the following two cases: First, if zoom_image_factor is used on an uint2 or int2 image with high dynamics
(i.e. images containing values close to the respective limits) in combination with scale factors smaller than 0.5,
then the gray values of the output image may be erroneous. Second, if Interpolation is set to a value other
than ’none’, a large scale factor is applied, and a large output image is obtained, then undefined gray values at the
lower and at the right image border may result. The maximum width Bmax of this border of undefined gray values
can be estimated as Bmax = 0.5 · S · I / 2^15, where S is the scale factor in one dimension and I is the size of the
output image in the corresponding dimension. In both cases, it is recommended to set ’int_zooming’ to ’false’ via
the operator set_system.
Parameter
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
zoom_image_factor(Image,&ZooImage,0,0.5,0.5);
disp_image(ZooImage,WindowHandle);
Parallelization Information
zoom_image_factor is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_size, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation
read_image(&Image,"affe");
disp_image(Image,WindowHandle);
zoom_image_size(Image,&ZooImage,0,200,200);
disp_image(ZooImage,WindowHandle);
Parallelization Information
zoom_image_size is reentrant and automatically parallelized (on tuple level, channel level).
Alternatives
zoom_image_factor, affine_trans_image, hom_mat2d_scale
See also
hom_mat2d_scale, affine_trans_image
Module
Foundation
3.8 Inpainting
The operator inpainting_aniso uses anisotropic diffusion according to the model of Perona and Malik
to continue image edges that cross the border of the region Region and to connect them inside of Region.
With this, the structure of the edges in Region is made consistent with the surrounding image, so that the
concealment of errors or unwanted objects in the input image, a so-called inpainting, is less visible to the human
observer, since no obvious artefacts or smudges remain.
Considering the image as a gray value function u, the algorithm is a discretization of the partial differential equation
u_t = div( g(|∇u|^2, c) · ∇u )
with the initial value u = u0 defined by Image at a time t0 = 0. The equation is iterated Iterations times in
time steps of length Theta, so that the output image InpaintedImage contains the gray value function at the
time Iterations · Theta.
The primary goal of the anisotropic diffusion, which is also referred to as nonlinear isotropic diffusion, is the
elimination of image noise in constant image patches while preserving the edges in the image. The distinction
between edges and constant patches is achieved using the threshold Contrast on the magnitude of the gray
value differences between adjacent pixels. Contrast is referred to as the contrast parameter and is abbreviated
with the letter c. If the edge information is distributed in an environment of the already existing edges by smoothing
the edge amplitude matrix, it is furthermore possible to continue edges into the computation area Region. The
standard deviation of this smoothing process is determined by the parameter Rho.
The algorithm used is basically the same as in the anisotropic diffusion filter anisotropic_diffusion,
except that here, border treatment is not done by mirroring the gray values at the border of Region. Instead, this
procedure is only applicable on regions that keep a distance of at least 3 pixels to the border of the image matrix
of Image, since the gray values on this band around Region are used to define the boundary conditions for the
respective differential equation and thus assure consistency with the neighborhood of Region. Please note that
the inpainting progress is restricted to those pixels that are included in the ROI of the input image Image. If the
ROI does not include the entire region Region, a band around the intersection of Region and the ROI is used to
define the boundary values.
The result of the diffusion process depends on the gray values in the computation area of the input image Image.
It must be pointed out that already existing image edges are preserved within Region. In particular, this holds
for gray value jumps at the border of Region, which can result for example from a previous inpainting with
constant gray value. If the procedure is to be used for inpainting, it is recommended to apply the operator
harmonic_interpolation first to remove all unwanted edges inside the computation area and to minimize
the gray value difference between adjacent pixels, unless the input image already contains information inside
Region that should be preserved.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:
g1(x, c) = 1 / sqrt(1 + 2·x/c^2)
Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case however, there remains a slight diffusion even across edges of an amplitude larger than c.
g2(x, c) = 1 / (1 + x/c^2)
The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C · c^8 / x^4)
The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting Mode to
’weickert’, is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c^2.
Furthermore, the choice of the value ’shock’ is possible for Mode to select a contrast invariant modification of the
anisotropic diffusion. In this variant, the generation of edges is not achieved by variation of the diffusion coefficient
g, but the constant coefficient g = 1 and thus isotropic diffusion is used. Additionally, a shock filter of type
u_t = −sgn(∇|∇u|) · |∇u|
is applied, which, just like a negative diffusion coefficient, causes a sharpening of the edges, but works independent
of the absolute value of |∇u|. In this mode, Contrast does not have the meaning of a contrast parameter,
but specifies the ratio between the diffusion and the shock filter part applied at each iteration step. Hence, the
value 0 would correspond to pure isotropic diffusion, as used in the operator isotropic_diffusion. The
parameter is scaled in such a way that diffusion and sharpening cancel each other out for Contrast = 1. A
value Contrast > 1 should not be used, since it would make the algorithm unstable.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / real
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Inpainting region.
. InpaintedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Output image.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of edge sharpening algorithm.
Default Value : "weickert"
List of values : Mode ∈ {"weickert", "perona-malik", "parabolic", "shock"}
. Contrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Contrast parameter.
Default Value : 5.0
Suggested values : Contrast ∈ {0.5, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction : Contrast > 0
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Step size.
Default Value : 0.5
Suggested values : Theta ∈ {0.5, 1.0, 5.0, 10.0, 30.0, 100.0}
Restriction : Theta > 0
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 10
Suggested values : Iterations ∈ {1, 3, 10, 100, 500}
Restriction : Iterations ≥ 1
. Rho (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Smoothing coefficient for edge information.
Default Value : 3.0
Suggested values : Rho ∈ {0.0, 0.1, 0.5, 1.0, 3.0, 10.0}
Restriction : Rho ≥ 0
Example (Syntax: HDevelop)
Parallelization Information
inpainting_aniso is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_mcf, inpainting_texture,
inpainting_ced
References
J. Weickert: “Anisotropic Diffusion in Image Processing”; PhD Thesis; Fachbereich Mathematik, Universität
Kaiserslautern; 1996.
P. Perona, J. Malik; “Scale-space and edge detection using anisotropic diffusion”; Transactions on Pattern Analysis
and Machine Intelligence 12(7), pp. 629-639; IEEE; 1990.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
u_t = div( G(u) · ∇u )
formulated by Weickert. With a 2 × 2 coefficient matrix G that depends on the gray values in Image, this is an
enhancement of the mean curvature flow or intrinsic heat equation
u_t = div( ∇u / |∇u| ) · |∇u| = curv(u) · |∇u|
on the gray value function u defined by the input image Image at a time t0 = 0. The smoothing opera-
tor mean_curvature_flow is a direct application of the mean curvature flow equation. With the opera-
tor inpainting_mcf, it can also be used for image inpainting. The discrete diffusion equation is solved in
Iterations time steps of length Theta, so that the output image InpaintedImage contains the gray value
function at the time Iterations · Theta.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
Similar to the operator inpainting_mcf, the structure of the image data in Region is simplified by smoothing
the level lines of Image. By this, image errors and unwanted objects can be removed from the image, while the
edges in the neighborhood are extended continuously. This procedure is called image inpainting. The objective is
to introduce a minimum amount of artefacts or smoothing effects, so that the image manipulation is least visible to
a human beholder.
While the matrix G is given by
G_MCF(u) = I − (1 / |∇u|^2) · ∇u (∇u)^T,

in the case of the operator inpainting_mcf, where I denotes the unit matrix, G_MCF is again smoothed
componentwise by a Gaussian filter of standard deviation Rho for coherence_enhancing_diff. Then, the
final coefficient matrix
is constructed from the eigenvalues λ1 , λ2 and eigenvectors w1 , w2 of the resulting intermediate matrix, where the
functions
g1(p) = 0.001
g2(p) = 0.001 + 0.999 · exp(−1/p)
Parallelization Information
inpainting_ced is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_aniso, inpainting_mcf,
inpainting_texture
References
J. Weickert, V. Hlavac, R. Sara; “Multiscale texture enhancement”; Computer analysis of images and patterns,
Lecture Notes in Computer Science, Vol. 970, pp. 230-237; Springer, Berlin; 1995.
J. Weickert, B. ter Haar Romeny, L. Florack, J. Koenderink, M. Viergever; “A review of nonlinear diffusion
filtering”; Scale-Space Theory in Computer Vision, Lecture Notes in Comp. Science, Vol. 1252, pp. 3-28;
Springer, Berlin; 1997.
Module
Foundation
• The order of the pixels to process is given by their Euclidean distance to the boundary of the region to inpaint.
• A new value ui is computed as a weighted average of already known values uj within a disc of radius
Epsilon around the current pixel. The disc is restricted to already known pixels.
• The size of this scheme’s mask depends on Epsilon.
The initially used image data comes from a stripe of thickness Epsilon around the region to inpaint. Thus,
Epsilon must be at least 1 for the scheme to work, but should be greater. The maximum value for Epsilon
depends on the gray values that should be transported into the region. A value of Epsilon = 5 is suitable in
many cases.
Since the goal is to close broken contour lines, the direction of the level lines must be estimated and used in the
weight. This estimated direction is called the coherence direction, and is computed by means of the structure tensor
S:
S = G_ρ ∗ (Dv · Dv^T)
and
v = G_σ ∗ u
where ∗ denotes convolution, u the gray value image, D the derivative, and G_σ, G_ρ Gaussian kernels with
standard deviations σ and ρ. These standard deviations are defined by the operator’s parameters Sigma and Rho.
Sigma should be on the order of the noise or of unimportant small objects, which are then excluded from the
estimation step by the pre-smoothing. Rho gives the size of the window around a pixel that will be used for
direction estimation. The coherence direction c is then given by the eigendirection of S with respect to the
minimal eigenvalue λ, i.e.
Sc = λc, |c| = 1
For multichannel or color images, the scheme above is applied to each channel separately, but the weights must be
the same for all channels to propagate information in the same direction. Since the weight depends on the coherence
direction, the common direction is given by the eigendirection of a composite structure tensor. If u1 , ..., un denote
the n channels of the image, the channel structure tensors S1 , ..., Sn are computed and then combined into the
composite structure tensor S.
S = Σ_{i=1}^{n} a_i S_i
The coefficients ai are passed in ChannelCoefficients, which is a tuple of length n or length 1. If the tuple’s
length is 1, the arithmetic mean is used, i.e., ai = 1/n. If the length of ChannelCoefficients matches the
number of channels, the ai are set to
a_i = ChannelCoefficients_i / Σ_{j=1}^{n} ChannelCoefficients_j
in order to get a well-defined convex combination. Hence, the ChannelCoefficients must be greater than or
equal to zero and their sum must be greater than zero. If the tuple’s length is neither 1 nor the number of channels
or the requirement above is not satisfied, the operator returns an error message.
The purpose of using other ChannelCoefficients than the arithmetic mean is to adapt to different color
codes. The coherence direction is a geometrical information of the composite image, which is given by high
contrasts such as edges. Thus the more contrast a channel has, the more geometrical information it contains, and
consequently the greater its coefficient should be chosen (relative to the others). For RGB images, [0.299, 0.587,
0.114] is a good choice.
The weight in the scheme is the product of a directional component and a distance component. If p is the 2D
coordinate vector of the current pixel to be inpainted and q the 2D coordinate of a pixel in the neighborhood (the
disc restricted to already known pixels), the directional component measures the deviation of the vector p − q
from the coherence direction. This deviation is scaled exponentially by β: a large deviation yields a low
directional component, a small deviation a high one. β is controlled by Kappa (in percent):
β = 20 ∗ Epsilon ∗ Kappa/100
Kappa defines how important it is to propagate information along the coherence direction, so a large Kappa
yields sharp edges, while a low Kappa allows for more diffusion.
A special case is when Kappa is zero: In this case the directional component of the weight is constant (one).
The direction estimation step is then skipped to save computational costs and the parameters Sigma, Rho,
ChannelCoefficients become meaningless, i.e., the propagation of information is not based on the structures
visible in the image.
The distance component is 1/|p − q|. Consequently, if q is far away from p, a low distance component is assigned,
whereas if it is near to p, a high distance component is assigned.
Parameter
Parallelization Information
inpainting_ct is reentrant and automatically parallelized (on tuple level).
Alternatives
harmonic_interpolation, inpainting_aniso, inpainting_mcf, inpainting_ced,
inpainting_texture
References
Folkmar Bornemann, Tom März: “Fast Image Inpainting Based On Coherence Transport”; Journal of Mathemati-
cal Imaging and Vision; vol. 28, no. 3; pp. 259-278; 2007.
Module
Foundation
u_t = div(∇u/|∇u|) · |∇u| = curv(u) · |∇u|
on the gray value function u defined in the region Region by the input image Image at a time t0 = 0.
The discretized equation is solved in Iterations time steps of length Theta, so that the output image
InpaintedImage contains the gray value function at the time Iterations · Theta.
A stationary state of the mean curvature flow equation, which is also the basis of the operator
mean_curvature_flow, has the special property that the level lines of u all have the curvature 0. This means
that after sufficiently many iterations there are only straight edges left inside the computation area of the output
image InpaintedImage. In this way, the structure of objects inside Region can be simplified, while the
remaining edges are continuously connected to those of the surrounding image matrix. This allows the removal of
image defects and unwanted objects from the input image (so-called image inpainting) in a manner that is barely
visible to a human observer, since no obvious artifacts or smudges remain.
To detect the image direction more robustly, in particular on noisy input data, an additional isotropic smoothing
step can precede the computation of the gray value gradients. The parameter Sigma determines the magnitude of
the smoothing by means of the standard deviation of a corresponding Gaussian convolution kernel, as used in the
operator isotropic_diffusion for isotropic image smoothing.
Parameter
Alternatives
harmonic_interpolation, inpainting_ct, inpainting_aniso, inpainting_ced,
inpainting_texture
References
M. G. Crandall, P. Lions; “Convergent Difference Schemes for Nonlinear Parabolic Equations and Mean Curvature
Motion”; Numer. Math. 75 pp. 17-41; 1996.
G. Aubert, P. Kornprobst; “Mathematical Problems in Image Processing”; Applied Mathematical Sciences 147;
Springer, New York; 2002.
Module
Foundation
If the structure size of the ROI of Image or of the computation area Region is smaller than MaskSize, the
execution time of the algorithm can increase drastically. Hence, it is recommended to use only clearly structured
input regions.
Parameter
3.9 Lines
bandpass_image ( const Hobject Image, Hobject *ImageBandpass,
const char *FilterType )
FilterType: ’lines’
In contrast to the edge operator sobel_amp this filter detects lines instead of edges, i.e., two closely adjacent
edges.
0 −2 −2 −2 0
−2 0 3 0 −2
−2 3 12 3 −2
−2 0 3 0 −2
0 −2 −2 −2 0
At the border of the image the gray values are mirrored. Over- and underflows of gray values are clipped. The
resulting images are returned in ImageBandpass.
Parameter
bandpass_image(Image,&LineImage,"lines");
threshold(LineImage,&Lines,60.0,255.0);
skeleton(Lines,&ThinLines);
Result
bandpass_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
bandpass_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold, skeleton
Alternatives
convol_image, topographic_sketch, texture_laws
See also
highpass_image, gray_skeleton
Module
Foundation
Because color lines are defined as dark lines in the amplitude image, in contrast to lines_gauss no distinction
is made for single-channel images as to whether the lines are darker or brighter than their surroundings. Furthermore,
lines_color also returns staircase lines, i.e., lines for which the gray value of the lines lies between the gray
values in the surrounding area to the left and right sides of the line. In multi-channel images, the above definition
allows each channel to have a different line type. For example, in a three-channel image the first channel may have
a dark line, the second channel a bright line, and the third channel a staircase line at the same position.
If ExtractWidth is set to ’true’, the line width is extracted for each line point. Since the line extractor is
unable to extract certain junctions for differential geometric reasons, it tries to extract these by different
means if CompleteJunctions is set to ’true’.
lines_color links the line points into lines by using an algorithm similar to a hysteresis threshold op-
eration, which is also used in lines_gauss and edges_color_sub_pix. Points with an amplitude
larger than High are immediately accepted as belonging to a line, while points with an amplitude smaller
than Low are rejected. All other points are accepted as lines if they are connected to accepted line points (see
also lines_gauss). Here, amplitude means the line amplitude of the dark line (see lines_gauss and
lines_facet). This value corresponds to the third directional derivative of the smoothed input image in the
direction perpendicular to the line.
For the choice of the thresholds High and Low one has to keep in mind that the third directional derivative depends
on the amplitude and width of the line as well as the choice of Sigma. The value of the third derivative depends
linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the line there
is an inverse dependence: The wider the line is, the smaller the response gets. This holds analogously for the
dependence on Sigma: The larger Sigma is chosen, the smaller the third derivative will be. This means that
for larger smoothing correspondingly smaller values for High and Low should be chosen.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_color defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line (oriented such that the normal vectors point to
the right side of the line as the line is traversed from start to end point; the angles are given with
respect to the row axis of the image.)
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’, additionally the following attributes are defined:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
All these attributes can be queried via the operator get_contour_attrib_xld.
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. The lowest allowable value is Sigma ≥ w/2.5. If, for
example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected. If it is expected that staircase lines are present in at least one channel, and if such lines should
be extracted, in addition to the above restriction, Sigma ≤ w should be selected. This is necessary because
staircase lines turn into normal step edges for large amounts of smoothing, and therefore no longer appear as dark
lines in the amplitude image of the color edge filter.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1
The extraction is done by using the facet model, i.e., a least squares fit, to determine the parameters of a quadratic
polynomial in x and y for each point of the image. The parameter MaskSize determines the size of the window
used for the least squares fit. Larger values of MaskSize lead to a larger smoothing of the image, but can
lead to worse localization of the line. The parameters of the polynomial are used to calculate the line direction
for each pixel. Pixels which exhibit a local maximum in the second directional derivative perpendicular to the
line direction are marked as line points. The line points found in this manner are then linked to contours. This
is done by immediately accepting line points that have a second derivative larger than High. Points that have
a second derivative smaller than Low are rejected. All other line points are accepted if they are connected to
accepted points by a connected path. This is similar to a hysteresis threshold operation with infinite path length
(see hysteresis_threshold). However, this function is not used internally since it does not allow the
extraction of sub-pixel precise contours.
The gist of how to select the thresholds in the description of lines_gauss also holds for this operator. A value
of Sigma = 1.5 there roughly corresponds to a MaskSize of 5 here.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_facet defines the following attributes for each line point:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
These attributes can be queried via the operator get_contour_attrib_xld.
Attention
The smaller the filter size MaskSize is chosen, the more short, fragmented lines will be extracted. This can lead
to considerably longer execution times.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the facet model mask.
Default Value : 5
List of values : MaskSize ∈ {3, 5, 7, 9, 11}
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
. LightDark (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Extract bright or dark lines.
Default Value : "light"
List of values : LightDark ∈ {"dark", "light"}
Example
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ MaskSize).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_facet requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_facet returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
lines_facet is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_gauss
See also
bandpass_image, dyn_threshold, topographic_sketch
References
A. Busch: “Fast Recognition of Lines in Digital Images Without User-Supplied Parameters”. In H. Ebner, C.
Heipke, K. Eder, eds., “Spatial Information from Digital Photogrammetry and Computer Vision”, International
Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3/1, pp. 91-97, 1994.
Module
2D Metrology
For the choice of the thresholds High and Low one has to keep in mind that the second directional derivative
depends on the amplitude and width of the line as well as the choice of Sigma. The value of the second derivative
depends linearly on the amplitude, i.e., the larger the amplitude, the larger the response. For the width of the
line there is an approximately inverse exponential dependence: The wider the line is, the smaller the response
gets. This holds analogously for the dependence on Sigma: The larger Sigma is chosen, the smaller the second
derivative will be. This means that for larger smoothing correspondingly smaller values for High and Low have
to be chosen. Two examples help to illustrate this: If 5 pixel wide lines with an amplitude larger than 100 are to be
extracted from an image with a smoothing of Sigma = 1.5, High should be chosen larger than 14. If, on the other
hand, 10 pixel wide lines with an amplitude larger than 100 and a Sigma = 3 are to be detected, High should be
chosen larger than 3.5. For the choice of Low values between 0.25 High and 0.5 High are appropriate.
The extracted lines are returned in a topologically sound data structure in Lines. This means that lines are
correctly split at junction points.
lines_gauss defines the following attributes for each line point if ExtractWidth was set to ’false’:
’angle’ The angle of the direction perpendicular to the line
’response’ The magnitude of the second derivative
If ExtractWidth was set to ’true’ and CorrectPositions to ’false’, the following attributes are defined in
addition to the above ones:
’width_left’ The line width to the left of the line
’width_right’ The line width to the right of the line
Finally, if CorrectPositions was set to ’true’, additionally the following attributes are defined:
’asymmetry’ The asymmetry of the line point
’contrast’ The contrast of the line point
Here, the asymmetry is positive if the asymmetric part, i.e., the part with the weaker gradient, is on the right side of
the line, while it is negative if the asymmetric part is on the left side of the line. All these attributes can be queried
via the operator get_contour_attrib_xld.
Attention
In general, but in particular if the line width is to be extracted, Sigma ≥ w/√3 should be selected, where w is
the width (half the diameter) of the lines in the image. The lowest allowable value is Sigma ≥ w/2.5. If, for
example, lines with a width of 4 pixels (diameter 8 pixels) are to be extracted, Sigma ≥ 2.3
should be selected.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Lines (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted lines.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Amount of Gaussian smoothing to be applied.
Default Value : 1.5
Suggested values : Sigma ∈ {1, 1.2, 1.5, 1.8, 2, 2.5, 3, 4, 5}
Typical range of values : 0.7 ≤ Sigma ≤ 20
Recommended Increment : 0.1
. Low (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the hysteresis threshold operation.
Default Value : 3
Suggested values : Low ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10}
Typical range of values : 0 ≤ Low ≤ 20
Recommended Increment : 0.5
Restriction : Low ≥ 0
. High (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the hysteresis threshold operation.
Default Value : 8
Suggested values : High ∈ {0, 0.5, 1, 2, 3, 4, 5, 8, 10, 12, 15, 18, 20, 25}
Typical range of values : 0 ≤ High ≤ 35
Recommended Increment : 0.5
Restriction : (High ≥ 0) ∧ (High ≥ Low)
Complexity
Let A be the number of pixels in the domain of Image. Then the runtime complexity is O(A ∗ Sigma).
Let S = Width ∗ Height be the number of pixels of Image. Then lines_gauss requires at least 55 ∗ S bytes
of temporary memory during execution.
Result
lines_gauss returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If
the input is empty the behaviour can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception handling is raised.
Parallelization Information
lines_gauss is reentrant and processed without parallelization.
Possible Successors
gen_polygons_xld
Alternatives
lines_facet
See also
bandpass_image, dyn_threshold, topographic_sketch
References
C. Steger: “Extracting Curvilinear Structures: A Differential Geometric Approach”. In B. Buxton, R. Cipolla, eds.,
“Fourth European Conference on Computer Vision”, Lecture Notes in Computer Science, Volume 1064, Springer
Verlag, pp. 630-641, 1996.
C. Steger: “Extraction of Curved Lines from Images”. In “13th International Conference on Pattern Recognition”,
Volume II, pp. 251-255, 1996.
C. Steger: “An Unbiased Detector of Curvilinear Structures”. Technical Report FGBV-96-03, Forschungsgruppe
Bildverstehen (FG BV), Informatik IX, Technische Universität München, July 1996.
Module
2D Metrology
3.10 Match
exhaustive_match ( const Hobject Image, const Hobject RegionOfInterest,
const Hobject ImageTemplate, Hobject *ImageMatch, const char *Mode )
whereby X[i][j] indicates the gray value in the ith column and jth row of the image X. (l, c) is the centre of
the region of ImageTemplate. u and v are chosen so that all points of the template will be reached; i, j
run across the RegionOfInterest. At the image border only those parts of ImageTemplate are
considered which lie inside the image (i.e., u and v are restricted correspondingly). Range of values: 0 -
255 (255 = best fit).
’dfd’ Calculating the average “displaced frame difference”:
ImageMatch[i][j] = ( Σ_{u,v} |Image[i − u][j − v] − ImageTemplate[l − u][c − v]| ) / AREA(ImageTemplate)
The terms are the same as in ’norm_correlation’. AREA(X) means the area of the region X. Range of values:
0 (best fit) - 255.
Calculating the normalized correlation as well as the “displaced frame difference” is very time consuming (with
regard to the area of ImageTemplate). Therefore it is important to restrict the input region
(RegionOfInterest) if possible, i.e., to apply the filter only in a very confined “region of interest”.
As far as quality is concerned, both modes return comparable results, although the mode ’dfd’ is faster by a factor
of about 3.5.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. RegionOfInterest (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area to be searched in the input image.
. ImageTemplate (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
This area will be “matched” by Image within the RegionOfInterest.
. ImageMatch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Result image: values of the matching criterion.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired matching criterion.
Default Value : "dfd"
List of values : Mode ∈ {"norm_correlation", "dfd"}
Example
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
Result
If the parameter values are correct, the operator exhaustive_match returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
local_max, threshold
Alternatives
exhaustive_match_mg
Module
Foundation
The operator exhaustive_match_mg therefore is not simply a filter, but can also be considered as a member
of the class of region transformations.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. ImageTemplate (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
The domain of this image will be matched with Image.
. ImageMatch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte
Result image and result region: values of the matching criterion within the determined “region of interest”.
Number of elements : ImageMatch = Image
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired matching criterion.
Default Value : "dfd"
List of values : Mode ∈ {"norm_correlation", "dfd"}
. Level (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Start level in the resolution pyramid (highest resolution: level 0).
Default Value : 1
List of values : Level ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}
Restriction : (Level < ld(width(Image))) ∧ (Level < ld(height(Image)))
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Threshold to determine the “region of interest”.
Default Value : 30
Suggested values : Threshold ∈ {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95,
100, 105, 110, 115, 120, 125, 130, 135, 140, 145, 150, 155, 160, 165, 170, 175, 180, 185, 190, 195, 200, 205,
210, 215, 220, 225, 230, 235, 240, 245, 250}
Typical range of values : 0 ≤ Threshold ≤ 255
Minimum Increment : 1
Recommended Increment : 5
Example
read_image(&Image,"monkey");
disp_image(Image,WindowHandle);
draw_rectangle2(WindowHandle,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
reduce_domain(Image,Rectangle,&Template);
exhaustive_match_mg(Image,Template,&ImageMatch,"dfd",1,30);
invert_image(ImageMatch,&ImageInvert);
local_max(ImageInvert,&BestFit);
disp_region(BestFit,WindowHandle);
Result
If the parameter values are correct, the operator exhaustive_match_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behaviour can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
exhaustive_match_mg is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, draw_rectangle1
Possible Successors
threshold, local_max
Alternatives
exhaustive_match
See also
gen_gauss_pyramid
Module
Foundation
gen_gauss_pyramid(Image,Pyramid,"weighted",0.5);
count_obj(Pyramid,&num);
for (i=1; i<=num; i++)
{
select_obj(Pyramid,&Single,i);
disp_image(Single,WindowHandle);
clear_obj(Single);
}
Parallelization Information
gen_gauss_pyramid is reentrant and automatically parallelized (on channel level).
Possible Successors
image_to_channels, count_obj, select_obj, copy_obj
Alternatives
zoom_image_size, zoom_image_factor
See also
affine_trans_image
Module
Foundation
Parallelization Information
monotony is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
binomial_filter, gauss_image, median_image, mean_image, smooth_image,
invert_image
Possible Successors
threshold, exhaustive_match, disp_image
Alternatives
local_max, topographic_sketch, corner_response
Module
Foundation
3.11 Misc
convol_image ( const Hobject Image, Hobject *ImageResult,
const char *FilterMask, const char *Margin )
All image points are convolved with the filter mask. If an overflow or underflow occurs, the resulting gray value
is clipped. Hence, if filters that result in negative output values are used (e.g., derivative filters) the input image
should be of type int2. If a filename is given in FilterMask the filter mask is read from a text file with the
following structure:
<Mask size>
<Inverse weight of the mask>
<Matrix>
The first line contains the size of the filter mask, given as two numbers separated by white space (e.g., 3 3 for
3 × 3). Here, the first number defines the height of the filter mask, while the second number defines its width. The
next line contains the inverse weight of the mask, i.e., the number by which the convolution of a particular image
point is divided. The remaining lines contain the filter mask as integer numbers (separated by white space), one
line of the mask per line in the file. The file must have the extension “.fil”. This extension must not be passed to
the operator. If the filter mask is to be computed from a tuple, the tuple given in FilterMask must also satisfy
the structure described above. However, in this case the line feed is omitted.
For example, let’s assume we want to use the following filter mask:

       1 2 1
1/16 · 2 4 2
       1 2 1
If the filter mask should be generated from a file, then the file should look like this:
3 3
16
1 2 1
2 4 2
1 2 1
In contrast, if the filter mask should be generated from a tuple, then the following tuple must be passed in
FilterMask:
[3,3,16,1,2,1,2,4,2,1,2,1]
Parameter
Expand the domain of an image and set the gray values in the expanded domain.
expand_domain_gray expands the border gray values of the domain outwards. The width of the expansion
is set by the parameter ExpansionRange. All filters in HALCON use gray values of the pixels outside the
domain depending on the filter width. This may lead to undesirable side effects especially in the border region
of the domain. For example, if the foreground (domain) and the background of the image differ strongly in
brightness, the result of a filter operation may lead to undesired darkening or brightening at the border of the
domain. In order to avoid this drawback, the domain is expanded by expand_domain_gray in a preliminary
stage, copying the gray values of the border pixels to the outside of the domain. In addition, the domain itself is
also expanded to reflect the newly set pixels. Therefore, in many cases it is reasonable to reduce the domain again
(reduce_domain or change_domain) after using expand_domain_gray and call the filter operation
afterwards. ExpansionRange should be set to half of the filter width.
Parameter
. InputImage (input_object) . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image with domain to be expanded.
. ExpandedImage (output_object) . . . . . . . . image(-array) ; Hobject * : byte / int1 / int2 / uint2 / int4 / real
Output image with new gray values in the expanded domain.
. ExpansionRange (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Radius of the gray value expansion, measured in pixels.
Default Value : 2
Suggested values : ExpansionRange ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16}
Restriction : ExpansionRange ≥ 1
Example (Syntax: HDevelop)
read_image(Fabrik, ’fabrik.tif’);
gen_rectangle2(Rectangle_Label,243,320,-1.55,62,28);
reduce_domain(Fabrik, Rectangle_Label, Fabrik_Label);
/* Character extraction without gray value expansion: */
mean_image(Fabrik_Label,Label_Mean_normal,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_normal,Characters_normal,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_normal);
/* The characters in the border region are not extracted ! */
stop();
/* Character extraction with gray value expansion: */
expand_domain_gray(Fabrik_Label, Label_expanded,15);
reduce_domain(Label_expanded,Rectangle_Label, Label_expanded_reduced);
mean_image(Label_expanded_reduced,Label_Mean_expanded,31,31);
dyn_threshold(Fabrik_Label,Label_Mean_expanded,Characters_expanded,10,’dark’);
dev_display(Fabrik);
dev_display(Characters_expanded);
/* Now, even in the border region the characters are recognized */
Complexity
Let L be the perimeter of the domain. Then the runtime complexity is approximately O(L · ExpansionRange).
Result
expand_domain_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is
raised.
Parallelization Information
expand_domain_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
reduce_domain
Possible Successors
reduce_domain, mean_image, dyn_threshold
See also
reduce_domain, mean_image
Module
Foundation
HALCON 8.0.2
238 CHAPTER 3. FILTER
Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
gray_inside determines the “cheapest” path to the image border for each point in the image, i.e., the path on
which the lowest gray values have to be overcome. The resulting image contains the difference of the gray value
of the particular point and the maximum gray value on the path. Bright areas in the result image therefore signify
that these areas (which are typically dark in the original image) are surrounded by bright areas. Dark areas in the
result image signify that there are only small gray value differences between them and the image border (which
doesn’t mean that they are surrounded by dark areas; a small “gap” of dark values suffices). The value 0 (black) in
the result image signifies that only darker or equally bright pixels exist on the path to the image border.
The operator is implemented by first segmenting the image into basins and watersheds using the watersheds
operator. If the image is regarded as a gray value mountain range, basins are the places where water accumulates
and the mountain ridges are the watersheds. Then, the watersheds are distributed to adjacent basins, thus leaving
only basins. The border of the domain (region) of the original image is now searched for the lowest gray value,
and the region in which it resides is given its result values. If the lowest gray value resides on the image border,
all result values can be calculated immediately using the gray value differences to the darkest point. If the smallest
found gray value lies in the interior of a basin, the lowest possible gray value has to be determined from the already
processed adjacent basins in order to compute the new values. An 8-neighborhood is used to determine adjacency.
The found region is subtracted from the regions yet to process, and the whole process is repeated. Thus, the image
is “stripped” from the outside.
Analogously to watersheds, it is advisable to apply a smoothing operation before calling watersheds, e.g.,
binomial_filter or gauss_image, in order to reduce the amount of regions that result from the watershed
algorithm, and thus to speed up the processing time.
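From the description above, the result value of a pixel p can be read as (minimum over all paths to the image border of the maximum gray value on the path) − g(p): the value is 0 exactly when some path to the border contains only darker or equally bright pixels. This minimax-path view can be sketched with a Dijkstra-like search; the following hedged Python sketch illustrates the definition of the result values, not HALCON's watershed-based implementation (the function name and list-based image representation are assumptions). An 8-neighborhood is used, matching the adjacency stated above.

```python
import heapq

def gray_inside(image):
    """For each pixel, the 'cheapest' path cost to the image border is the
    minimum over all paths of the maximum gray value met on the way; the
    result is that cost minus the pixel's own gray value."""
    h, w = len(image), len(image[0])
    INF = float("inf")
    cost = [[INF] * w for _ in range(h)]
    heap = []
    # border pixels reach the border trivially; their cost is their own value
    for r in range(h):
        for c in range(w):
            if r in (0, h - 1) or c in (0, w - 1):
                cost[r][c] = image[r][c]
                heapq.heappush(heap, (cost[r][c], r, c))
    while heap:
        cur, r, c = heapq.heappop(heap)
        if cur > cost[r][c]:
            continue  # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                cand = max(cur, image[nr][nc])  # max gray value on this path
                if cand < cost[nr][nc]:
                    cost[nr][nc] = cand
                    heapq.heappush(heap, (cand, nr, nc))
    return [[cost[r][c] - image[r][c] for c in range(w)] for r in range(h)]
```

For a dark pixel completely enclosed by a bright ring, every path must cross the ring, so the result equals the ring brightness minus the pixel's own gray value; border pixels always yield 0.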
Example
read_image(Image,"coin");
gauss_image(Image,&GaussImage,11);
open_window (0,0,512,512,0,"visible","",&WindowHandle);
gray_inside(GaussImage,Result);
disp_image(Result,WindowHandle);
Result
gray_inside always returns H_MSG_TRUE.
Parallelization Information
gray_inside is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image, median_image
Possible Successors
select_shape, area_center, count_obj
See also
watersheds
Module
Foundation
Result
gray_skeleton returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_skeleton is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
mean_image
Alternatives
nonmax_suppression_amp, nonmax_suppression_dir, local_max
See also
skeleton, gray_dilation_rect
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Image whose gray values are to be transformed.
. ImageResult (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Transformed image.
. Lut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple : Hlong
Table containing the transformation.
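Conceptually, lut_trans replaces each gray value g by the table entry Lut[g]; the table must cover the full gray value range of the input (256 entries for byte images). A minimal Python sketch of this per-pixel lookup (not the HALCON implementation; the function name and list-based image representation are assumptions):

```python
def lut_trans(image, lut):
    """Map each gray value g to lut[g]; lut must cover the gray value
    range of the input (e.g., 256 entries for byte images)."""
    return [[lut[g] for g in row] for row in image]
```

For example, the table lut = [255 − i for i in range(256)] inverts a byte image.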
Example
Result
The operator lut_trans returns the value H_MSG_TRUE if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
lut_trans is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Module
Foundation
sym := 255 − (255 / MaskSize) · Σ_{i=1}^{MaskSize} ( |g(i) − g(−i)| / 255 )^Exponent
read_image(Image,’monkey’)
symmetry(Image,ImageSymmetry,70,0.0,0.5)
threshold(ImageSymmetry,SymmPoints,170,255)
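Assuming the symmetry measure compares gray values g(i) and g(−i) at mirrored offsets from the pixel being scored, the per-pixel score can be sketched as follows. This is a hedged Python illustration only; the profile extraction along the projection direction is omitted, and the function name is an assumption:

```python
def symmetry_score(profile_pos, profile_neg, exponent=0.5):
    """profile_pos[i-1] = g(i) and profile_neg[i-1] = g(-i): byte gray
    values at mirrored offsets i = 1..MaskSize from the center pixel.
    Returns sym = 255 - 255/MaskSize * sum((|g(i)-g(-i)|/255)^Exponent)."""
    mask_size = len(profile_pos)
    acc = sum((abs(a - b) / 255.0) ** exponent
              for a, b in zip(profile_pos, profile_neg))
    return 255.0 - 255.0 / mask_size * acc
```

Perfectly mirrored profiles give the maximum score 255; maximally dissimilar profiles drive the score toward 0.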
Result
If the parameter values are correct, the operator symmetry returns the value H_MSG_TRUE. The behavior
in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
symmetry is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
threshold
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte
Image for which the topographic primal sketch is to be computed.
. Sketch (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte
Label image containing the 11 classes.
Example
Complexity
Let n be the number of pixels in the image. Then O(n) operations are performed.
Result
topographic_sketch returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
topographic_sketch is reentrant and automatically parallelized (on tuple level, channel level).
Possible Successors
threshold
References
R. Haralick, L. Shapiro: “Computer and Robot Vision, Volume I”; Reading, Massachusetts, Addison-Wesley;
1992; Chapter 8.13.
Module
Foundation
3.12 Noise
T_add_noise_distribution ( const Hobject Image, Hobject *ImageNoise,
const Htuple Distribution )
Example
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
set_d(PerSalt,30.0,0);
set_d(PerPepper,30.0,0);
T_sp_distribution(PerSalt,PerPepper,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);
Result
add_noise_distribution returns H_MSG_TRUE if all parameters are correct. If the input is empty, the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
add_noise_distribution is reentrant and automatically parallelized (on tuple level, channel level, domain
level).
Possible Predecessors
gauss_distribution, sp_distribution, noise_distribution_mean
Alternatives
add_noise_white
See also
sp_distribution, gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation
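The combined effect of sp_distribution and add_noise_distribution in the example above can be sketched directly. This is an illustrative Python sketch, not the HALCON implementation: it assumes that PerSalt and PerPepper are the percentages of pixels set to 255 and 0, respectively, and it bypasses HALCON's tuple representation of the noise distribution; the function name and the seeded generator are assumptions made so the sketch is reproducible.

```python
import random

def add_sp_noise(image, percent_salt, percent_pepper, rng=None):
    """Replace percent_salt % of the pixels by 255 (salt) and
    percent_pepper % by 0 (pepper); all other pixels are unchanged."""
    rng = rng or random.Random(0)
    p_salt = percent_salt / 100.0
    p_pepper = percent_pepper / 100.0
    out = []
    for row in image:
        new_row = []
        for g in row:
            u = rng.random()
            if u < p_salt:
                new_row.append(255)
            elif u < p_salt + p_pepper:
                new_row.append(0)
            else:
                new_row.append(g)
        out.append(new_row)
    return out
```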
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
add_noise_white(Image,&ImageNoise,90.0);
disp_image(ImageNoise,WindowHandle);
Result
add_noise_white returns H_MSG_TRUE if all parameters are correct. If the input is empty, the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is
raised.
Parallelization Information
add_noise_white is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
add_noise_distribution
See also
add_noise_distribution, noise_distribution_mean, gauss_distribution,
sp_distribution
Module
Foundation
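Assuming (as the amplitude argument 90.0 in the example suggests) that add_noise_white adds noise drawn uniformly from [−Amplitude, Amplitude] to every pixel, with the result clipped to the byte range, the operation can be sketched as follows. This is a hedged Python illustration under that assumption, not the HALCON implementation; the seeded generator is an assumption made for reproducibility.

```python
import random

def add_noise_white(image, amplitude, rng=None):
    """Add uniformly distributed noise from [-amplitude, amplitude] to
    every pixel and clip the result to the byte range [0, 255]."""
    rng = rng or random.Random(0)
    return [[min(255, max(0, int(round(g + rng.uniform(-amplitude,
                                                       amplitude)))))
             for g in row] for row in image]
```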
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
set_d(Sigma,30.0,0);
T_gauss_distribution(Sigma,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);
Parallelization Information
gauss_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
sp_distribution, noise_distribution_mean
See also
sp_distribution, add_noise_white, noise_distribution_mean
Module
Foundation
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
create_tuple(&PerSalt,1);
set_d(PerSalt,30.0,0);
create_tuple(&PerPepper,1);
set_d(PerPepper,30.0,0);
T_sp_distribution(PerSalt,PerPepper,&Dist);
T_add_noise_distribution(Image,&ImageNoise,Dist);
disp_image(ImageNoise,WindowHandle);
Parallelization Information
sp_distribution is reentrant and processed without parallelization.
Possible Successors
add_noise_distribution
Alternatives
gauss_distribution, noise_distribution_mean
See also
gauss_distribution, noise_distribution_mean, add_noise_white
Module
Foundation
3.13 Optical-Flow
optical_flow_mg computes the optical flow between two images. The optical flow represents information
about the movement between two consecutive images of a monocular image sequence. The movement in the
images can be caused by objects that move in the world or by a movement of the camera (or both) between the
acquisition of the two images. The projection of these 3D movements into the 2D image plane is called the optical
flow.
The two consecutive images of the image sequence are passed in Image1 and Image2. The computed optical
flow is returned in VectorField. The vectors in the vector field VectorField represent the movement in the
image plane between Image1 and Image2. The point in Image2 that corresponds to the point (r, c) in Image1
is given by (r′, c′) = (r + u(r, c), c + v(r, c)), where u(r, c) and v(r, c) denote the value of the row and column
components of the vector field image VectorField at the point (r, c).
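The correspondence above can be applied pointwise; a minimal Python sketch (the function name and the representation of the flow as two separate 2D arrays for the row and column components are assumptions made for the example):

```python
def apply_flow(points, flow_u, flow_v):
    """Map points (r, c) in Image1 to their correspondences in Image2:
    (r', c') = (r + u(r, c), c + v(r, c))."""
    return [(r + flow_u[r][c], c + flow_v[r][c]) for (r, c) in points]
```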
The parameter Algorithm allows the selection of three different algorithms for computing the optical flow. All
three algorithms are implemented by using multigrid solvers to ensure an efficient solution of the underlying partial
differential equations.
For Algorithm = ’fdrig’, the method proposed by Brox, Bruhn, Papenberg, and Weickert is used. This approach
is flow-driven, robust, isotropic, and uses a gradient constancy term.
For Algorithm = ’ddraw’, a robust variant of the method proposed by Nagel and Enkelmann is used. This
approach is data-driven, robust, anisotropic, and uses warping (in contrast to the original approach).
For Algorithm = ’clg’ the combined local-global method proposed by Bruhn, Weickert, Feddern, Kohlberger,
and Schnörr is used.
In all three algorithms, the input images can first be smoothed by a Gaussian filter with a standard deviation of
SmoothingSigma (see derivate_gauss).
All three approaches are variational approaches that compute the optical flow as the minimizer of a suitable energy
functional. In general, the energy functionals have the following form:
E(w) = ED(w) + α ES(w) ,
where w = (u, v, 1) is the optical flow vector field to be determined (with a time step of 1 in the third coordinate).
The image sequence is regarded as a continuous function f (x), where x = (r, c, t) and (r, c) denotes the position
and t the time. Furthermore, ED (w) denotes the data term, while ES (w) denotes the smoothness term, and α is a
regularization parameter that determines the smoothness of the solution. The regularization parameter α is passed
in FlowSmoothness. While the data term encodes assumptions about the constancy of the object features in
consecutive images, e.g., the constancy of the gray values or the constancy of the first spatial derivative of the
gray values, the smoothness term encodes assumptions about the (piecewise) smoothness of the solution, i.e., the
smoothness of the vector field to be determined.
The FDRIG algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (r + u, c + v, t + 1) = f (r, c, t). This can be written more compactly as
f (x + w) = f (x) using vector notation.
Constancy of the spatial gray value derivatives: It is assumed that corresponding pixels in consecutive images of an
image sequence additionally have the same spatial gray value derivatives, i.e., that ∇2 f(r + u, c + v, t + 1) =
∇2 f(r, c, t) also holds, where ∇2 f = (∂r f, ∂c f). This can be written more compactly as ∇2 f(x + w) = ∇2 f(x).
In contrast to the gray value constancy, the gradient constancy has the advantage that it is invariant to additive global
illumination changes.
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic
penalization ΨD(s²) = s² is replaced by a linear penalization via ΨD(s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
Preservation of discontinuities in the flow field I: The solution is assumed to be piecewise smooth. While the actual
smoothness is achieved by penalizing the first derivatives of the flow, |∇2 u|² + |∇2 v|², the use of a statistically
robust (linear) penalty function ΨS(s²) = √(s² + ε²) with ε = 0.001 provides the desired preservation of edges in
the movement in the flow field to be determined. This type of smoothness term is called flow-driven and isotropic.
Taking into account all of the above assumptions, the energy functional of the FDRIG algorithm can be written as

EFDRIG(w) = ∫ ΨD( |f(x + w) − f(x)|² + γ |∇2 f(x + w) − ∇2 f(x)|² ) dr dc
           + α ∫ ΨS( |∇2 u(x)|² + |∇2 v(x)|² ) dr dc ,

where the first integral penalizes deviations from the gray value constancy and the gradient constancy, and the
second integral represents the smoothness assumption.
Here, α is the regularization parameter passed in FlowSmoothness, while γ is the gradient constancy weight
passed in GradientConstancy. These two parameters, which constitute the model parameters of the FDRIG
algorithm, are described in more detail below.
The DDRAW algorithm is based on the minimization of an energy functional that contains the following assump-
tions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Large displacements: It is assumed that large displacements, i.e., displacements larger than one pixel, occur. Under
this assumption, it makes sense to consciously abstain from using the linearization of the constancy assumptions
in the model that is typically proposed in the literature.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner, i.e., the customary non-robust quadratic
penalization ΨD(s²) = s² is replaced by a linear penalization via ΨD(s²) = √(s² + ε²), where ε = 0.001 is a fixed
regularization constant.
Preservation of discontinuities in the flow field II: The solution is assumed to be piecewise smooth. In contrast to
the FDRIG algorithm, which allows discontinuities everywhere, the DDRAW algorithm only allows discontinuities
at the edges in the original image. Here, the local smoothness is controlled in such a way that the flow field is sharp
across image edges, while it is smooth along the image edges. This type of smoothness term is called data-driven
and anisotropic.
All assumptions of the DDRAW algorithm can be combined into the following energy functional:

EDDRAW(w) = ∫ ΨD( |f(x + w) − f(x)|² ) dr dc
           + α ∫ ( ∇2 u(x)ᵀ PNE(∇2 f(x)) ∇2 u(x) + ∇2 v(x)ᵀ PNE(∇2 f(x)) ∇2 v(x) ) dr dc ,

where the first integral encodes the gray value constancy and the second integral the smoothness assumption,
and where PNE(∇2 f(x)) is a normalized projection matrix orthogonal to ∇2 f(x), for which

PNE(∇2 f) = 1 / (|∇2 f|² + 2 εS²) · ( ∇2 f⊥ (∇2 f⊥)ᵀ + εS² I )

holds, where ∇2 f⊥ denotes the vector perpendicular to ∇2 f and I denotes the unit matrix. This matrix ensures
that the smoothness of the flow field is only assumed along the image edges. In contrast, no assumption is made
with respect to the smoothness across the image edges, so that discontinuities in the solution may occur across
the image edges. In this respect, εS = 0.001 serves as a regularization parameter that prevents the projection
matrix PNE(∇2 f(x)) from becoming singular. In contrast to
the FDRIG algorithm, there is only one model parameter for the DDRAW algorithm: the regularization parameter
α. As mentioned above, α is described in more detail below.
As for the two approaches described above, the CLG algorithm uses certain assumptions:
Constancy of the gray values: It is assumed that corresponding pixels in consecutive images of an image sequence
have the same gray value, i.e., that f (x + w) = f (x).
Small displacements: In contrast to the two approaches above, it is assumed that only small displacements can
occur, i.e., displacements in the order of a few pixels. This facilitates a linearization of the constancy assumptions
in the model, and leads to the approximation f(x) + ∇3 f(x)ᵀ w(x) = f(x), i.e., ∇3 f(x)ᵀ w(x) = 0 should
hold. Here, ∇3 f(x) denotes the gradient in the spatial as well as the temporal domain.
Local constancy of the solution: Furthermore, it is assumed that the flow field to be computed is locally constant.
This facilitates the integration of the image data in the data term over the respective neighborhood of each pixel.
This, in turn, increases the robustness of the algorithm against noise. Mathematically, this can be achieved by
reformulating the quadratic data term as (∇3 f(x)ᵀ w(x))² = w(x)ᵀ ∇3 f(x) ∇3 f(x)ᵀ w(x). By performing a
local Gaussian-weighted integration over a neighborhood specified by the integration scale ρ (passed in
IntegrationSigma), the following data term is obtained: w(x)ᵀ (Gρ ∗ (∇3 f(x) ∇3 f(x)ᵀ)) w(x). Here, Gρ ∗ . . .
denotes a convolution of the 3 × 3 matrix ∇3 f(x) ∇3 f(x)ᵀ with a Gaussian filter with a standard deviation of ρ
(see derivate_gauss).
General smoothness of the flow field: Finally, the solution is assumed to be smooth everywhere in the image. This
particular type of smoothness term is called homogeneous.
All of the above assumptions can be combined into the following energy functional:

ECLG(w) = ∫ w(x)ᵀ ( Gρ ∗ (∇3 f(x) ∇3 f(x)ᵀ) ) w(x) dr dc + α ∫ ( |∇2 u(x)|² + |∇2 v(x)|² ) dr dc ,

where the first integral encodes the gray value constancy and the second integral the smoothness assumption.
The corresponding model parameters are the regularization parameter α as well as the integration scale ρ (passed
in IntegrationSigma), which determines the size of the neighborhood over which to integrate the data term.
These two parameters are described in more detail below.
To compute the optical flow vector field for two consecutive images of an image sequence with the FDRIG,
DDRAW, or CLG algorithm, the solution that best fulfills the assumptions of the respective algorithm must be
determined. From a mathematical point of view, this means that a minimization of the above energy functionals
should be performed. For the FDRIG and DDRAW algorithms, so called coarse-to-fine warping strategies play an
important role in this minimization, because they enable the calculation of large displacements. Thus, they are a
suitable means to handle the omission of the linearization of the constancy assumptions numerically in these two
approaches.
To calculate large displacements, coarse-to-fine warping strategies use two concepts that are closely interlocked:
The successive refinement of the problem (coarse-to-fine) and the successive compensation of the current image
pair by already computed displacements (warping). Algorithmically, such coarse-to-fine warping strategies can be
described as follows:
1. First, both images of the current image pair are zoomed down to a very coarse resolution level.
2. Then, the optical flow vector field is computed on this coarse resolution.
3. The vector field is propagated to the next finer resolution level, where it is applied to the second image of the
image sequence, i.e., the problem on the finer resolution level is compensated by the already computed optical
flow field. This step is also known as warping.
4. The modified problem (difference problem) is now solved on the finer resolution level, i.e., the optical flow
vector field is computed there.
5. Steps 3 and 4 are repeated until the finest resolution level is reached.
6. The final result is computed by adding up the vector fields from all resolution levels.
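The six steps above can be sketched structurally as follows. This hedged Python sketch shows only the control flow of coarse-to-fine warping, not HALCON's implementation: the increment solver is a zero-returning stub where a real solver would minimize the respective energy functional, only the row component of the flow is modeled, the warping is a crude nearest-neighbor row shift, and 'warp_zoom_factor' is fixed at 0.5. All function names are assumptions.

```python
def downsample(img):
    # halve the resolution by 2x2 averaging ('warp_zoom_factor' = 0.5)
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def upsample_flow(u):
    # double the resolution AND the displacement magnitudes
    return [[2.0 * u[r // 2][c // 2] for c in range(2 * len(u[0]))]
            for r in range(2 * len(u))]

def solve_increment(img1, img2_warped):
    # placeholder: a real solver computes the flow of the difference problem
    return [[0.0] * len(img1[0]) for _ in img1]

def warp(img, u):
    # placeholder warping: compensate img by the row flow u (nearest neighbor)
    h, w = len(img), len(img[0])
    return [[img[min(h - 1, max(0, int(round(r + u[r][c]))))][c]
             for c in range(w)] for r in range(h)]

def coarse_to_fine_flow(img1, img2, levels):
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):                     # step 1: build pyramids
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    u = [[0.0] * len(pyr1[-1][0]) for _ in pyr1[-1]]
    for level in reversed(range(levels)):
        i2_comp = warp(pyr2[level], u)              # step 3: warping
        du = solve_increment(pyr1[level], i2_comp)  # steps 2/4: increment
        u = [[a + b for a, b in zip(ra, rb)]        # step 6: accumulate
             for ra, rb in zip(u, du)]
        if level > 0:
            u = upsample_flow(u)                    # step 5: next level
    return u
```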
This incremental computation of the optical flow vector field has the following advantage: While the coarse-to-fine
strategy ensures that the displacements on the finest resolution level are very small, the warping strategy ensures
that the displacements remain small for the incremental displacements (optical flow vector fields of the difference
problems). Since small displacements can be computed much more accurately than larger displacements, the
accuracy of the results typically increases significantly by using such a coarse-to-fine warping strategy. However,
instead of having to solve a single correspondence problem, an entire hierarchy of these problems must now be
solved. For the CLG algorithm, such a coarse-to-fine warping strategy is unnecessary since the model already
assumes small displacements.
The maximum number of resolution levels (warping levels), the resolution ratio between two consecutive resolution
levels, as well as the finest resolution level can be specified for the FDRIG as well as the DDRAW algorithm.
Details can be found below.
The minimization of functionals is mathematically very closely related to the minimization of functions: Just as
a zero crossing of the first derivative is a necessary condition for the minimum of a function, the
fulfillment of the so-called Euler-Lagrange equations is a necessary condition for the minimizing function of a
functional (the minimizing function corresponds to the desired optical flow vector field in this case). The Euler-
Lagrange equations are partial differential equations. By discretizing these Euler-Lagrange equations using finite
differences, large sparse nonlinear equation systems result for the FDRIG and DDRAW algorithms. Because
coarse-to-fine warping strategies are used, such an equation system must be solved for each resolution level, i.e.,
for each warping level. For the CLG algorithm, a single sparse linear equation system must be solved.
To ensure that the above nonlinear equation systems can be solved efficiently, the FDRIG and DDRAW algorithms
use bidirectional multigrid methods. From a numerical point of view, these strategies are among the fastest methods for
solving large linear and nonlinear equation systems. In contrast to conventional nonhierarchical iterative methods,
e.g., the different linear and nonlinear Gauss-Seidel variants, the multigrid methods have the advantage that correc-
tions to the solution can be determined efficiently on coarser resolution levels. This, in turn, leads to a significantly
faster convergence. The basic idea of multigrid methods additionally consists of hierarchically computing these
correction steps, i.e., the computation of the error on a coarser resolution level itself uses the same strategy and
efficiently computes its error (i.e., the error of the error) by correction steps on an even coarser resolution level.
Depending on whether one or two error correction steps are performed per cycle, a so called V or W cycle is
obtained. The corresponding strategies for stepping through the resolution hierarchy are as follows for two to four
resolution levels:
[Figure: V-cycles and W-cycles stepping through resolution hierarchies of two to four levels, from fine (level 1)
to coarse (level 4).]
Here, iterations on the original problem are denoted by large markers, while small markers denote iterations on
error correction problems.
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g.,
a variant of the Gauss-Seidel solver). This step is called the pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called the coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are
performed. This step is called the post-relaxation step.
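The correction cycle described in steps 1-3 can be illustrated on a simple model problem. The following hedged Python sketch applies a V cycle (pre-relaxation, coarse grid correction, post-relaxation) to the 1D Poisson equation −u'' = f with zero Dirichlet boundaries, using Gauss-Seidel relaxation, full-weighting restriction, and linear interpolation; the model problem and discretization are assumptions made for illustration and are not the flow equations solved by the operator.

```python
def relax(u, f, h, sweeps):
    # Gauss-Seidel sweeps for -u'' = f (zero Dirichlet boundaries)
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (h * h * f[i] + left + right)

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def v_cycle(u, f, h, pre=2, post=2):
    if len(u) <= 3:                # coarsest level: relax until almost exact
        relax(u, f, h, 50)
        return u
    relax(u, f, h, pre)                            # pre-relaxation step
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2                         # coarse grid size
    rc = [0.25 * (r[2*j] + 2.0 * r[2*j+1] + r[2*j+2]) for j in range(nc)]
    ec = v_cycle([0.0] * nc, rc, 2.0 * h)          # coarse grid correction
    for j in range(nc):                            # interpolate and correct
        u[2*j + 1] += ec[j]
        u[2*j] += 0.5 * (ec[j] + (ec[j - 1] if j > 0 else 0.0))
    u[len(u) - 1] += 0.5 * ec[nc - 1]
    relax(u, f, h, post)                           # post-relaxation step
    return u
```

Even a single V cycle typically reduces the residual far more than the same number of plain Gauss-Seidel sweeps, which is the efficiency argument made above.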
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original (non)linear equation system, the solution is successively refined. To do so, interpolated solutions of
coarser variants of the equation system are used as the initialization of the next finer variant. On each resolution
level itself, the V or W cycles described above are used to efficiently solve the (non)linear equation system on that
resolution level. The corresponding multigrid methods are called full multigrid methods in the literature. The full
multigrid algorithm can be visualized as follows:
[Figure: full multigrid algorithm with hierarchical coarse-to-fine initialization, traversing the resolution levels
from coarse to fine.]
This example represents a full multigrid algorithm that uses two W correction cycles per resolution level of the
hierarchical initialization. The interpolation steps of the solution from one resolution level to the next are denoted
by i and the two W correction cycles by w1 and w2 . Iterations on the original problem are denoted by large markers,
while small markers denote iterations on error correction problems.
In the multigrid implementation of the FDRIG, DDRAW, and CLG algorithm, the following parameters can be
set: whether a hierarchical initialization is performed; the number of coarse grid correction steps; the maximum
number of correction levels (resolution levels); the number of pre-relaxation steps; the number of post-relaxation
steps. These parameters are described in more detail below.
The basic solver for the FDRIG algorithm is a point-coupled fixed-point variant of the linear Gauss-Seidel algo-
rithm. The basic solver for the DDRAW algorithm is an alternating line-coupled fixed-point variant of the same
type. The number of fixed-point steps can be specified for both algorithms with a further parameter. The basic
solver for the CLG algorithm is a point-coupled linear Gauss-Seidel algorithm. The transfer of the data between
the different resolution levels is performed by area-based interpolation and area-based averaging, respectively.
After the algorithms have been described, the effects of the individual parameters are discussed in the following.
The input images, along with their domains (regions of interest) are passed in Image1 and Image2. The com-
putation of the optical flow vector field VectorField is performed on the smallest surrounding rectangle of the
intersection of the domains of Image1 and Image2. The domain of VectorField is the intersection of the
two domains. Hence, by specifying reduced domains for Image1 and Image2, the processing can be focused
and runtime can potentially be saved. It should be noted, however, that all methods compute a global solution of
the optical flow. In particular, it follows that the solution on a reduced domain need not (and cannot) be identical
to the solution on the full domain restricted to the reduced domain.
SmoothingSigma specifies the standard deviation of the Gaussian kernel that is used to smooth both input
images. The larger the value of SmoothingSigma, the larger the low-pass effect of the Gaussian kernel, i.e., the
smoother the preprocessed image. Usually, SmoothingSigma = 0.8 is a suitable choice. However, other values
in the interval [0, 2] are also possible. Larger standard deviations should only be considered if the input images are
very noisy. It should be noted that larger values of SmoothingSigma lead to slightly longer execution times.
IntegrationSigma specifies the standard deviation ρ of the Gaussian kernel Gρ that is used for the local
integration of the neighborhood information of the data term. This parameter is used only in the CLG algorithm and
has no effect on the other two algorithms. Usually, IntegrationSigma = 1.0 is a suitable choice. However,
other values in the interval [0, 3] are also possible. Larger standard deviations should only be considered if the
input images are very noisy. It should be noted that larger values of IntegrationSigma lead to slightly longer
execution times.
FlowSmoothness specifies the weight α of the smoothness term with respect to the data term. The larger the
value of FlowSmoothness, the smoother the computed optical flow field. It should be noted that choosing
FlowSmoothness too small can lead to unusable results, even though statistically robust penalty functions are
used, in particular if the warping strategy needs to predict too much information outside of the image. For byte
images with a gray value range of [0, 255], values of FlowSmoothness around 20 for the flow-driven FDRIG
algorithm and around 1000 for the data-driven DDRAW algorithm and the homogeneous CLG algorithm typically
yield good results.
GradientConstancy specifies the weight γ of the gradient constancy with respect to the gray value constancy.
This parameter is used only in the FDRIG algorithm. For the other two algorithms, it does not influence the results.
For byte images with a gray value range of [0, 255], a value of GradientConstancy = 5 is typically a good
choice, since then both constancy assumptions are used to the same extent. For large changes in illumination, how-
ever, significantly larger values of GradientConstancy may be necessary to achieve good results. It should be
noted that for large values of the gradient constancy weight the smoothness parameter FlowSmoothness must
also be chosen larger.
The parameters of the multigrid solver and for the coarse-to-fine warping strategy can be specified with the
generic parameters MGParamName and MGParamValue. Usually, it suffices to use one of the four default
parameter sets via MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. The default parameter sets are described below. If the parameters should be speci-
fied individually, MGParamName and MGParamValue must be set to tuples of the same length. The values
corresponding to the parameters specified in MGParamName must be specified at the corresponding position in
MGParamValue.
MGParamName = ’warp_zoom_factor’ can be used to specify the resolution ratio between two consecutive warp-
ing levels in the coarse-to-fine warping hierarchy. ’warp_zoom_factor’ must be selected from the open interval
(0, 1). For performance reasons, ’warp_zoom_factor’ is typically set to 0.5, i.e., the number of pixels is halved in
each direction for each coarser warping level. This leads to an increase of 33% in the calculations that need to be
performed with respect to an algorithm that does not use warping. Values for ’warp_zoom_factor’ close to 1 can
lead to slightly better results. However, they require a disproportionately larger computation time, e.g., 426% for
’warp_zoom_factor’ = 0.9.
MGParamName = ’warp_levels’ can be used to restrict the warping hierarchy to a maximum number of levels.
For ’warp_levels’ = 0, the largest possible number of levels is used. If the image size does not permit the use of
the specified number of levels (taking the resolution ratio ’warp_zoom_factor’ into account), the largest possible
number of levels is used. Usually, ’warp_levels’ should be set to 0.
MGParamName = ’warp_last_level’ can be used to specify the number of warping levels for which the flow
increment should no longer be computed. Usually, ’warp_last_level’ is set to 1 or 2, i.e., a flow increment is
computed for each warping level, or the finest warping level is skipped in the computation. Since in the latter case
the computation is performed on an image of half the resolution of the original image, the gained computation
time can be used to compute a more accurate solution, e.g., by using a full multigrid algorithm with additional
iterations. The more accurate solution is then interpolated to the full resolution.
The three parameters that specify the coarse-to-fine warping strategy are only used in the FDRIG and DDRAW
algorithms. They are ignored for the CLG algorithm.
MGParamName = ’mg_solver’ can be used to specify the general multigrid strategy for solving the (non)linear
equation system (in each warping level). For ’mg_solver’ = ’multigrid’, a normal multigrid algorithm (without
coarse-to-fine initialization) is used, while for ’mg_solver’ = ’full_multigrid’ a full multigrid algorithm (with
coarse-to-fine initialization) is used. Since a resolution reduction of 0.5 is used between two consecutive levels of
the coarse-to-fine initialization (in contrast to the resolution reduction in the warping strategy, this value is hard-
coded into the algorithm), the use of a full multigrid algorithm results in an increase of the computation time by
approximately 33% with respect to the normal multigrid algorithm. Setting ’mg_solver’ = ’full_multigrid’ typically
yields numerically more accurate results than ’mg_solver’ = ’multigrid’.
MGParamName = ’mg_cycle_type’ can be used to specify whether a V or W correction cycle is used per multigrid
level. Since a resolution reduction of 0.5 is used between two consecutive levels of the respective correction cycle,
using a W cycle instead of a V cycle increases the computation time by approximately 50%. Using ’mg_cycle_type’
= ’w’ typically yields numerically more accurate results than ’mg_cycle_type’ = ’v’.
MGParamName = ’mg_levels’ can be used to restrict the multigrid hierarchy for the coarse-to-fine initialization
as well as for the actual V or W correction cycles. For ’mg_levels’ = 0, the largest possible number of levels is
used. If the image size does not permit the use of the specified number of levels, the largest possible number of
levels is used. Usually, ’mg_levels’ should be set to 0.
MGParamName = ’mg_cycles’ can be used to specify the total number of V or W correction cycles that are being
performed. If a full multigrid algorithm is used, ’mg_cycles’ refers to each level of the coarse-to-fine initialization.
Usually, one or two cycles are sufficient to yield a sufficiently accurate solution of the equation system. Typically,
the larger ’mg_cycles’, the more accurate the numerical results. This parameter enters almost linearly into the
computation time, i.e., doubling the number of cycles leads approximately to twice the computation time.
MGParamName = ’mg_pre_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver before the actual error correction is performed.
Usually, one or two pre-relaxation steps are sufficient. Typically, the larger ’mg_pre_relax’, the more accurate the
numerical results.
MGParamName = ’mg_post_relax’ can be used to specify the number of iterations that are performed on each
level of the V or W correction cycles using the iterative basic solver after the actual error correction is performed.
Usually, one or two post-relaxation steps are sufficient. Typically, the larger ’mg_post_relax’, the more accurate
the numerical results.
As with the number of correction cycles, increasing the number of pre- and post-relaxation steps increases the
computation time asymptotically linearly. However, no additional restriction and prolongation operations
(zooming down and up of the error correction images) are performed. Consequently, a moderate increase in
the number of relaxation steps only leads to a slight increase in the computation times.
MGParamName = ’mg_inner_iter’ can be used to specify the number of iterations to solve the linear equation
systems in each fixed-point iteration of the nonlinear basic solver. Usually, one iteration is sufficient to achieve a
sufficient convergence speed of the multigrid algorithm. The increase in computation time is slightly smaller than
for the increase in the relaxation steps. This parameter only influences the FDRIG and DDRAW algorithms since
for the CLG algorithm no nonlinear equation system needs to be solved.
As described above, usually it is sufficient to use one of the default parameter sets for the parameters described
above by using MGParamName = ’default_parameters’ and MGParamValue = ’very_accurate’, ’accurate’,
’fast_accurate’, or ’fast’. If necessary, individual parameters can be modified after the default parameter set has
been chosen by specifying a subset of the above parameters and corresponding values after ’default_parameters’ in
MGParamName and MGParamValue (e.g., MGParamName = [’default_parameters’,’warp_zoom_factor’] and
MGParamValue = [’accurate’,0.6]).
The default parameter sets use the following values for the above parameters:
’default_parameters’ = ’very_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 1,
’mg_solver’ = ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast_accurate’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2,
’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 2,
’mg_post_relax’ = 2, ’mg_inner_iter’ = 1.
’default_parameters’ = ’fast’: ’warp_zoom_factor’ = 0.5, ’warp_levels’ = 0, ’warp_last_level’ = 2, ’mg_solver’
= ’multigrid’, ’mg_cycle_type’ = ’v’, ’mg_levels’ = 0, ’mg_cycles’ = 1, ’mg_pre_relax’ = 1, ’mg_post_relax’ =
1, ’mg_inner_iter’ = 1.
It should be noted that for the CLG algorithm the two modes ’fast_accurate’ and ’fast’ are identical to the modes
’very_accurate’ and ’accurate’ since the CLG algorithm does not use a coarse-to-fine warping strategy.
Parameter
Result
If the parameter values are correct, the operator optical_flow_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
optical_flow_mg is reentrant and automatically parallelized (on tuple level).
Possible Successors
threshold, vector_field_length
See also
unwarp_image_vector_field
References
T. Brox, A. Bruhn, N. Papenberg, and J. Weickert: High accuracy optic flow estimation based on a theory for
warping. In T. Pajdla and J. Matas, editors, Computer Vision - ECCV 2004, volume 3024 of Lecture Notes in
Computer Science, pages 25–36. Springer, Berlin, 2004.
A. Bruhn, J. Weickert, C. Feddern, T. Kohlberger, and C. Schnörr: Variational optical flow computation in real-
time. IEEE Transactions on Image Processing, 14(5):608-615, May 2005.
H.-H. Nagel and W. Enkelmann: An investigation of smoothness constraints for the estimation of displacement
vector fields from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5):565-
593, September 1986.
Ulrich Trottenberg, Cornelis Oosterlee, Anton Schüller: Multigrid. Academic Press, Inc., San Diego, 2000.
Module
Foundation
vector field image represents an inverse transformation from the destination image of the vector field to the source
image.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : byte / uint2 / real
Input image
. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject : vector_field
Input vector field
. ImageUnwarped (output_object) . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte / uint2 / real
Unwarped image.
Example (Syntax: HDevelop)
Result
If the parameter values are correct, the operator unwarp_image_vector_field returns the value
H_MSG_TRUE. If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
unwarp_image_vector_field is reentrant and automatically parallelized (on domain level, tuple level).
Possible Predecessors
optical_flow_mg
Module
Foundation
Possible Predecessors
optical_flow_mg
Possible Successors
threshold
Module
Foundation
3.14 Points
corner_response ( const Hobject Image, Hobject *ImageCorner,
Hlong Size, double Weight )
R(x, y) = A(x, y) · B(x, y) − C²(x, y) − Weight · (A(x, y) + B(x, y))²
A(x, y) = W(u, v) ∗ (∇x I(x, y))²
B(x, y) = W(u, v) ∗ (∇y I(x, y))²
C(x, y) = W(u, v) ∗ (∇x I(x, y) · ∇y I(x, y))
where I is the input image and R the output image of the filter. The operator gauss_image is used for smoothing
(W ), the operator sobel_amp is used for calculating the derivative (∇).
The corner response function is invariant with regard to rotation. In order to achieve a suitable dependency of the
function R(x, y) on the local gradient, the parameter Weight must be set to 0.04. With this, only gray value
corners will return positive values for R(x, y), while straight edges will receive negative values. The output image
type is identical to the input image type. Therefore, the negative output values are set to 0 if byte images are
used as input images. If this is not desired, the input image should be converted into a real or int2 image with
convert_image_type.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / real
Input image.
. ImageCorner (output_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / real
Result of the filtering.
Number of elements : ImageCorner = Image
. Size (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Desired filter size of the gray value mask.
Default Value : 3
List of values : Size ∈ {3, 5, 7, 9, 11}
. Weight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Weighting.
Default Value : 0.04
Typical range of values : 0.0 ≤ Weight ≤ 0.3
Minimum Increment : 0.001
Recommended Increment : 0.01
Example
read_image(&Fabrik,"fabrik");
corner_response(Fabrik,&CornerResponse,3,0.04);
local_max(CornerResponse,&LocalMax);
disp_image(Fabrik,WindowHandle);
set_color(WindowHandle,"red");
disp_region(LocalMax,WindowHandle);
Parallelization Information
corner_response is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
local_max, threshold
See also
gauss_image, sobel_amp, convert_image_type
References
C.G. Harris, M.J. Stephens: “A combined corner and edge detector”; Proc. of the 4th Alvey Vision Conference;
August 1988; pp. 147-152.
H. Breit: “Bestimmung der Kameraeigenbewegung und Gewinnung von Tiefendaten aus monokularen
Bildfolgen”; Diplomarbeit am Lehrstuhl für Nachrichtentechnik der TU München; 30. September 1990.
Module
Foundation
The parameter FilterType selects whether dark, light, or all dots in the image should be enhanced. The
PixelShift can be used either to increase the contrast of the output image (PixelShift > 0) or to dampen
the values in extremely bright areas that would otherwise be cut off (PixelShift = −1).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. DotImage (output_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Output image.
. Diameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Diameter of the dots to be enhanced.
Default Value : 5
List of values : Diameter ∈ {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23}
. FilterType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Enhance dark, light, or all dots.
Default Value : "light"
List of values : FilterType ∈ {"dark", "light", "all"}
is calculated, where Ix,c and Iy,c are the first derivatives of each image channel and S stands for a smoothing.
If Smoothing is ’gauss’, the derivatives are computed with Gaussian derivatives of size SigmaGrad and the
smoothing is performed by a Gaussian of size SigmaInt. If Smoothing is ’mean’, the derivatives are computed
with a 3 × 3 Sobel filter (and hence SigmaGrad is ignored) and the smoothing is performed by a SigmaInt ×
SigmaInt mean filter. Then
inhomogeneity = Trace(M)

is the inhomogeneity of the texture and

isotropy = 4 · Det(M) / (Trace(M))²

is the degree of the isotropy of the texture in the image. Image points that have an inhomogeneity greater than or
equal to ThreshInhom and at the same time an isotropy greater than or equal to ThreshShape are subsequently
examined further.
In the second step, two optimization functions are calculated for the resulting points. Essentially, these optimiza-
tion functions average for each point the distances to the edge directions (for junction points) and the gradient
directions (for area points) within an observation window around the point. If Smoothing is ’gauss’, the aver-
aging is performed by a Gaussian of size SigmaPoints, if Smoothing is ’mean’, the averaging is performed
by a SigmaPoints × SigmaPoints mean filter. The local minima of the optimization functions determine
the extracted points. Their subpixel precise position is returned in (RowJunctions, ColJunctions) and
(RowArea, ColArea).
In addition to their position, for each extracted point the elements CoRRJunctions, CoRCJunctions, and
CoCCJunctions (and CoRRArea, CoRCArea, and CoCCArea, respectively) of the corresponding covariance
matrix are returned. This matrix facilitates conclusions about the precision of the calculated point position. To
obtain the actual values, it is necessary to estimate the amount of noise in the input image and to multiply all
components of the covariance matrix with the variance of the noise. (To estimate the amount of noise, apply
intensity to homogeneous image regions or plane_deviation to image regions where the gray values
form a plane. In both cases the amount of noise is returned in the parameter Deviation.) This is illustrated by the
example program
%HALCONROOT%\examples\hdevelop\Filter\Points\points_foerstner_ellipses.dev.
It lies in the nature of this operator that corners often result in two distinct points: one junction point, where the
edges of the corner actually meet, and one area point inside the corner. Such doublets will be eliminated
automatically if EliminateDoublets is ’true’. To do so, each pair of one junction point and one area point is examined.
If the points lie within each others’ observation window of the optimization function, for both points the precision
of the point position is calculated and the point with the lower precision is rejected. If EliminateDoublets is
’false’, every detected point is returned.
Attention
Note that only odd values for SigmaInt and SigmaPoints are allowed if Smoothing is ’mean’. Even
values will automatically be replaced by the next larger odd value.
Parameter
C. Fuchs: “Extraktion polymorpher Bildstrukturen und ihre topologische und geometrische Gruppierung”. Volume
502, Series C, Deutsche Geodätische Kommission, München, 1998.
Module
Foundation
where Gσ stands for a Gaussian smoothing of size SigmaSmooth and Ix,c and Iy,c are the first derivatives of
each image channel, computed with Gaussian derivatives of size SigmaGrad. The resulting points are the positive
local extrema of
If necessary, they can be restricted to points with a minimum filter response of Threshold. The coordinates of
the points are calculated with subpixel accuracy.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2 / real
Input image.
. SigmaGrad (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Amount of smoothing used for the calculation of the gradient.
Default Value : 0.7
Suggested values : SigmaGrad ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaGrad ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaGrad > 0.0
. SigmaSmooth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Amount of smoothing used for the integration of the gradients.
Default Value : 2.0
Suggested values : SigmaSmooth ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Typical range of values : 0.7 ≤ SigmaSmooth ≤ 50.0
Recommended Increment : 0.1
Restriction : SigmaSmooth > 0.0
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Weight of the squared trace of the squared gradient matrix.
Default Value : 0.04
Suggested values : Alpha ∈ {0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08}
Typical range of values : 0.001 ≤ Alpha ≤ 0.1
Minimum Increment : 0.001
Recommended Increment : 0.01
Restriction : Alpha > 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Minimum filter response for the points.
Default Value : 0.0
Restriction : Threshold ≥ 0.0
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
3.15 Smoothing
read_image(&Image,"fabrik");
anisotrope_diff(Image,&Aniso,80,1,5,8);
sub_image(Image,Aniso,&Sub,2.0,127.0);
disp_image(Sub,WindowHandle);
Complexity
For each pixel: O(Iterations ∗ 18).
Result
If the parameter values are correct, the operator anisotrope_diff returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
anisotrope_diff is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
sigma_image, rank_image
See also
smooth_image, binomial_filter, gauss_image, sigma_image, rank_image,
eliminate_min_max
References
P. Perona, J. Malik: “Scale-space and edge detection using anisotropic diffusion”; IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol. 12, No. 7, July 1990.
Module
Foundation
ut = div(g(|∇u|2 , c)∇u)
with the initial value u = u0 defined by Image at a time t0 . The equation is iterated Iterations times in
time steps of length Theta, so that the output image ImageAniso contains the gray value function at the time
t0 + Iterations · Theta.
The goal of the anisotropic diffusion is the elimination of image noise in constant image patches while preserv-
ing the edges in the image. The distinction between edges and constant patches is achieved using the threshold
Contrast on the size of the gray value differences between adjacent pixels. Contrast is referred to as the
contrast parameter and abbreviated with the letter c.
The variable diffusion coefficient g can be chosen to follow different monotonically decreasing functions with
values between 0 and 1 and determines the response of the diffusion process to an edge. With the parameter Mode,
the following functions can be selected:
g1(x, c) = 1 / √(1 + 2x/c²)
Choosing the function g1 by setting Mode to ’parabolic’ guarantees that the associated differential equation is
parabolic, so that a well-posedness theory exists for the problem and the procedure is stable for an arbitrary step
size Theta. In this case however, there remains a slight diffusion even across edges of a height larger than c.
g2(x, c) = 1 / (1 + x/c²)
The choice of ’perona-malik’ for Mode, as used in the publication of Perona and Malik, does not possess the
theoretical properties of g1 , but in practice it has proved to be sufficiently stable and is thus widely used. The
theoretical instability results in a slight sharpening of strong edges.
g3(x, c) = 1 − exp(−C · c⁸/x⁴)
The function g3 with the constant C = 3.31488, proposed by Weickert and selectable by setting Mode to
’weickert’, is an improvement of g2 with respect to edge sharpening. The transition between smoothing and
sharpening happens very abruptly at x = c².
Parameter
as follows:
b_ij = C(m−1, i) · C(n−1, j) / 2^(m+n−2)

where C(a, b) denotes the binomial coefficient “a over b”. Here, i = 0, . . . , m − 1 and j = 0, . . . , n − 1. The
binomial filter performs approximately the same smoothing as a Gaussian filter with σ = √(n − 1)/2, where for
simplicity it is assumed that m = n. In detail, the relationship
between n and σ is:
n σ
3 0.7523
5 1.0317
7 1.2505
9 1.4365
11 1.6010
13 1.7502
15 1.8876
17 2.0157
19 2.1361
21 2.2501
23 2.3586
25 2.4623
27 2.5618
29 2.6576
31 2.7500
33 2.8395
35 2.9262
37 3.0104
If different values are chosen for MaskHeight and MaskWidth, the above relation between n and σ still holds
and refers to the amount of smoothing in the row and column directions.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageBinomial (output_object) . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter width.
Default Value : 5
List of values : MaskWidth ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Filter height.
Default Value : 5
List of values : MaskHeight ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37}
Result
If the parameter values are correct, the operator binomial_filter returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
binomial_filter is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
gauss_image, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Image to smooth.
. FilteredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskWidth ≤ width(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9}
Typical range of values : 3 ≤ MaskHeight ≤ height(Image)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. Gap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Gap between local maximum/minimum and all other gray values of the neighborhood.
Default Value : 1.0
Suggested values : Gap ∈ {1.0, 2.0, 5.0, 10.0}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Replacement rule (1 = next minimum/maximum, 2 = average, 3 = median).
Default Value : 3
List of values : Mode ∈ {1, 2, 3}
Result
eliminate_min_max returns H_MSG_TRUE if all parameters are correct. If the input is empty
eliminate_min_max returns with an error message.
Parallelization Information
eliminate_min_max is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
wiener_filter, wiener_filter_ni
See also
mean_sp, mean_image, median_image, median_weighted, binomial_filter,
gauss_image, smooth_image
References
M. Imme: “A Noise Peak Elimination Filter”; pp. 204-211 in CVGIP: Graphical Models and Image Processing,
Vol. 53, No. 2, March 1991.
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit;
Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1995.
Module
Foundation
The operator eliminate_sp replaces all gray values outside the indicated gray value interval (MinThresh
to MaxThresh) with the mean value of the neighboring pixels. Only those neighboring pixels which also fall
within the gray value interval are used for averaging. If no such pixel is present in the vicinity, the original gray
value is used. The gray values in the input image falling within the gray value interval are adopted without change.
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image.
. ImageFillSP (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / uint2
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 3
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskWidth ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 3
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11}
Typical range of values : 3 ≤ MaskHeight ≤ 512 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Minimum gray value.
Default Value : 1
Suggested values : MinThresh ∈ {1, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
. MaxThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum gray value.
Default Value : 254
Suggested values : MaxThresh ∈ {5, 7, 9, 11, 15, 23, 31, 43, 61, 101, 200, 230, 250, 254}
Restriction : MinThresh ≤ MaxThresh
Example
read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
eliminate_sp(Image,&ImageMeansp,3,3,101,201);
disp_image(ImageMeansp,WindowHandle);
Parallelization Information
eliminate_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_sp, mean_image, median_image, eliminate_min_max
See also
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion, sigma_image,
eliminate_min_max
Module
Foundation
read_image(&Image,"video_bild");
fill_interlace(Image,&New,"odd");
sobel_amp(New,&Sobel,"sum_abs",3);
Complexity
For each pixel: O(2).
Result
If the parameter values are correct, the operator fill_interlace returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
fill_interlace is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
sobel_amp, edges_image, regiongrowing, diff_of_gauss, threshold, dyn_threshold,
auto_threshold, mean_image, binomial_filter, gauss_image,
anisotropic_diffusion, sigma_image, median_image
See also
median_image, binomial_filter, gauss_image, crop_part
Module
Foundation
Size (σ): 3 (0.65), 5 (0.87), 7 (1.43), 9 (1.88), 11 (2.31)
For border treatment the gray values of the images are reflected at the image borders.
The operator binomial_filter can be used as an alternative to gauss_image. binomial_filter
is significantly faster than gauss_image. It should be noted that the mask size in binomial_filter does
not lead to the same amount of smoothing as the mask size in gauss_image. Corresponding mask sizes can be
determined based on the respective values of the Gaussian smoothing parameter sigma.
Parameter
gauss_image(Input,&Gauss,7);
regiongrowing(Gauss,&Segments,7,7,5,100);
Complexity
For each pixel: O(Size ∗ 2).
Result
If the parameter values are correct, the operator gauss_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) can be set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gauss_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, grab_image
Possible Successors
regiongrowing, threshold, sub_image, dyn_threshold, auto_threshold
Alternatives
binomial_filter, smooth_image, derivate_gauss, isotropic_diffusion
See also
mean_image, anisotropic_diffusion, sigma_image, gen_lowpass
Module
Foundation
The gauss filter was conventionally implemented with filter masks (the other three are recursive filters). In the case
of the gauss filter the filter coefficients (of the one-dimensional impulse response f (n) with n ≥ 0) are returned in
Coeffs in addition to the filter size.
Example
info_smooth("deriche2",0.5,Size,Coeffs);
smooth_image(Input,&Smooth,"deriche2",7);
Result
If the parameter values are correct the operator info_smooth returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
info_smooth is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
smooth_image
See also
smooth_image
Module
Foundation
∂u/∂t = ∆u
on the gray value function u with the initial value u = u0 defined by the gray values of Image at a time t0. This
equation is then solved up to a time t0 + Sigma²/2, which is equivalent to the above convolution, using an iterative
procedure for parabolic partial differential equations. The computational complexity is proportional to the value
of Iterations and independent of Sigma in this case. For small values of Iterations, the computational
accuracy is very low, however. For this reason, choosing Iterations < 3 is not recommended.
For smaller values of Sigma, the convolution implementation is typically the faster method. Since the runtime of
the partial differential equation solver only depends on the number of iterations and not on the value of Sigma, it
is typically faster for large values of Sigma if few iterations are chosen (e.g., Iterations = 3 ).
Smooth by averaging.
The operator mean_image carries out a linear smoothing with the gray values of all input images (Image). The
filter matrix consists of ones (all pixels weighted equally) and has the size MaskHeight × MaskWidth. The result of the
convolution is divided by MaskHeight × MaskWidth. For border treatment the gray values are reflected at the
image edges.
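The convolution described above can be sketched in plain C (an illustrative single-channel byte-image version, not HALCON's actual implementation; mask sizes are assumed odd, and the division simply truncates, whereas the operator's exact rounding behavior may differ):

```c
/* Mirror an index at the image borders: -1 -> 1, n -> n-2, etc. */
static int mirror(int i, int n)
{
    if (i < 0)  return -i;
    if (i >= n) return 2 * n - 2 - i;
    return i;
}

/* Mean filter with an all-ones mask of size mw x mh; the sum under
   the mask is divided by the mask area. in/out are row-major
   width*height byte images. */
void mean_filter(const unsigned char *in, unsigned char *out,
                 int width, int height, int mw, int mh)
{
    int area = mw * mh;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            long sum = 0;
            for (int dy = -mh / 2; dy <= mh / 2; dy++)
                for (int dx = -mw / 2; dx <= mw / 2; dx++)
                    sum += in[mirror(y + dy, height) * width
                              + mirror(x + dx, width)];
            out[y * width + x] = (unsigned char)(sum / area);
        }
}
```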
For mean_image special optimizations are implemented that use SIMD technology. The actual application
of these special optimizations is controlled by the system parameter ’mmx_enable’ (see set_system). If
’mmx_enable’ is set to ’true’ (and the SIMD instruction set is available), the internal calculations are performed
using SIMD technology. Note that SIMD technology performs best on large, compact input regions. Depending on
the input region and the capabilities of the hardware the execution of mean_image might even take significantly
more time with SIMD technology than without.
At any rate, it is advantageous for the performance of mean_image to choose the input region of Image such
that any border treatment is avoided.
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Parameter
. Image (input_object) . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real / vector_field
Image to be smoothed.
. ImageMean (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real / vector_field
Smoothed image.
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of filter mask.
Default Value : 9
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskWidth ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of filter mask.
Default Value : 9
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 15, 23, 31, 43, 61, 101}
Typical range of values : 1 ≤ MaskHeight ≤ 501
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
Example
read_image(&Image,"fabrik");
mean_image(Image,&Mean,3,3);
disp_image(Mean,WindowHandle);
Complexity
For each pixel: O(15).
Result
If the parameter values are correct the operator mean_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
mean_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
reduce_domain, rectangle1_domain
Possible Successors
dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, smooth_image
See also
anisotropic_diffusion, sigma_image, convol_image, gen_lowpass
Module
Foundation
compose3(Channel1,Channel2,Channel3,&MultiChannel);
mean_n(MultiChannel,&Mean);
Parallelization Information
mean_n is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
compose2, compose3, compose4, add_channels
Possible Successors
disp_image
See also
count_channels
Module
Foundation
The operator mean_sp is used to suppress extreme gray values (salt and pepper noise = white and black dots).
Attention
If even values instead of odd values are given for MaskHeight or MaskWidth, the routine uses the next larger
odd values instead (this way the center of the filter mask is always explicitly determined).
Example
read_image(&Image,"mreut");
disp_image(Image,WindowHandle);
mean_sp(Image,&ImageMeansp,3,3,101,201);
disp_image(ImageMeansp,WindowHandle);
Parallelization Information
mean_sp is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
mean_image, median_image, median_separate, eliminate_min_max
See also
anisotropic_diffusion, sigma_image, binomial_filter, gauss_image, smooth_image,
eliminate_min_max
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels of the objects once. For each of these pixels all neighboring pixels covered by the
mask are sorted in an ascending sequence according to their gray values. Thus, each of these sorted gray value
sequences contains exactly as many gray values as the mask has pixels. From these sequences the median is
selected and entered as the resulting gray value at the corresponding position in the output image.
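The selection step can be sketched in plain C (an illustrative helper, not HALCON code): collect the gray values under the mask at one position, sort them ascending, and take the middle element.

```c
#include <stdlib.h>
#include <string.h>

static int cmp_byte(const void *a, const void *b)
{
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

/* Median of the n gray values covered by the mask at one position:
   sort an ascending copy and pick the middle element.
   Assumes n <= 512 (i.e., a small filter mask). */
unsigned char median_of(const unsigned char *vals, int n)
{
    unsigned char tmp[512];
    memcpy(tmp, vals, (size_t)n);
    qsort(tmp, (size_t)n, 1, cmp_byte);
    return tmp[n / 2];
}
```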
Parameter
. Image (input_object) . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. ImageMedian (output_object) . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Median filtered image.
. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of median mask.
Default Value : "circle"
List of values : MaskType ∈ {"circle", "rectangle"}
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Radius of median mask.
Default Value : 1
Suggested values : Radius ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 15, 19, 25, 31, 39, 47, 59}
Typical range of values : 1 ≤ Radius ≤ 101
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example
read_image(&Image,"fabrik");
median_image(Image,&Median,"circle",3,"continued");
disp_image(Median,WindowHandle);
Complexity
For each pixel: O(√F ∗ 5) with F = area of MaskType.
Result
If the parameter values are correct the operator median_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
median_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
rank_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 318-319
Module
Foundation
read_image(&Image,"fabrik");
median_separate(Image,&MedianSeparate,5,5,3);
disp_image(MedianSeparate,WindowHandle);
Complexity
For each pixel: O(40).
Parallelization Information
median_separate is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
texture_laws, sobel_amp, deviation_image
Possible Successors
learn_ndim_norm, learn_ndim_box, median_separate, regiongrowing, auto_threshold
Alternatives
median_image
See also
rank_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 319
Module
Foundation
’gauss’ (MaskSize = 3)
1 2 1
2 4 2
1 2 1
’inner’ (MaskSize = 3)
1 1 1
1 3 1
1 1 1
In contrast to median_image, the operator median_weighted preserves gray value corners.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2
Image to be filtered.
. ImageWMedian (output_object) . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2
Median filtered image.
. MaskType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of median mask.
Default Value : "inner"
List of values : MaskType ∈ {"inner", "gauss"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Mask size.
Default Value : 3
List of values : MaskSize ∈ {3}
Example
read_image(&Image,"fabrik");
median_weighted(Image,&MedianWeighted,"gauss",3);
disp_image(MedianWeighted,WindowHandle);
Complexity
For each pixel: O(F ∗ log F ) with F = area of MaskType.
Parallelization Information
median_weighted is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
median_image, trimmed_mean, sigma_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 319
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once.
Example
read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
midrange_image(Image,Region,&Midrange,"mirrored");
disp_image(Midrange,WindowHandle);
Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct the operator midrange_image returns the value H_MSG_TRUE.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
midrange_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect,
gray_range_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 319
Module
Foundation
The specified mask is moved over the image to be filtered in such a way that the reference point of the mask touches
all pixels once. At each position a histogram is calculated from the gray values of all pixels covered by the mask.
By specifying Rank = 1 the lowest (= darkest) gray value appearing in the histogram is selected and entered as
resulting gray value in the output image ImageRank; if Rank corresponds to the number of pixels of the filter
mask, i.e., its area, the brightest gray value is selected. This behavior is identical to the erosion/dilation operators in
gray morphology ( gray_erosion, gray_dilation). If you use a rank that is equal to half of the pixels of
the filter mask you get the same behavior as for the median filter ( median_image).
You can use rank_image to eliminate noise, to eliminate structures with a given orientation (use
gen_rectangle2 to create the mask region), or as an advanced gray morphologic operator that is more robust
against noise. In this case you will not use 1 or the mask area as rank values, but a slightly higher or lower
value, respectively.
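The histogram-based selection described above can be sketched in plain C (an illustrative helper, not HALCON code): count the byte gray values under the mask into a 256-bin histogram and walk it until the cumulative count reaches Rank.

```c
/* Rank selection over byte gray values via a 256-bin histogram:
   rank 1 returns the minimum (erosion), rank n the maximum
   (dilation), and rank (n + 1) / 2 the median. */
unsigned char rank_select(const unsigned char *vals, int n, int rank)
{
    int hist[256] = {0};
    for (int i = 0; i < n; i++)
        hist[vals[i]]++;
    int cum = 0;
    for (int g = 0; g < 256; g++) {
        cum += hist[g];
        if (cum >= rank)
            return (unsigned char)g;
    }
    return 255; /* not reached for 1 <= rank <= n */
}
```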
Several border treatments can be chosen for filtering (Margin):
Parameter
. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject : byte
Region serving as filter mask.
. ImageRank (output_object) . . . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Filtered image.
. Rank (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rank of the output gray value in the sorted sequence of input gray values inside the filter mask. Typical value
(median): area(mask) / 2.
Default Value : 5
Suggested values : Rank ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Rank ≤ 512
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example
read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
rank_image(Image,Region,&ImageRank,5,"mirrored");
disp_image(ImageRank,WindowHandle);
Complexity
For each pixel: O(√F ∗ 5) with F = area of Mask.
Result
If the parameter values are correct the operator rank_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
rank_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 318-320
Module
Foundation
Example
read_image(&Image,"fabrik");
sigma_image(Image,&ImageSigma,5,5,3);
disp_image(ImageSigma,WindowHandle);
Complexity
For each pixel: O(MaskHeight × MaskWidth).
Result
If the parameter values are correct the operator sigma_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
sigma_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
anisotropic_diffusion, rank_image
See also
smooth_image, binomial_filter, gauss_image, mean_image
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 325
Module
Foundation
Alpha(’deriche2’) = Alpha(’deriche1’) / 2
Alpha(’shen’) = Alpha(’deriche1’) / 2
Alpha(’gauss’) = Alpha(’deriche1’) / 1.77
Example
info_smooth("deriche2",0.5,Size,Coeffs);
smooth_image(Input,&Smooth,"deriche2",7);
Result
If the parameter values are correct the operator smooth_image returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
smooth_image is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
binomial_filter, gauss_image, mean_image, derivate_gauss, isotropic_diffusion
See also
info_smooth, median_image, sigma_image, anisotropic_diffusion
References
R. Deriche: “Fast Algorithms for Low-Level Vision”; IEEE Transactions on Pattern Analysis and Machine Intelligence; PAMI-12, no. 1; pp. 78-87; 1990.
Module
Foundation
The indicated mask (= region of the mask image) is put over the image to be filtered in such a way that the center
of the mask touches all pixels once. For each of these pixels all neighboring pixels covered by the mask are sorted
in an ascending sequence according to their gray values. Thus, each of these sorted gray value sequences contains
exactly as many gray values as the mask has pixels. If F is the area of the mask the average of these sequences is
calculated as follows: The first (F - Number)/2 gray values are ignored. Then the following Number gray values
are summed up and divided by Number. Again the remaining (F - Number)/2 gray values are ignored.
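The averaging rule above can be sketched in plain C (an illustrative helper, not HALCON code; integer division is used for the final average, which may differ from the operator's exact rounding):

```c
#include <stdlib.h>
#include <string.h>

static int cmp_byte(const void *a, const void *b)
{
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

/* Trimmed mean: sort the f mask values ascending, skip
   (f - number) / 2 values at each end, and average the middle
   `number` values. Assumes f <= 512. */
int trimmed_mean_of(const unsigned char *vals, int f, int number)
{
    unsigned char tmp[512];
    memcpy(tmp, vals, (size_t)f);
    qsort(tmp, (size_t)f, 1, cmp_byte);
    int skip = (f - number) / 2;
    int sum = 0;
    for (int i = skip; i < skip + number; i++)
        sum += tmp[i];
    return sum / number;
}
```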
Parameter
. Image (input_object) . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte / int2 / uint2 / int4 / real
Image to be filtered.
. Mask (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Image whose region serves as filter mask.
. ImageTMean (output_object) . . . . multichannel-image(-array) ; Hobject * : byte / int2 / uint2 / int4 / real
Filtered output image.
. Number (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of averaged pixels. Typical value: area(Mask) / 2.
Default Value : 5
Suggested values : Number ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31}
Typical range of values : 1 ≤ Number ≤ 401
Minimum Increment : 1
Recommended Increment : 2
. Margin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
Border treatment.
Default Value : "mirrored"
Suggested values : Margin ∈ {"mirrored", "cyclic", "continued", 0, 30, 60, 90, 120, 150, 180, 210, 240,
255}
Example
read_image(&Image,"fabrik");
draw_region(&Region,WindowHandle);
trimmed_mean(Image,Region,&TrimmedMean,5,"mirrored");
disp_image(TrimmedMean,WindowHandle);
Result
If the parameter values are correct the operator trimmed_mean returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
trimmed_mean is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image, draw_region, gen_circle, gen_rectangle1
Possible Successors
threshold, dyn_threshold, regiongrowing
Alternatives
sigma_image, median_weighted, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect
References
R. Haralick, L. Shapiro; “Computer and Robot Vision”; Addison-Wesley, 1992, pp. 320
Module
Foundation
3.16 Texture
deviation_image ( const Hobject Image, Hobject *ImageDeviation,
Hlong Width, Hlong Height )
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
deviation_image(Image,&Deviation,9,9);
disp_image(Deviation,WindowHandle);
Result
deviation_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
deviation_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
entropy_image, entropy_gray
See also
convol_image, texture_laws, intensity
Module
Foundation
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
entropy_image(Image,&Entropy1,9,9);
disp_image(Entropy1,WindowHandle);
Result
entropy_image returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
entropy_image is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
disp_image
Alternatives
entropy_gray
See also
energy_gabor, entropy_gray
Module
Foundation
FilterSize = 3:
l = [ 1 2 1]
e = [−1 0 1]
s = [−1 2 −1]
FilterSize = 5:
l = [ 1 4 6 4 1]
e = [−1 −2 0 2 1]
s = [−1 0 2 0 −1]
r = [ 1 −4 6 −4 1]
w = [−1 2 0 −2 1]
FilterSize = 7:
l = [ 1 6 15 20 15 6 1]
e = [−1 −4 −5 0 5 4 1]
s = [−1 −2 1 4 1 −2 −1]
r = [−1 −2 −1 4 −1 −2 −1]
w = [−1 0 3 0 −3 0 1]
o = [−1 6 −15 20 −15 6 −1]
For most of the filters the resulting gray values must be modified by a Shift. This makes the different textures in
the output image more comparable to each other, provided suitable filters are used.
The name of the filter is composed of the letters of the two vectors used, where the first letter denotes convolution
in the column direction while the second letter denotes convolution in the row direction.
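The naming rule amounts to an outer product of the two 1D vectors; a plain-C sketch (illustrative, not HALCON code): for a name like "le", the first vector runs in the column direction and the second in the row direction, so mask[row][col] = first[row] * second[col].

```c
/* Build the 2D Laws mask for a two-letter filter name from its two
   1D vectors, e.g. "le" with l = (1 2 1) applied in the column
   direction and e = (-1 0 1) in the row direction. mask is a
   row-major size*size array. */
void laws_mask(const int *col_vec, const int *row_vec, int size,
               int *mask)
{
    for (int r = 0; r < size; r++)
        for (int c = 0; c < size; c++)
            mask[r * size + c] = col_vec[r] * row_vec[c];
}
```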
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / int2 / uint2
Images to which the texture transformation is to be applied.
. ImageTexture (output_object) . . . . . . . . . . . (multichannel-)image(-array) ; Hobject * : byte / int2 / uint2
Texture images.
. FilterTypes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Desired filters (name or number).
Default Value : "el"
Suggested values : FilterTypes ∈ {"ll", "le", "ls", "lr", "lw", "lo", "el", "ee", "es", "er", "ew", "eo", "sl",
"se", "ss", "sr", "sw", "so", "rl", "re", "rs", "rr", "rw", "ro", "wl", "we", "ws", "wr", "ww", "wo", "ol", "oe",
"os", "or", "ow", "oo"}
. Shift (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Shift to reduce the gray value dynamics.
Default Value : 2
List of values : Shift ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
. FilterSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Size of the filter kernel.
Default Value : 5
List of values : FilterSize ∈ {3, 5, 7}
Example
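The Example section is empty in this copy of the manual. A minimal call, using only the default values given in the parameter descriptions above (a sketch, not reproduced from the original manual), could look like this:

```c
read_image(&Image,"fabrik");
texture_laws(Image,&ImageTexture,"el",2,5);
disp_image(ImageTexture,WindowHandle);
```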
Result
texture_laws returns H_MSG_TRUE if all parameters are correct. If the input is empty the behaviour can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
texture_laws is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Successors
mean_image, binomial_filter, gauss_image, median_image, histo_2dim,
learn_ndim_norm, learn_ndim_box, threshold
Alternatives
convol_image
See also
class_2dim_sup, class_ndim_norm
References
Laws, K.I. “Textured image segmentation”; Ph.D. dissertation, Dept. of Engineering, Univ. Southern California,
1980
Module
Foundation
3.17 Wiener-Filter
gen_psf_defocus ( Hobject *Psf, Hlong PSFwidth, Hlong PSFheight,
double Blurring )
result image of gen_psf_defocus contains a spatial-domain impulse response of the specified blurring. Its
representation assumes the origin in the upper left corner. This results in the following layout of an N×M
sized image:
• first rectangle ("upper left"): (image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1)
- conforms to the fourth quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = 0..N/2 and y = 0..−M/2
• second rectangle ("upper right"): (image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1)
- conforms to the third quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = −N/2..−1 and y = −1..−M/2
• third rectangle ("lower left"): (image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1)
- conforms to the first quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = 1..N/2 and y = M/2..0
• fourth rectangle ("lower right"): (image coordinates xb = N/2..N − 1, yb = M/2..M − 1)
- conforms to the second quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = −N/2..−1 and y = M/2..1
This representation conforms to that of the impulse response parameter of the HALCON operator
wiener_filter. So one can use gen_psf_defocus to generate an impulse response for Wiener filtering.
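The quadrant arrangement above is the usual FFT-style layout with the origin in the upper left corner. The mapping between an image coordinate and the centered spatial coordinate of the impulse response can be sketched in plain C (illustrative only, not part of HALCON; note the half-open range [−n/2, n/2 − 1] is one common convention, while the text above includes n/2 in the first rectangle):

```c
/* Map an image coordinate xb in 0..n-1 to the centered spatial
   coordinate x of the impulse response: the left (or upper) half
   holds x = 0..n/2-1, the right (or lower) half x = -n/2..-1. */
int centered_coord(int xb, int n)
{
    return (xb < n / 2) ? xb : xb - n;
}
```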
Parameter
. Psf (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Impulse response of uniform out-of-focus blurring.
. PSFwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of result image.
Default Value : 256
Suggested values : PSFwidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFwidth
. PSFheight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of result image.
Default Value : 256
Suggested values : PSFheight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFheight
. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Degree of Blurring.
Default Value : 5.0
Suggested values : Blurring ∈ {1.0, 5.0, 10.0, 15.0, 18.0}
Result
gen_psf_defocus returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
gen_psf_defocus is reentrant and processed without parallelization.
Possible Predecessors
simulate_motion, gen_psf_motion
Possible Successors
simulate_defocus, wiener_filter, wiener_filter_ni
See also
simulate_defocus, gen_psf_motion, simulate_motion, wiener_filter,
wiener_filter_ni
References
Reginald L. Lagendijk, Jan Biemond: Iterative Identification and Restoration of Images, Kluwer Academic Publishers, Boston/Dordrecht/London, 1991
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit; Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation
The blurring affects all parts of the image uniformly. Blurring controls the extent of blurring. It specifies the
number of pixels (lying one after another) that are affected by the blurring. This number is determined by the velocity
of the motion and the exposure time. If Blurring is a negative number, a corresponding blurring in the reverse direction
is simulated. If Angle is a negative number, it is interpreted clockwise. If Angle exceeds 360 or falls below
−360, it is reduced modulo 360 to a number in [0..360] resp. [−360..0]. The result image
of gen_psf_motion contains a spatial-domain impulse response of the specified blurring. Its representation
assumes the origin in the upper left corner. This results in the following layout of an N×M sized image:
• first rectangle ("upper left"): (image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1)
- conforms to the fourth quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = 0..N/2 and y = 0..−M/2
• second rectangle ("upper right"): (image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1)
- conforms to the third quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = −N/2..−1 and y = −1..−M/2
• third rectangle ("lower left"): (image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1)
- conforms to the first quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = 1..N/2 and y = M/2..0
• fourth rectangle ("lower right"): (image coordinates xb = N/2..N − 1, yb = M/2..M − 1)
- conforms to the second quadrant of the Cartesian coordinate system, contains values of the impulse response
at position x = −N/2..−1 and y = M/2..1
This representation conforms to that of the impulse response parameter of the HALCON operator
wiener_filter. So one can use gen_psf_motion to generate an impulse response for Wiener filtering
a motion-blurred image.
Parameter
. Psf (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Impulse response of motion-blur.
. PSFwidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Width of impulse response image.
Default Value : 256
Suggested values : PSFwidth ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFwidth
. PSFheight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Height of impulse response image.
Default Value : 256
Suggested values : PSFheight ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ PSFheight
Result
simulate_defocus returns H_MSG_TRUE if all parameters are correct. If the input is empty
simulate_defocus returns with an error message.
Parallelization Information
simulate_defocus is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_defocus, simulate_motion, gen_psf_motion
Possible Successors
wiener_filter, wiener_filter_ni
See also
gen_psf_defocus, simulate_motion, gen_psf_motion
References
Reginald L. Lagendijk, Jan Biemond: Iterative Identification and Restoration of Images, Kluwer Academic Publishers, Boston/Dordrecht/London, 1991
M. Lückenhaus: “Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse”; Diplomarbeit; Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995.
Module
Foundation
The simulated blurring affects all parts of the image uniformly. Blurring controls the extent of blurring. It
specifies the number of pixels (lying one after another) that are affected by the blurring. This number is determined
by the velocity of the motion and the exposure time. If Blurring is a negative number, a corresponding blurring in the reverse
direction is simulated. If Angle is a negative number, it is interpreted clockwise. If Angle exceeds 360 or falls
below −360, it is reduced modulo 360 to a number in [0..360] resp. [−360..0].
Parameter
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be blurred.
. MovedImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Motion blurred image.
. Blurring (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Extent of blurring.
Default Value : 20.0
Suggested values : Blurring ∈ {5.0, 10.0, 20.0, 30.0, 40.0}
Thus, wiener_filter needs a smoothed version of the input image to estimate the power spectral densities of
the noise and of the original image. One of the HALCON smoothing filters (e.g., eliminate_min_max) can be used to
obtain this version. wiener_filter also needs the impulse response that describes the specific degradation.
This impulse response (represented in the spatial domain) must fit into an image of HALCON image type ’real’.
Two HALCON operators exist for generating an impulse response for motion blur and out-of-focus blur (see
gen_psf_motion, gen_psf_defocus). The representation of the impulse response presumes the origin in
the upper left corner; the resulting disposition of an N×M sized image is described under wiener_filter_ni. wiener_filter works as follows:
• estimation of the power spectrum density of the original image by using the smoothed version of the corrupted
image,
• estimation of the power spectrum density of the noise by subtracting the smoothed version from the unsmoothed
version,
• building the Wiener filter kernel from the quotient of the power spectrum densities of noise and original image
and from the impulse response,
• processing the convolution of the image and the Wiener filter frequency response.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Corrupted image.
. Psf (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : real
Impulse response (PSF) of the degradation (in the spatial domain).
. FilteredImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4
/ real
Smoothed version of corrupted image.
. RestoredImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : real
Restored image.
Example
Result
wiener_filter returns H_MSG_TRUE if all parameters are correct. If the input is empty wiener_filter
returns with an error message.
Parallelization Information
wiener_filter is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter_ni
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. Lückenhaus: "Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing, Computer Science and Applied Mathematics, Academic Press New York/San Francisco/London 1982
Module
Foundation
wiener_filter_ni estimates the noise term as follows: The user defines a region within the image that is suitable for noise
estimation (as homogeneous as possible, because edges or textures impair the noise estimation). After
smoothing within this region with an (unweighted) median filter and subtracting the smoothed version from the unsmoothed one,
the average noise amplitude of the region is computed within wiener_filter_ni. Together with
the average gray value within the region, this amplitude allows estimating the quotient of the power spectral densities of
noise and original image (in contrast to wiener_filter, wiener_filter_ni assumes an approximately constant
quotient within the whole image). The user can define the width and height of the rectangular (median) filter mask to
influence the noise estimation (MaskWidth, MaskHeight). wiener_filter_ni also needs the impulse
response that describes the specific degradation. This impulse response (represented in the spatial domain) must fit
into an image of HALCON image type ’real’. Two HALCON operators exist for generating an impulse
response for motion blur and out-of-focus blur (see gen_psf_motion, gen_psf_defocus). The representation
of the impulse response presumes the origin in the upper left corner. This results in the following disposition of an
N×M sized image:
• first rectangle ("upper left"): image coordinates xb = 0..(N/2) − 1, yb = 0..(M/2) − 1
- conforms to the fourth quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = 0..N/2 and y = 0..−M/2
• second rectangle ("upper right"): image coordinates xb = N/2..N − 1, yb = 0..(M/2) − 1
- conforms to the third quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = −N/2..−1 and y = −1..−M/2
• third rectangle ("lower left"): image coordinates xb = 0..(N/2) − 1, yb = M/2..M − 1
- conforms to the first quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = 1..N/2 and y = M/2..0
• fourth rectangle ("lower right"): image coordinates xb = N/2..N − 1, yb = M/2..M − 1
- conforms to the second quadrant of the Cartesian coordinate system; encloses values of the impulse response
at positions x = −N/2..−1 and y = M/2..1
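The four rectangles above describe the usual wrap-around storage of a centered impulse response: non-negative coordinates stay where they are, negative coordinates are shifted by the image size. A plain C sketch of this mapping (illustration only, not HALCON code):

```c
/* Map an impulse-response coordinate with origin at the center
 * (x in [-N/2 .. N/2-1]) to an image coordinate with origin in the
 * upper left corner (xb in [0 .. N-1]); the same mapping applies to
 * the row coordinate with M instead of N. */
int wrap_coordinate(int x, int n)
{
    return (x >= 0) ? x : x + n;
}
```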
wiener_filter_ni works as follows:
• estimating the quotient of the power spectrum densities of noise and original image,
• building the Wiener filter kernel with the quotient of power spectrum densities of noise and original image
and with the impulse response,
• processing the convolution of image and Wiener filter frequency response.
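The noise estimation described above (subtracting the median-smoothed region from the original and averaging the amplitude) can be sketched in plain C for a 1-D row of gray values (an illustration of the principle, not HALCON code; HALCON applies an unweighted median in 2-D):

```c
/* Average noise amplitude: mean absolute difference between the
 * original gray values and their smoothed version, as described above. */
double avg_noise_amplitude(const double *orig, const double *smooth, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double d = orig[i] - smooth[i];
        sum += (d < 0.0) ? -d : d;
    }
    return sum / n;
}
```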
Result
wiener_filter_ni returns H_MSG_TRUE if all parameters are correct. If the input is empty
wiener_filter_ni returns with an error message.
Parallelization Information
wiener_filter_ni is reentrant and processed without parallelization.
Possible Predecessors
gen_psf_motion, simulate_motion, simulate_defocus, gen_psf_defocus
Alternatives
wiener_filter
See also
simulate_motion, gen_psf_motion, simulate_defocus, gen_psf_defocus
References
M. Lückenhaus: "Grundlagen des Wiener-Filters und seine Anwendung in der Bildanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik; Lehrstuhl Prof. Radig; 1995
Azriel Rosenfeld, Avinash C. Kak: Digital Picture Processing, Computer Science and Applied Mathematics, Academic Press New York/San Francisco/London 1982
Module
Foundation
Graphics
4.1 Drawing
draw_region(&Obj,WindowHandle) ;
drag_region1(Obj,&New,WindowHandle) ;
disp_region(New,WindowHandle) ;
position(Obj,_,Row1,Column1,_,_,_,_) ;
position(New,_,Row2,Column2,_,_,_,_) ;
disp_arrow(WindowHandle,Row1,Column1,Row2,Column2,1.0) ;
fwrite_string("Transformation: ") ;
fwrite_string(Row2-Row1) ;
fwrite_string(", ") ;
fwrite_string(Column2-Column1) ;
fnew_line() ;
302 CHAPTER 4. GRAPHICS
Result
drag_region1 returns H_MSG_TRUE, if a region is entered, the window is valid and the needed drawing mode
(see set_insert) is available. If necessary, an exception handling is raised. You may determine the behavior
after an empty input with set_system(’no_object_result’,<Result>).
Parallelization Information
drag_region1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
get_mposition, move_region
See also
set_insert, set_draw, affine_trans_image
Module
Foundation
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert,
affine_trans_image
Alternatives
get_mposition, move_region, drag_region1, drag_region3
See also
set_insert, set_draw, affine_trans_image
Module
Foundation
Possible Predecessors
open_window, get_mposition
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert,
affine_trans_image
Alternatives
get_mposition, move_region, drag_region1, drag_region2
See also
set_insert, set_draw, affine_trans_image
Module
Foundation
read_image(&Image,"affe") ;
draw_circle(WindowHandle,&Row,&Column,&Radius) ;
gen_circle(&Circle,Row,Column,Radius) ;
reduce_domain(Image,Circle,&GrayCircle) ;
disp_image(GrayCircle,WindowHandle) ;
Result
draw_circle returns H_MSG_TRUE if the window is valid and the needed drawing mode (see set_insert)
is available. If necessary, an exception handling is raised.
Parallelization Information
draw_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle_mod, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
read_image(&Image,"affe") ;
draw_circle_mod(WindowHandle,20,20,15,&Row,&Column,&Radius) ;
gen_circle(&Circle,Row,Column,Radius) ;
reduce_domain(Image,Circle,&GrayCircle) ;
disp_image(GrayCircle,WindowHandle) ;
Result
draw_circle_mod returns H_MSG_TRUE if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_circle_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_circle, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
read_image(&Image,"affe") ;
draw_ellipse(WindowHandle,&Row,&Column,&Phi,&Radius1,&Radius2) ;
gen_ellipse(&Ellipse,Row,Column,Phi,Radius1,Radius2) ;
reduce_domain(Image,Ellipse,&GrayEllipse) ;
sobel_amp(GrayEllipse,&Sobel,"sum_abs",3) ;
disp_image(Sobel,WindowHandle) ;
Result
draw_ellipse returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse_mod, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
Example
read_image(&Image,"affe") ;
draw_ellipse_mod(WindowHandle,RowIn,ColumnIn,PhiIn,Radius1In,Radius2In,&Row,&Column,&Phi,
                 &Radius1,&Radius2) ;
gen_ellipse(&Ellipse,Row,Column,Phi,Radius1,Radius2) ;
reduce_domain(Image,Ellipse,&GrayEllipse) ;
sobel_amp(GrayEllipse,&Sobel,"sum_abs",3) ;
disp_image(Sobel,WindowHandle) ;
Result
draw_ellipse_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_ellipse_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_ellipse, draw_circle, draw_region
See also
gen_ellipse, draw_rectangle1, draw_rectangle2, draw_polygon, set_insert
Module
Foundation
Draw a line.
draw_line returns the parameter for a line, which has been created interactively by the user in the window.
To create a line, press the left mouse button to determine the start point of the line. While keeping the
button pressed you may “drag” the line in any direction. After another mouse click in the middle of the created
line you can move it. If you click on one end point of the created line, you may move this point. Pressing the right
mouse button terminates the procedure.
After the procedure has been terminated, the line is no longer visible in the window.
Parameter
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_line(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line() ;
Result
draw_line returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see set_insert)
is available. If necessary, an exception handling is raised.
Parallelization Information
draw_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_line_mod, gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
Draw a line.
draw_line_mod returns the parameter for a line, which has been created interactively by the user in the window.
To create a line, the coordinates of the start point (Row1In, Column1In) and of the end point
(Row2In, Column2In) are expected. If you click on one end point of the created line, you may move this point. After
another mouse click in the middle of the created line you can move it.
Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the line is no longer visible in the window.
Parameter
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_line_mod(WindowHandle,10,20,55,124,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line() ;
Result
draw_line_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_line_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_line, draw_ellipse, draw_region
See also
gen_circle, draw_rectangle1, draw_rectangle2
Module
Foundation
Directly after calling draw_nurbs_interp, you can add interpolation points by clicking with the left mouse
button in the window at the desired positions. If enough points are specified (at least Degree − 1), a NURBS
curve that goes through all specified points (in the order of their generation) is computed and displayed.
When there are three points or more, the first and the last point will be marked with an additional square. By
clicking on them you can close the curve or open it again. You delete the point appended last by pressing the Ctrl
key.
The tangents (i.e. the first derivative of the curve) of the first and the last point are displayed as lines. They can be
modified by dragging their ends using the mouse.
Existing points can be moved by dragging them with the mouse. Further new points on the curve can be inserted
by a left click on the desired position on the curve.
By pressing the Shift key, you can switch into the transformation mode. In this mode you can rotate, move, and
scale the curve as a whole, but only if you set the parameters Rotate, Move, and Scale, respectively, to true.
Instead of the pick points and the two tangents, 3 symbols are displayed with the curve: a cross in the middle and
an arrow to the right if Rotate is set to true, and a double-headed arrow to the upper right if Scale is set to true.
You can
• move the curve by clicking the left mouse button on the cross in the center and then dragging it to the new
position,
• rotate it by clicking with the left mouse button on the arrow and then dragging it, till the curve has the right
direction, and
• scale it by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be set to true.
By pressing the Shift key again you can switch back to the edit mode. Pressing the right mouse button terminates
the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The tangents and all handles
are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1 and their
line style is fixed to a solid line.
Attention
In contrast to draw_nurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree
(3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will
perform slightly better than even degrees.
Parameter
The input curve is specified by the interpolation points (RowsIn, ColsIn), its degree Degree and the
tangents TangentsIn, such that draw_nurbs_interp_mod can be applied to the output data of
draw_nurbs_interp.
You can modify the curve in two ways: by editing the interpolation points, e.g., by inserting or moving points, or
by transforming the curve as a whole, e.g., by rotating, moving, or scaling it. Note that you can only edit the curve
if Edit is set to true. Similarly, you can only rotate, move, or scale it if Rotate, Move, and Scale, respectively,
are set to true.
draw_nurbs_interp_mod starts in the transformation mode. In this mode, the curve is displayed together
with 3 symbols: a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed
arrow to the upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it
again, you can switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its interpolation points and the start and end tangent. Start and
end point are marked by an additional square. You can perform the following modifications:
• To append new points, click with the left mouse button in the window and a new point is added at this position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the curve, click on the desired position on the curve.
• To close or reopen the curve, click on the first or on the last point.
Pressing the right mouse button terminates the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The tangents and all handles
are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1 and their
line style is fixed to a solid line.
Attention
In contrast to draw_nurbs, each point specified by the user influences the whole curve. Thus, if one point is
moved, the whole curve can and will change. To minimize these effects, it is recommended to use a small degree
(3-5) and to place the points such that they are approximately equally spaced. In general, odd degrees will
perform slightly better than even degrees.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Contour of the modified curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable scaling?
Default Value : "true"
List of values : Scale ∈ {"true", "false"}
draw_nurbs_mod returns the contour ContOut and control information (Rows, Cols, and Weights)
of a NURBS curve of degree Degree, which has been interactively modified by the user in the win-
dow WindowHandle. For additional information concerning NURBS curves, see the documentation of
gen_contour_nurbs_xld. To use the control information Rows, Cols, and Weights in a subsequent
call to the operator gen_contour_nurbs_xld, the knot vector Knots should be set to ’auto’.
The input NURBS curve is specified by its control polygon (RowsIn, ColsIn), its weight vector WeightsIn
and its degree Degree. The knot vector is assumed to be uniform (i.e. ’auto’ in gen_contour_nurbs_xld).
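A uniform knot vector, as assumed here by ’auto’, can be sketched in plain C. The clamped-uniform construction below is a common choice that only illustrates the idea; the function name and the normalization to [0, 1] are assumptions, and HALCON’s internally generated vector may differ in detail:

```c
/* Fill a clamped-uniform knot vector for a curve of degree p with
 * n control points; knots must hold n + p + 1 entries in [0, 1]. */
void uniform_knots(double *knots, int n, int p)
{
    int m = n + p + 1; /* total number of knots */
    for (int i = 0; i < m; i++) {
        if (i <= p)
            knots[i] = 0.0;                       /* p+1 leading zeros */
        else if (i >= m - p - 1)
            knots[i] = 1.0;                       /* p+1 trailing ones */
        else
            knots[i] = (double)(i - p) / (n - p); /* interior knots */
    }
}
```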
You can modify the curve in two ways: by editing the control polygon, e.g., by inserting or moving control points,
or by transforming the contour as a whole, e.g., by rotating, moving, or scaling it. Note that you can only edit the
control polygon if Edit is set to true. Similarly, you can only rotate, move, or scale it if Rotate, Move, and
Scale, respectively, are set to true.
draw_nurbs_mod starts in the transformation mode. In this mode, the curve is displayed together with 3 sym-
bols: a cross in the middle and an arrow to the right if Rotate is set to true, and a double-headed arrow to the
upper right if Scale is set to true. To switch into the edit mode, press the Shift key; by pressing it again, you can
switch back into the transformation mode.
Transformation Mode
• To move the curve, click with the left mouse button on the cross in the center and then drag it to the new
position, i.e., keep the mouse button pressed while moving the mouse.
• To rotate it, click with the left mouse button on the arrow and then drag it, till the curve has the right direction.
• Scaling is achieved by dragging the double arrow. To keep the ratio, the parameter KeepRatio has to be
set to true.
Edit Mode
In this mode, the curve is displayed together with its control polygon. Start and end point are marked by an
additional square and the point which was handled last is surrounded by a circle representing its weight. You can
perform the following modifications:
• To append control points, click with the left mouse button in the window and a new point is added at this
position.
• You can delete the point appended last by pressing the Ctrl key.
• To move a point, drag it with the mouse.
• To insert a point on the control polygon, click on the desired position on the polygon.
• To close or reopen the curve, click on the first or on the last control point.
• You can modify the weight of a control point by first clicking on the point itself (if it is not already the point
which was modified or created last) and then dragging the circle around the point.
Pressing the right mouse button terminates the procedure.
The appearance of the curve while drawing is determined by the line width, size, and color set via the operators
set_color, set_colored, set_line_width, and set_line_style. The control polygon and all
handles are displayed in the second color set by set_color or set_colored. Their line width is fixed to 1
and their line style is fixed to a solid line.
Parameter
. ContOut (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject *
Contour of the modified curve.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window identifier.
. Rotate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable rotation?
Default Value : "true"
List of values : Rotate ∈ {"true", "false"}
. Move (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Enable moving?
Default Value : "true"
List of values : Move ∈ {"true", "false"}
Draw a point.
draw_point returns the parameters for a point, which has been created interactively by the user in the window.
To create a point you have to press the left mouse button. While keeping the button pressed you may “drag” the
point in any direction. Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the point is no longer visible in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row index of the point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column index of the point.
Example
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_point(WindowHandle,&Row1,&Column1) ;
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1) ;
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fnew_line() ;
Result
draw_point returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_point is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point_mod, draw_circle, draw_ellipse, set_insert
Module
Foundation
Draw a point.
draw_point_mod returns the parameter for a point, which has been created interactively by the user in the
window.
To create a point, the coordinates RowIn and ColumnIn are expected. While keeping the button pressed you may
“drag” the point in any direction. Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the point is no longer visible in the window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. RowIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; double
Row index of the point.
. ColumnIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double
Column index of the point.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row index of the point.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column index of the point.
Example
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_point_mod(WindowHandle,20,20,&Row1,&Column1) ;
disp_line(WindowHandle,Row1-2,Column1,Row1+2,Column1) ;
disp_line(WindowHandle,Row1,Column1-2,Row1,Column1+2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fnew_line() ;
Result
draw_point_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode is available. If
necessary, an exception handling is raised.
Parallelization Information
draw_point_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_line, set_colored, set_line_width, set_draw, set_insert
See also
draw_point, draw_circle, draw_ellipse, set_insert
Module
Foundation
To put gray values on the created PolygonRegion for further processing, you may use the operator
reduce_domain.
Attention
The drawn contour is not closed automatically; in particular, it is not “filled up” either.
The gray values of the output object are not defined.
Parameter
draw_polygon(&Polygon,WindowHandle) ;
shape_trans(Polygon,&Filled,"convex") ;
disp_region(Filled,WindowHandle) ;
Result
If the window is valid, draw_polygon returns H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
draw_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_region, draw_circle, draw_rectangle1, draw_rectangle2, boundary
See also
reduce_domain, fill_up, set_color
Module
Foundation
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
fwrite_string("Clipping = (") ;
fwrite_string(Row1) ;
fwrite_string(",") ;
fwrite_string(Column1) ;
fwrite_string("),(") ;
fwrite_string(Row2) ;
fwrite_string(",") ;
fwrite_string(Column2) ;
fwrite_string(")") ;
fnew_line() ;
Result
draw_rectangle1 returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1_mod, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
To create a rectangle, the parameters Row1In, Column1In, Row2In, and Column2In are expected. After a
mouse click in the middle of the created rectangle you can move it. A click close to one side “grips” it to modify
the rectangle’s dimension in the direction perpendicular to this side. If you click on one corner of the created rectangle,
you may move this corner. Pressing the right mouse button terminates the procedure.
After the procedure has been terminated, the rectangle is no longer visible in the window.
Parameter
get_system("width",&Width) ;
get_system("height",&Height) ;
set_part(WindowHandle,0,0,Width-1,Height-1) ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_rectangle1_mod(WindowHandle,Row1In,Column1In,Row2In,Column2In,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
printf("Clipping = (%ld,%ld),(%ld,%ld)\n",Row1,Column1,Row2,Column2) ;
Result
draw_rectangle1_mod returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see
set_insert) is available. If necessary, an exception handling is raised.
Parallelization Information
draw_rectangle1_mod is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1, draw_rectangle2, draw_region
See also
gen_rectangle1, draw_circle, draw_ellipse, set_insert
Module
Foundation
Alternatives
draw_rectangle2_mod, draw_rectangle1, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_region(&Region,WindowHandle) ;
reduce_domain(Image,Region,&New) ;
regiongrowing(New,&Segmente,5,5,6,50) ;
set_colored(WindowHandle,12) ;
disp_region(Segmente,WindowHandle) ;
Result
If the window is valid, draw_region returns H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
draw_region is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw
Alternatives
draw_circle, draw_ellipse, draw_rectangle1, draw_rectangle2
See also
draw_polygon, reduce_domain, fill_up, set_color
Module
Foundation
Result
draw_xld returns H_MSG_TRUE, if the window is valid and the needed drawing mode (see set_insert) is
available. If necessary, an exception handling is raised.
Parallelization Information
draw_xld is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window
Possible Successors
reduce_domain, disp_region, set_colored, set_line_width, set_draw, set_insert
Alternatives
draw_rectangle1, draw_rectangle2, draw_region
See also
gen_rectangle2, draw_circle, draw_ellipse, set_insert
Module
Foundation
• To insert a point, click with the left mouse button in the vicinity of a line and then move the mouse to the
position where you want the new point to be placed.
• To delete a point, select the point which should be deleted with the left mouse button and then press the Ctrl
key.
4.2 Gnuplot
Alternatives
gnuplot_open_pipe
See also
gnuplot_open_pipe, gnuplot_close, gnuplot_plot_image
Module
Foundation
Open a pipe to a gnuplot process for visualization of images and control values.
gnuplot_open_pipe opens a pipe to a gnuplot sub-process with which subsequently images can be
visualized as 3D-plots ( gnuplot_plot_image) or control values can be visualized as 2D-plots (
gnuplot_plot_ctrl). The sub-process must be terminated after displaying the last plot by calling
gnuplot_close. The corresponding identifier for the gnuplot output stream is returned in GnuplotFileID.
Attention
gnuplot_open_pipe is only implemented for Unix because gnuplot for Windows (wgnuplot) cannot be
controlled by an external process.
Parameter
. GnuplotFileID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong *
Identifier for the gnuplot output stream.
Result
gnuplot_open_pipe returns the value H_MSG_TRUE if the sub-process could be created. Otherwise, an
exception handling is raised.
Parallelization Information
gnuplot_open_pipe is processed completely exclusively without parallelization.
Possible Successors
gnuplot_plot_image, gnuplot_plot_ctrl, gnuplot_close
Alternatives
gnuplot_open_file
Module
Foundation
Parallelization Information
gnuplot_plot_ctrl is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file
Possible Successors
gnuplot_close
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_close
Module
Foundation
output to a file, which can be later read by gnuplot. In both cases the gnuplot output stream is identified by
GnuplotFileID. The parameters SamplesX and SamplesY determine the number of data points in the x-
and y-direction, respectively, which gnuplot should use to display the image. They are the equivalent of the gnuplot
variables samples and isosamples. The parameters ViewRotX and ViewRotZ determine the rotation of the plot
with respect to the viewer. ViewRotX is the rotation of the coordinate system about the x-axis, while ViewRotZ
is the rotation of the plot about the z-axis. These two parameters correspond directly to the first two parameters
of the ’set view’ command in gnuplot. The parameter Hidden3D determines whether hidden surfaces should be
removed. This is equivalent to the ’set hidden3d’ command in gnuplot. If a single image is passed to the operator,
it is displayed in a separate plot. If multiple images are passed, they are displayed in the same plot.
Parameter
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be plotted.
. GnuplotFileID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . gnuplot_id ; Hlong
Identifier for the gnuplot output stream.
. SamplesX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of samples in the x-direction.
Default Value : 64
Typical range of values : 2 ≤ SamplesX ≤ 10000
Restriction : SamplesX ≥ 2
. SamplesY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of samples in the y-direction.
Default Value : 64
Typical range of values : 2 ≤ SamplesY ≤ 10000
Restriction : SamplesY ≥ 2
. ViewRotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Rotation of the plot about the x-axis.
Default Value : 60
Typical range of values : 0 ≤ ViewRotX ≤ 180
Minimum Increment : 0.01
Recommended Increment : 10
Restriction : (0 ≤ ViewRotX) ∧ (ViewRotX ≤ 180)
. ViewRotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Rotation of the plot about the z-axis.
Default Value : 30
Typical range of values : 0 ≤ ViewRotZ ≤ 360
Minimum Increment : 0.01
Recommended Increment : 10
Restriction : (0 ≤ ViewRotZ) ∧ (ViewRotZ ≤ 360)
. Hidden3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Plot the image with hidden surfaces removed.
Default Value : "hidden3d"
List of values : Hidden3D ∈ {"hidden3d", "nohidden3d"}
Result
gnuplot_plot_image returns the value H_MSG_TRUE if GnuplotFileID is a valid gnuplot output stream, and if the data
file for the current plot could be opened. Otherwise, an exception handling is raised.
Parallelization Information
gnuplot_plot_image is processed completely exclusively without parallelization.
Possible Predecessors
gnuplot_open_pipe, gnuplot_open_file
Possible Successors
gnuplot_close
See also
gnuplot_open_pipe, gnuplot_open_file, gnuplot_close
Module
Foundation
4.3 LUT
disp_lut ( Hlong WindowHandle, Hlong Row, Hlong Column, Hlong Scale )
T_disp_lut ( const Htuple WindowHandle, const Htuple Row,
const Htuple Column, const Htuple Scale )
set_lut(WindowHandle,"color") ;
disp_lut(WindowHandle,256,256,1) ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
set_lut(WindowHandle,"sqrt") ;
disp_lut(WindowHandle,128,128,2) ;
Result
disp_lut returns H_MSG_TRUE if the hardware supports a look-up-table, the window is valid and the
parameters are correct. Otherwise an exception handling is raised.
Parallelization Information
disp_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
set_lut
See also
open_window, open_textwindow, draw_lut, set_lut, set_fix, set_pixel, write_lut,
get_lut, set_color
Module
Foundation
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
draw_lut(WindowHandle) ;
write_lut(WindowHandle,"my_lut") ;
...
read_image(&Image,"fabrik") ;
set_lut(WindowHandle,"my_lut") ;
Result
draw_lut returns H_MSG_TRUE if the window is valid. Otherwise an exception handling is raised.
Parallelization Information
draw_lut is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut, write_lut, disp_lut
Alternatives
set_fix, set_rgb
See also
write_lut, set_lut, get_lut, disp_lut
Module
Foundation
Parameter
Hue: 0.0
Saturation: 1.0
Intensity: 1.0
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Hue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of color value.
. Saturation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of saturation.
. Intensity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Modification of intensity.
Result
get_lut_style returns H_MSG_TRUE if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
get_lut_style is reentrant, local, and processed without parallelization.
Possible Successors
set_lut_style, set_lut
See also
set_lut_style
Module
Foundation
Parameter
Colors in S stem from applications that were active before HALCON was started and should not get lost. Graphic
colors in G are used by operators such as disp_region, disp_circle, etc., and are set uniquely across
all look-up-tables. Output in a graphic color therefore always has the same appearance, even if different look-up-
tables are used. set_color and set_rgb set graphic colors. Gray values and colors in B are used by
disp_image to display an image. They can change according to the current look-up-table. There are two
exceptions to this concept:
• set_gray allows setting colors of the area B for operators such as disp_region,
• set_fix allows modification of graphic colors.
On common monitors only one look-up-table can be loaded per screen, whereas set_lut can be activated
separately for each window. This problem is solved as follows: the look-up-table assigned to the "active window"
is always activated (a window is set to the state "active" by the window manager).
Look-up-tables can also be used with truecolor displays. In this case the look-up-table is simulated in software,
which means that it is applied each time an image is displayed.
Windows NT specific: if the graphics card is used in a mode different from truecolor, you must display the image
after setting the look-up-table.
query_lut lists the names of all look-up-tables. They differ from each other in the area used for gray values.
Within this area the following behavior is defined:
gray value tables (1-7 image levels)
’default’: Only the two basic colors (generally black and white) are used.
’default’: As ’linear’.
’linear’: Linear increase of gray values from 0 (black) to 255 (white).
’inverse’: Inverse function of ’linear’.
’sqr’: Gray values increase according to a square function.
’inv_sqr’: Inverse function of ’sqr’.
’cube’: Gray values increase according to a cubic function.
’inv_cube’: Inverse function of ’cube’.
’sqrt’: Gray values increase according to a square-root function.
’inv_sqrt’: Inverse function of ’sqrt’.
’cubic_root’: Gray values increase according to a cubic-root function.
’inv_cubic_root’: Inverse function of ’cubic_root’.
A look-up-table can be read from a file. Every line of such a file must contain three numbers in the range of 0 to
255, the first number describing the amount of red, the second the amount of green, and the third the amount
of blue of the represented display color. The number of lines can vary. The first line contains the information for
the first gray value and the last line for the last gray value. If there are fewer lines than gray values, the available
values are distributed over the whole interval. If there are more lines than gray values, a number of (uniformly
distributed) lines is ignored. The file name must conform to "LookUpTable.lut". Within the parameter the
name is specified without the file extension. HALCON searches for the file in the current directory and after that in
a specified directory (see set_system(’lut_dir’,<path>)). It is also possible to call set_lut with a
tuple of RGB values, which are set directly. In this case the number of parameter values must match the number of
pixels currently used within the look-up-table.
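The distribution of file lines over the gray value interval amounts to a uniform index mapping. A minimal sketch in plain C (the helper lut_source_line is our illustration, not a HALCON call): for LUT slot j out of num_slots, it selects which of the num_lines file entries to use; entries are repeated when the file has fewer lines than gray values, and surplus lines are skipped when it has more.

```c
/* Select the file line that supplies LUT slot j: uniform mapping of
   num_lines file entries onto num_slots gray value slots. */
static int lut_source_line(int j, int num_slots, int num_lines)
{
    return (int)((long)j * num_lines / num_slots);
}
```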
Attention
set_lut can only be used with monitors supporting 256 gray levels/colors.
Parameter
Result
set_lut returns H_MSG_TRUE if the hardware supports a look-up-table and the parameter is correct. Otherwise
an exception handling is raised.
Parallelization Information
set_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
query_lut, draw_lut, get_lut
Possible Successors
write_lut
Alternatives
draw_lut, set_fix, set_pixel
See also
get_lut, query_lut, draw_lut, set_fix, set_color, set_rgb, set_hsi, write_lut
Module
Foundation
Hue: Rotation of the color space; Hue = 1.0 corresponds to one full rotation of the color space. No change: Hue
= 0.0. Complementary colors: Hue = 0.5.
Saturation: Change of saturation. No change: Saturation = 1.0. Gray value image: Saturation = 0.0.
Intensity: Change of intensity. No change: Intensity = 1.0. Black image: Intensity = 0.0.
The change affects only the part of a look-up-table that is used for displaying images. The modification
parameters remain in effect until the next call of set_lut_style. Calling set_lut has no effect on these parameters.
Parameter
read_image(&Image,"affe") ;
set_lut(WindowHandle,"color") ;
do{
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
Saturation= Row/300.0 ;
Hue = Column/512.0 ;
set_lut_style(WindowHandle,Hue,Saturation,1.0) ;
}
while(Button > 1) ;
Result
set_lut_style returns H_MSG_TRUE if the window is valid and the parameter is correct. Otherwise an
exception handling is raised.
Parallelization Information
set_lut_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_lut_style
Possible Successors
set_lut
Alternatives
set_lut, scale_image
See also
get_lut_style
Module
Foundation
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
draw_lut(WindowHandle) ;
write_lut(WindowHandle,"test_lut") ;
Result
write_lut returns H_MSG_TRUE if the window with the required properties (256 colors) is valid and the
parameter (file name) is correct. Otherwise an exception handling is raised.
Parallelization Information
write_lut is reentrant, local, and processed without parallelization.
Possible Predecessors
draw_lut, set_lut
See also
set_lut, draw_lut, set_pixel, get_pixel
Module
Foundation
4.4 Mouse
get_mbutton ( Hlong WindowHandle, Hlong *Row, Hlong *Column,
Hlong *Button )
1: Left button,
2: Middle button,
4: Right button.
The operator waits until a button is pressed in the output window. If more than one button is pressed, the sum of
the individual buttons’ values is returned. The origin of the coordinate system is located in the left upper corner
of the window. The row coordinates increase towards the bottom, while the column coordinates increase towards
the right. For graphics windows, the coordinates of the lower right corner are (image height-1,image width-1) (see
open_window, reset_obj_db), while for text windows they are (window height-1,window width-1) (see
open_textwindow).
Attention
get_mbutton only returns if a mouse button is pressed in the window.
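Since the returned Button value is the sum of the individual button codes (1, 2, 4), simultaneous presses can be separated with bit tests. A short self-contained C illustration (the helper name is ours, not a HALCON operator):

```c
/* Decode the Button value delivered by get_mbutton:
   1 = left, 2 = middle, 4 = right; sums encode combinations. */
static void decode_buttons(long button, int *left, int *middle, int *right)
{
    *left   = (button & 1) != 0;
    *middle = (button & 2) != 0;
    *right  = (button & 4) != 0;
}
```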
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong *
Row coordinate of the mouse position in the window.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong *
Column coordinate of the mouse position in the window.
. Button (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Mouse button(s) pressed.
Result
get_mbutton returns the value H_MSG_TRUE.
Parallelization Information
get_mbutton is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
get_mposition
See also
open_window, open_textwindow
Module
Foundation
0: No button,
1: Left button,
2: Middle button,
4: Right button.
The origin of the coordinate system is located in the left upper corner of the window. The row coordinates increase
towards the bottom, while the column coordinates increase towards the right. For graphics windows, the
coordinates of the lower right corner are (image height-1,image width-1) (see open_window, reset_obj_db),
while for text windows they are (window height-1,window width-1) (see open_textwindow).
Attention
get_mposition fails (returns FAIL) if the mouse pointer is not located within the window. In this case, no
values are returned.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong *
Row coordinate of the mouse position in the window.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong *
Column coordinate of the mouse position in the window.
. Button (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Mouse button(s) pressed or 0.
Result
get_mposition returns the value H_MSG_TRUE. If the mouse pointer is not located within the window,
H_MSG_FAIL is returned.
Parallelization Information
get_mposition is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
get_mbutton
See also
open_window, open_textwindow
Module
Foundation
query_mshape returns the names of all available mouse pointer shapes for the window. These can be used in
the operator set_mshape. If no mouse pointers are available, the empty tuple is returned.
Parameter
4.5 Output
Example
Result
disp_arc returns H_MSG_TRUE.
Parallelization Information
disp_arc is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_circle, disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation
set_colored(WindowHandle,3) ;
disp_arrow(WindowHandle,10,10,118,118,1.0);
Result
disp_arrow returns H_MSG_TRUE.
Parallelization Information
disp_arrow is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation
disp_channel displays an image in the output window. It is possible to display several images with one call.
In this case the images are displayed one after another. If the definition domains of the images overlap, only the last
image is visible. The parameter Channel defines the number of the channel that is displayed. For RGB-images
the three color channels have to be used within a tuple parameter. For more information see disp_image.
Parameter
. MultichannelImage (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Multichannel images to be displayed.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Channel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Number of the channel, or the numbers of the RGB channels.
Default Value : 1
List of values : Channel ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Example
Result
If the used images contain valid values and a correct output mode is set, disp_channel returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_channel is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_image, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_lut, draw_lut, dump_window
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double / Hlong
Row index of the center.
Default Value : 64
Suggested values : Row ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double / Hlong
Column index of the center.
Default Value : 64
Suggested values : Column ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double / Hlong
Radius of the circle.
Default Value : 64
Suggested values : Radius ∈ {0, 64, 128, 256}
Typical range of values : 0 ≤ Radius ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : Radius > 0.0
Example
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_draw(WindowHandle,"fill") ;
set_color(WindowHandle,"white") ;
set_insert(WindowHandle,"not") ;
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
disp_circle(WindowHandle,Row,Column,(Row + Column) % 50) ;
Result
disp_circle returns H_MSG_TRUE.
Parallelization Information
disp_circle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi
Alternatives
disp_ellipse, disp_region, gen_circle, gen_ellipse
See also
open_window, open_textwindow, set_color, set_draw, set_rgb, set_hsi
Module
Foundation
Attention
Due to the restricted number of available colors the color appearance is usually different from the original.
Parameter
. ColorImage (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Color image to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example
Result
If the used image contains valid values and a correct output mode is set, disp_color returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_color is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi
Alternatives
disp_channel, disp_obj
See also
disp_image, open_window, open_textwindow, reset_obj_db, set_lut, draw_lut,
dump_window
Module
Foundation
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_draw(WindowHandle,"fill") ;
set_color(WindowHandle,"white") ;
set_insert(WindowHandle,"not") ;
read_image(&Image,"affe") ;
draw_region(&Region,WindowHandle) ;
noise_distribution_mean(Region,Image,21,&Distribution) ;
disp_distribution(WindowHandle,Distribution,100,100,3) ;
Parallelization Information
disp_distribution is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
noise_distribution_mean, gauss_distribution
See also
gen_region_histo, set_paint, gauss_distribution, noise_distribution_mean
Module
Foundation
Displays ellipses.
disp_ellipse displays one or several ellipses in the output window. An ellipse is described by the center
(CenterRow, CenterCol), the orientation Phi (in radians) and the radii of the major and the minor axis
(Radius1 and Radius2).
The procedures used to control the display of regions (e.g. set_draw, set_gray, set_color) can also be
used with ellipses. Several ellipses can be displayed with one call by using tuple parameters. For the use of colors
with several ellipses, see set_color.
Attention
The center of the ellipse must be within the window.
Parameter
set_color(WindowHandle,"red") ;
draw_region(&MyRegion,WindowHandle) ;
elliptic_axis(MyRegion,&Ra,&Rb,&Phi) ;
area_center(MyRegion,_,&Row,&Column) ;
disp_ellipse(WindowHandle,Row,Column,Phi,Ra,Rb);
Result
disp_ellipse returns H_MSG_TRUE, if the parameters are correct. Otherwise an exception handling is raised.
Parallelization Information
disp_ellipse is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
elliptic_axis, area_center
Alternatives
disp_circle, disp_region, gen_ellipse, gen_circle
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_draw,
set_line_width
Module
Foundation
disp_image displays the gray values of an image in the output window. The gray value pixels of the definition
domain (set_comprise(WindowHandle,’object’)) or of the whole image (set_comprise
(WindowHandle,’image’)) are used. Restriction to the definition domain is the default.
For the display of gray value images the number of gray values is usually reduced, because colors have to be
reserved for the display of graphics (e.g. set_color) and for the window manager. Furthermore, depending on
the number of bitplanes of the output device, often fewer than 256 colors (eight bitplanes) are available. The
number of "colors" actually reserved for the display of gray values can be queried with get_system. Before
the first window is opened, this value can be modified with set_system. For instance, for 8 bitplanes 200 real gray
values are the default.
The reduction of the number of gray values does not pose problems as long as only gray value information is
displayed; humans cannot distinguish 256 different shades of gray. If certain gray values are used for the
representation of region information (which is not the style commonly used in HALCON), confusion might
result, since different numerical values are displayed on the screen with the same gray value. The procedure
label_to_region should be used on these images in order to transform the label data into HALCON objects.
If images of type ’int2’, ’int4’, ’real’, or ’complex’ are displayed, the smallest and largest gray values are computed.
Afterwards the pixel data is rescaled according to the number of available gray values (depending on the output
device, e.g. 200). It is possible that some pixels have a very different value than the rest; this might lead to
the display of an (almost) completely white or black image. In order to decide whether the current image is a binary image,
min_max_gray can be used. If necessary, the image can be transformed or converted with scale_image and
convert_image_type before it is displayed.
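The rescaling described above is a linear mapping of the gray value range [min, max] onto the available display gray values. A self-contained C sketch (assuming, for example, 200 displayable gray values; the helper name is ours, not a HALCON operator):

```c
/* Linearly rescale a gray value g from [min_g, max_g] to one of
   num_colors displayable values (0 .. num_colors-1). */
static int rescale_gray(double g, double min_g, double max_g, int num_colors)
{
    if (max_g <= min_g)
        return 0; /* constant image: nothing to spread */
    return (int)((g - min_g) / (max_g - min_g) * (num_colors - 1) + 0.5);
}
```

A single outlier pixel stretches [min, max] and compresses all remaining values into a few display levels, which explains the (almost) completely white or black display mentioned above.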
Attention
If a wrong output mode was set by set_paint, the error will be reported when disp_image is used.
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Gray value image to display.
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Example
Result
If the used image contains valid values and a correct output mode is set, disp_image returns H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
disp_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_obj, disp_color
See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation
void disp_rectangle1_margin(long WindowHandle,
long Row1, long Column1,
long Row2, long Column2)
{
disp_line(WindowHandle,Row1,Column1,Row1,Column2) ;
disp_line(WindowHandle,Row1,Column2,Row2,Column2) ;
disp_line(WindowHandle,Row2,Column2,Row2,Column1) ;
disp_line(WindowHandle,Row2,Column1,Row1,Column1) ;
}
Result
disp_line returns H_MSG_TRUE.
Parallelization Information
disp_line is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_arrow, disp_rectangle1, disp_rectangle2, disp_region, gen_region_polygon,
gen_region_points
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation
Result
If the used object is valid and a correct output mode is set, disp_obj returns H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
disp_obj is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, scale_image, convert_image_type,
min_max_gray
Alternatives
disp_color, disp_image, disp_xld, disp_region
See also
open_window, open_textwindow, reset_obj_db, set_comprise, set_paint, set_lut,
draw_lut, paint_gray, scale_image, convert_image_type, dump_window
Module
Foundation
Displays a polyline.
disp_polygon displays a polyline with the row coordinates Row and the column coordinates Column in the
output window. The parameters Row and Column have to be provided as tuples. Straight lines are drawn between
the given points. The start and the end of the polyline are not connected.
The procedures used to control the display of regions (e.g. set_color, set_gray, set_draw,
set_line_width) can also be used with polylines.
Attention
The given coordinates must lie within the window.
Example
/* display the margin of a rectangle as a closed polyline */
void disp_rectangle1_margin1 (long WindowHandle,
                              long Row1, long Column1,
                              long Row2, long Column2)
{
  Htuple Row, Col;
  create_tuple(&Row,5);   /* five points: the first corner is */
  create_tuple(&Col,5);   /* repeated to close the polyline   */
  set_i(Row,Row1,0);
  set_i(Col,Column1,0);
  set_i(Row,Row1,1);
  set_i(Col,Column2,1);
  set_i(Row,Row2,2);
  set_i(Col,Column2,2);
  set_i(Row,Row2,3);
  set_i(Col,Column1,3);
  set_i(Row,Row1,4);
  set_i(Col,Column1,4);
  T_disp_polygon(WindowHandle,Row,Col);
  destroy_tuple(Row);
  destroy_tuple(Col);
}
Result
disp_polygon returns H_MSG_TRUE.
Parallelization Information
disp_polygon is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_line, gen_region_polygon, disp_region
See also
open_window, open_textwindow, set_color, set_rgb, set_hsi, set_insert,
set_line_width
Module
Foundation
set_color(WindowHandle,"green") ;
draw_region(&MyRegion,WindowHandle) ;
smallest_rectangle1(MyRegion,&R1,&C1,&R2,&C2) ;
disp_rectangle1(WindowHandle,R1,C1,R2,C2) ;
Result
disp_rectangle1 returns H_MSG_TRUE.
Parallelization Information
disp_rectangle1 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_rectangle2, gen_rectangle1, disp_region, disp_line, set_shape
See also
open_window, open_textwindow, set_color, set_draw, set_line_width
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window identifier.
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; (Htuple .) double / Hlong
Row index of the center.
Default Value : 48
Suggested values : CenterRow ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterRow ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; (Htuple .) double / Hlong
Column index of the center.
Default Value : 64
Suggested values : CenterCol ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ CenterCol ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; (Htuple .) double / Hlong
Orientation of rectangle in radians.
Default Value : 0.0
Suggested values : Phi ∈ {0.0, 0.785398, 1.570796, 3.1415926, 6.283185}
Typical range of values : 0.0 ≤ Phi ≤ 6.283185 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; (Htuple .) double / Hlong
Half of the length of the longer side.
Default Value : 48
Suggested values : Length1 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length1 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; (Htuple .) double / Hlong
Half of the length of the shorter side.
Default Value : 32
Suggested values : Length2 ∈ {0, 64, 128, 256, 511}
Typical range of values : 0 ≤ Length2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Length2 < Length1
Example
set_color(WindowHandle,"green") ;
draw_region(&MyRegion,WindowHandle) ;
elliptic_axis(MyRegion,&Ra,&Rb,&Phi) ;
area_center(MyRegion,_,&Row,&Column) ;
disp_rectangle2(WindowHandle,Row,Column,Phi,Ra,Rb) ;
Result
disp_rectangle2 returns H_MSG_TRUE if the parameters are correct. Otherwise an exception is
raised.
Parallelization Information
disp_rectangle2 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_draw, set_color, set_colored,
set_line_width
Alternatives
disp_region, gen_rectangle2, disp_rectangle1, set_shape
See also
open_window, open_textwindow, disp_region, set_color, set_draw, set_line_width
Module
Foundation
/* Symbolic representation: */
set_draw(WindowHandle,"margin") ;
set_color(WindowHandle,"red") ;
set_shape(WindowHandle,"ellipse") ;
disp_region(SomeSegments,WindowHandle) ;
set_i(Par,12,0) ;
set_i(Par,3,1) ;
T_set_line_style(WindowHandle,Par) ;
disp_region(Segments,WindowHandle) ;
Result
disp_region returns H_MSG_TRUE.
Parallelization Information
disp_region is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_rgb, set_lut, set_hsi, set_shape, set_line_style, set_insert,
set_fix, set_draw, set_color, set_colored, set_line_width
Alternatives
disp_obj, disp_arrow, disp_line, disp_circle, disp_rectangle1, disp_rectangle2,
disp_ellipse
See also
open_window, open_textwindow, set_color, set_colored, set_draw, set_shape,
set_paint, set_gray, set_rgb, set_hsi, set_pixel, set_line_width, set_line_style,
set_insert, set_fix, paint_region, dump_window
Module
Foundation
4.6 Parameters
get_comprise ( Hlong WindowHandle, char *Mode )
T_get_comprise ( const Htuple WindowHandle, Htuple *Mode )
Possible Successors
set_comprise, disp_image
See also
set_comprise, disp_image, disp_color
Module
Foundation
See also
set_fix
Module
Foundation
draw_region(&Region,WindowHandle) ;
draw_region(&Icon,WindowHandle) ;
set_icon(Icon) ;
set_shape(WindowHandle,"icon") ;
disp_region(Region,WindowHandle) ;
get_icon(&OldIcon) ;
disp_region(OldIcon,WindowHandle) ;
Result
get_icon always returns H_MSG_TRUE.
Parallelization Information
get_icon is reentrant and processed without parallelization.
Possible Predecessors
set_icon
Possible Successors
disp_region
Module
Foundation
get_line_approx returns a parameter that controls the approximation error for region contour display in the
window. It is used by the procedure disp_region. Approximation controls the polygon approximation
for contour display (0 ⇔ no approximation). Approximation is only important for displaying the contour of
objects, especially if a line style was set with set_line_style.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Approximation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Current approximation error for contour display.
Result
get_line_approx returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
get_line_approx is reentrant and processed without parallelization.
Possible Successors
set_line_approx, set_line_style, disp_region
See also
get_region_polygon, set_line_approx, set_line_style, disp_region
Module
Foundation
get_part returns the upper left and lower right corner of the image part shown in the window. The image part
can be changed with the procedure set_part (Default is the whole image).
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong *
Row index of the image part’s upper left corner.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong *
Column index of the image part’s upper left corner.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong *
Row index of the image part’s lower right corner.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x ; Hlong *
Column index of the image part’s lower right corner.
Result
get_part returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
get_part is reentrant and processed without parallelization.
Possible Successors
set_part, disp_region, disp_image
See also
set_part, disp_image, disp_region, disp_color
Module
Foundation
Module
Foundation
Parallelization Information
get_rgb is reentrant and processed without parallelization.
Possible Successors
set_rgb, disp_region, disp_image
See also
set_rgb
Module
Foundation
Htuple Colors,ColorsAtWindow,WindowHandleTuple ;
create_tuple(&WindowHandleTuple,1) ;
open_window(0,0,1,1,"root","invisible","",&WindowHandle) ;
set_i(WindowHandleTuple, WindowHandle, 0) ;
T_query_all_colors(WindowHandleTuple,&Colors) ;
/* interactive selection from Colors, providing the result in ActColors */
set_system("graphic_colors",ActColors) ;
T_query_color(WindowHandleTuple,&ColorsAtWindow) ;
close_window(WindowHandle) ;
for (i=0; i<length_tuple(ColorsAtWindow); i++)
printf("Color #%d = %s\n",(int)i,get_s(ColorsAtWindow,i)) ;
Result
query_all_colors always returns H_MSG_TRUE.
Parallelization Information
query_all_colors is reentrant, local, and processed without parallelization.
Possible Successors
set_system, set_color, disp_region
See also
query_color, set_system, set_color, disp_region, open_window, open_textwindow
Module
Foundation
Result
query_color returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
query_color is reentrant, local, and processed without parallelization.
Possible Successors
set_color, disp_region
See also
query_all_colors, set_color, disp_region, open_window, open_textwindow
Module
Foundation
Htuple Colors ;
regiongrowing(Image,&Seg,5,5,6,100) ;
T_query_colored(&Colors) ;
set_colored(WindowHandle,get_i(Colors,1)) ;
disp_region(Seg,WindowHandle) ;
Result
query_colored always returns H_MSG_TRUE.
Parallelization Information
query_colored is reentrant and processed without parallelization.
Possible Successors
set_colored, set_color, disp_region
Alternatives
query_color
See also
set_colored, set_color
Module
Foundation
query_line_width returns the minimal (Min) and maximal (Max) region border widths that can be
displayed. The border width is set with set_line_width. It is used by operators like
disp_region, disp_line, disp_circle, disp_rectangle1, disp_rectangle2, etc. if the
drawing mode is ’margin’ (set_draw(WindowHandle,"margin")).
Parameter
. Min (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Displayable minimum width.
. Max (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Displayable maximum width.
Result
query_line_width returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
query_line_width is reentrant and processed without parallelization.
Possible Successors
get_line_width, set_line_width, set_line_style, disp_line
See also
disp_circle, disp_line, disp_rectangle1, disp_rectangle2, disp_region,
set_line_width, get_line_width, set_line_style
Module
Foundation
Parameter
. DisplayShape (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Region display mode names.
Result
query_shape returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
query_shape is reentrant and processed without parallelization.
Possible Successors
get_shape, set_shape, disp_region
See also
set_shape, get_shape, disp_region
Module
Foundation
T_set_color(WindowHandleTuple,Colors) ;
disp_circle(WindowHandle,(double)100.0,(double)200.0,(double)100.0) ;
disp_circle(WindowHandle,(double)200.0,(double)300.0,(double)100.0) ;
disp_circle(WindowHandle,(double)300.0,(double)100.0,(double)100.0) ;
Result
set_color returns H_MSG_TRUE if the window is valid and the passed colors are displayable on the screen.
Otherwise an exception is raised.
Parallelization Information
set_color is reentrant, local, and processed without parallelization.
Possible Predecessors
query_color
Possible Successors
disp_region
Alternatives
set_rgb, set_hsi
See also
get_rgb, disp_region, set_fix, set_paint
Module
Foundation
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
read_image(&Image,"fabrik") ;
threshold(Image,&Seg,100,255) ;
set_system("init_new_image","false") ;
sobel_amp(Image,&Sob,"sum_abs",3) ;
disp_image(Sob,WindowHandle) ;
get_comprise(WindowHandle,Mode) ;
fwrite_string("Current mode for gray values: ") ;
fwrite_string(Mode) ;
fnew_line() ;
set_comprise(WindowHandle,"image") ;
get_mbutton(WindowHandle,_,_,_) ;
disp_image(Sob,WindowHandle) ;
fwrite_string("Current mode for gray values: image") ;
fnew_line() ;
Result
set_comprise returns H_MSG_TRUE if Mode is correct and the window is valid. Otherwise an exception
is raised.
Parallelization Information
set_comprise is reentrant and processed without parallelization.
Possible Predecessors
get_comprise
Possible Successors
disp_image
See also
get_comprise, disp_image, disp_color
Module
Foundation
set_draw defines the region fill mode. If Mode is set to ’fill’, output regions are filled, if set to ’margin’, only
contours are displayed. Setting Mode only affects the valid window. It is used by procedures with region output like
disp_region, disp_circle, disp_rectangle1, disp_rectangle2, disp_arrow etc. It is also
used by procedures with grayvalue output for some grayvalue output modes (e.g. ’histogram’, see set_paint).
If the mode is ’margin’, the contour can be affected with set_line_width, set_line_approx and
set_line_style.
Attention
If the output mode is ’margin’ and the line width is more than one, objects may not be displayed correctly.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Fill mode for region output.
Default Value : "fill"
List of values : Mode ∈ {"fill", "margin"}
Result
set_draw returns H_MSG_TRUE if Mode is correct and the window is valid. Otherwise an exception is
raised.
Parallelization Information
set_draw is reentrant, local, and processed without parallelization.
Possible Predecessors
get_draw
Possible Successors
disp_region
See also
get_draw, disp_region, set_paint, disp_image, set_line_width, set_line_style
Module
Foundation
Parallelization Information
set_fix is reentrant, local, and processed without parallelization.
Possible Predecessors
get_fix
Possible Successors
set_pixel, set_rgb
See also
get_fix, set_pixel, set_rgb, set_color, set_hsi, set_gray
Module
Foundation
Htuple GrayValues ;
create_tuple(&GrayValues,2) ;
set_i(GrayValues,100,0) ;
set_i(GrayValues,200,1) ;
T_set_gray(WindowHandle,GrayValues) ;
disp_circle(WindowHandle,(double)100.0,(double)200.0,(double)100.0) ;
disp_circle(WindowHandle,(double)200.0,(double)300.0,(double)100.0) ;
disp_circle(WindowHandle,(double)300.0,(double)100.0,(double)100.0) ;
Result
set_gray returns H_MSG_TRUE if GrayValues is displayable and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_gray is reentrant, local, and processed without parallelization.
Possible Successors
disp_region
See also
get_pixel, set_color
Module
Foundation
H = (2π · Hue)/255
I = (√6 · Intensity)/255
M1 = (sin(H) · Saturation)/(255 · √6)
M2 = (cos(H) · Saturation)/(255 · √2)
R = (2M1 + I)/(4√6)
G = (−M1 + M2 + I)/(4√6)
B = (−M1 − M2 + I)/(4√6)
Red = R ∗ 255
Green = G ∗ 255
Blue = B ∗ 255
If only one combination is passed, all output takes place in that color. If a tuple of colors is passed, the output
color of regions and geometric objects cycles modulo the number of colors. HALCON always begins output with
the first color passed. Note that the number of output colors depends on the number of objects displayed in one
procedure call. If only single objects are displayed, they always appear in the first color, even if they consist
of more than one connected component.
Selected colors are used until the next call of set_color, set_pixel, set_rgb or set_gray. Colors
are bound to windows, i.e. only the colors of the valid window can be set. Region output colors are used by
operators like disp_region, disp_line, disp_rectangle1, disp_rectangle2, disp_arrow,
etc. They are also used by procedures with grayvalue output in certain output modes (e.g. ’3D-plot’, ’histogram’,
’contourline’, etc.; see set_paint).
Attention
The selected intensities may not be available for the selected hues. In that case, the intensities will be lowered
automatically.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; (Htuple .) Hlong
Window_id.
. Hue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Hue for region output.
Default Value : 30
Typical range of values : 0 ≤ Hue ≤ 255
Restriction : (0 ≤ Hue) ∧ (Hue ≤ 255)
Result
set_icon returns H_MSG_TRUE if exactly one region is passed. Otherwise an exception is raised.
Parallelization Information
set_icon is reentrant and processed without parallelization.
Possible Predecessors
gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, draw_region
Possible Successors
set_shape, disp_region
Module
Foundation
Not all functions may be available, depending on the physical display. However, ’copy’ is always available.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the display function.
Default Value : "copy"
List of values : Mode ∈ {"copy", "xor", "complement"}
Result
set_insert returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_insert is reentrant, local, and processed without parallelization.
Possible Predecessors
query_insert, get_insert
Possible Successors
disp_region
See also
get_insert, query_insert
Module
Foundation
Example
/* Calling */
set_line_approx(WindowHandle,Approximation) ;
set_draw(WindowHandle,"margin") ;
disp_region(Obj,WindowHandle) ;
/* corresponds to */
Htuple Approximation,Row,Col, WindowHandleTuple ;
create_tuple(&Approximation,1) ;
set_i(Approximation,0,0) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle, 0) ;
T_get_region_polygon(Obj,Approximation,&Row,&Col) ;
T_disp_polygon(WindowHandleTuple,Row,Col) ;
Result
set_line_approx returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise
an exception is raised.
Parallelization Information
set_line_approx is reentrant and processed without parallelization.
Possible Predecessors
get_line_approx
Possible Successors
disp_region
Alternatives
get_region_polygon, disp_polygon
See also
get_line_approx, set_line_style, set_draw, disp_region
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Style (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Contour pattern.
Default Value : []
Example
Htuple LineStyle ;
Result
set_line_style returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise
an exception is raised.
Parallelization Information
set_line_style is reentrant, local, and processed without parallelization.
Possible Predecessors
get_line_style
Possible Successors
disp_region
See also
get_line_style, set_line_approx, disp_region
Module
Foundation
Attention
The line width is important if the output mode was set to ’margin’ (see set_draw). If the line width is greater
than one, regions may not always be displayed correctly.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Line width for region output in contour mode.
Default Value : 1
Restriction : (Width ≥ 1) ∧ (Width ≤ 2000)
Result
set_line_width returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise
an exception is raised.
Parallelization Information
set_line_width is reentrant and processed without parallelization.
Possible Predecessors
query_line_width, get_line_width
Possible Successors
disp_region
See also
get_line_width, query_line_width, set_draw, disp_region
Module
Foundation
Gray images can also be interpreted as 3D data, with the grayvalue taken as height. To view such 3D plots,
select one of the modes ’contourline’, ’3D-plot’ or ’3D-plot_hidden’.
Three-channel images are interpreted as RGB images. They can be displayed in three different modes. Two of
them can be optimized by Floyd-Steinberg dithering.
Vector field images can be viewed as ’vector_field’.
All available painting modes can be queried with query_paint.
Parameters for modes that need more than one parameter can be passed in the following ways:
• Only the name of the mode is passed: the defaults or the most recently used values are used, respectively.
Example: set_paint(WindowHandle,’contourline’)
• All values are passed: all output characteristics can be set. Example: set_paint
(WindowHandle,[’contourline’,10,1])
• Only the first n values are passed: only the passed values are changed. Example: set_paint
(WindowHandle,[’contourline’,10])
• Some of the values are replaced by an asterisk (’*’): The value of the replaced parameters is not changed.
Example: set_paint(WindowHandle,[’contourline’,’*’,1])
If the current mode is ’default’, HALCON chooses a suitable algorithm for the output of 2- and 3-channel images.
No set_paint call is necessary in this case.
Apart from set_paint there are other operators that affect the output of grayvalues. The most important of
them are set_part, set_part_style, set_lut and set_lut_style. Some output modes display
grayvalues using region output (e.g. ’histogram’, ’contourline’, ’3D-plot’, etc.). In these modes, parameters set with
set_color, set_rgb, set_hsi, set_pixel, set_shape, set_line_width and set_insert
influence the grayvalue output. This can lead to unexpected results when combining set_shape(’convex’) with
set_paint(WindowHandle,’histogram’): in this case the convex hull of the histogram is displayed.
Modes:
• one-channel images:
’default’ optimal display on given hardware
’gray’ grayvalue output
’mean’ mean grayvalue
’dither4_1’ binary image, dithering matrix 4x4
’dither4_2’ binary image, dithering matrix 4x4
’dither4_3’ binary image, dithering matrix 4x4
’dither8_1’ binary image, dithering matrix 8x8
’floyd_steinberg’ binary image, optimal grayvalue simulation
[’threshold’,Threshold ]
’threshold’ binary image, threshold: 128 (default)
[’threshold’,200 ] binary image, any threshold: (here: 200)
[’histogram’,Line,Column,Scale ]
’histogram’ grayvalue output as histogram.
position default: max. size, in the window center
[’histogram’,256,256,2 ] grayvalue output as histogram, any parameter values.
positioning: window center (here (256,256))
size: (here 2, half the max. size)
[’component_histogram’,Line,Column,Scale ]
’component_histogram’ output as histogram of the connected components.
Positioning: default
[’component_histogram’,256,256,1 ] output as histogram of the connected components.
Positioning: (here (256, 256))
Scaling: (here 1, max. size)
[’row’,Line,Scale ]
’row’ output of the grayvalue profile along the given line.
line: image center (default)
Scaling: 50
[’row’,100,20 ] output of the grayvalue profile of line 100 with a scaling of 0.2 (20 %).
[’column’,Column,Scale ]
’column’ output of the grayvalue profile along the given column.
column: image center (default)
Scaling: 50
[’column’,100,20 ] output of the grayvalue profile of column 100 with a scaling of 0.2 (20 %).
[’contourline’,Step,Colored ]
’contourline’ grayvalue output as contour lines: the grayvalue difference per line is defined with the
parameter ’Step’ (default: 30, i.e. max. 8 lines for 256 grayvalues). The line can be displayed in
a given color (see set_color) or in the grayvalue they represent. This behaviour is defined with the
parameter ’Colored’ (0 = color, 1 = grayvalues). Default is color.
[’contourline’,15,1 ] grayvalue output as contour lines with a step of 15 and gray output.
[’3D-plot’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, RowPos, ColumnPos]
’3D-plot’ grayvalues are interpreted as 3D data: the greater the value, the ’higher’ the assumed moun-
tain. Lines with the step width Step (second parameter value) are drawn along the x- and y-axes. The
third parameter (Colored) determines whether the output should be in color (default) or in grayvalues.
To define the projection of the 3D data, use the parameters EyeHeight and EyeDistance; both take
values from 0 to 255. ScaleGray defines a factor by which the grayvalues are multiplied for the
’height’ interpretation (given in percent; 100 corresponds to a factor of 1.0). With unsuitable values
for EyeHeight and EyeDistance the image can be shifted out of place. Use RowPos and ColumnPos to
move the whole output; values from -127 to 127 are possible.
[’3D-plot’, 5, 1, 110, 160, 150, 70, -10 ] line step: 5 pixel
Colored: yes (1)
EyeHeight: 110
EyeDistance: 160
ScaleGray: 1.5 (150)
RowPos: 70 pixel down
ColumnPos: 10 pixel right
[’3D-plot_hidden’, Step, Colored, EyeHeight, EyeDistance, ScaleGray, RowPos, ColumnPos]
’3D-plot_hidden’ like ’3D-plot’, but computes hidden lines.
• Two-channel images:
’default’ output the first channel.
• Three-channel images:
’default’ output as RGB image with ’median_cut’.
’television’ color addition algorithm for RGB images: (three components necessary for disp_image).
Images are displayed via a fixed color lookup table. Fast, but non-optimal color resolution. Only recom-
mended on bright screens.
’grid_scan’ grid-scan algorithm for RGB images (three components necessary for disp_image). An
optimized color lookup table is generated for each image. Slower than ’television’. Disadvantages:
Hard color boundaries (no dithering). Different color lookup table for every image.
’grid_scan_floyd_steinberg’ grid-scan with Floyd-Steinberg dithering for smooth color boundaries.
’median_cut’ median-cut algorithm for RGB images (three components necessary for disp_image).
Similar to grid-scan. Disadvantages: Hard color boundaries (no dithering). Different color lookup table
for every image.
’median_cut_floyd_steinberg’ median-cut algorithm with Floyd-Steinberg dithering for smooth color
boundaries.
• Vector field images:
[’vector_field’, Step, MinLength, ScaleLength ]
’vector_field’ Output a vector field. In this mode, a circle is drawn for each vector at the position of
the pixel. Furthermore, a line segment is drawn with the current vector. The step size for drawing
the vectors, i.e., the distance between the drawn vectors, can be set with the parameter Step. Short
vectors can be suppressed with the third parameter value (MinLength). The fourth parameter value
scales the vector length. It should be noted that setting ’vector_field’ only changes the internal
parameters Step, MinLength, and ScaleLength. The current display mode is not changed.
Vector field images are always displayed as a vector field, no matter which mode is selected with
set_paint.
[’vector_field’,16,2,3 ] Output of every 16th vector that is longer than 2 pixels. Each vector is scaled
by 3 for output.
Attention
• Display of color images (’television’, ’grid_scan’, etc.) changes the color lookup tables.
• If a wrong color mode is set, the error message may not appear until the disp_image call.
• Grayvalue output may be influenced by region output parameters. This can yield unexpected results.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Htuple . Hlong
Window_id.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char * / Hlong
Output mode. Additional parameters possible.
Default Value : "default"
List of values : Mode ∈ {"default", "histogram", "row", "column", "contourline", "3D-plot",
"3D-plot_hidden", "3D-plot_point", "vector_field"}
Example
read_image(&Image,"fabrik") ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
T_query_paint(WindowHandleTuple,&Modi) ;
T_fwrite_string(Modi) ;
fnew_line() ;
disp_image(Image,WindowHandle) ;
get_mbutton(WindowHandle,_,_,_) ;
set_s(HilfsTuple1,"red",0) ;
set_i(WindowHandleTuple,WindowHandle,0);
T_set_color(WindowHandleTuple,HilfsTuple1) ;
set_draw(WindowHandle,"margin") ;
set_s(HilfsTuple1,"histogram",0) ;
T_set_paint(WindowHandleTuple,HilfsTuple1) ;
disp_image(Image,WindowHandle) ;
set_s(HilfsTuple1,"blue",0) ;
T_set_color(WindowHandleTuple,HilfsTuple1) ;
set_s(HilfsTuple3,"histogram",0) ;
set_s(HilfsTuple3,100,1) ;
set_s(HilfsTuple3,100,2) ;
T_set_paint(WindowHandleTuple,HilfsTuple3) ;
disp_image(Image,WindowHandle) ;
set_s(HilfsTuple1,"yellow",0) ;
T_set_color(WindowHandleTuple,HilfsTuple1) ;
set_s(HilfsTuple2,"line",0) ;
set_s(HilfsTuple2,100,1) ;
T_set_paint(WindowHandleTuple,HilfsTuple3) ;
disp_image(Image,WindowHandle) ;
get_mbutton(WindowHandle,_,_,_) ;
clear_window(WindowHandle) ;
set_s(HilfsTuple3,"contourline",0) ;
set_s(HilfsTuple3,10,1) ;
set_s(HilfsTuple3,1,2) ;
T_set_paint(WindowHandleTuple,HilfsTuple3) ;
disp_image(Image,WindowHandle) ;
set_lut(WindowHandle,"color") ;
get_mbutton(WindowHandle,_,_,_) ;
clear_window(WindowHandle) ;
set_part(WindowHandle,100,100,300,300) ;
set_s(HilfsTuple1,"3D-plot",0) ;
T_set_paint(WindowHandleTuple,HilfsTuple1) ;
disp_image(Image,WindowHandle) ;
Result
set_paint returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_paint is reentrant, local, and processed without parallelization.
Possible Predecessors
query_paint, get_paint
Possible Successors
disp_image
See also
get_paint, query_paint, disp_image, set_shape, set_rgb, set_color, set_gray
Module
Foundation
Row1 = Column1 = Row2 = Column2 = -1: The window size is chosen as the image part, i.e., no zooming of
the image is performed.
Row1, Column1 > -1 and Row2 = Column2 = -1: The size of the last displayed image (in this window) is
chosen as the image part, i.e., the image can be displayed completely in the window. For this the image
is zoomed if necessary.
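The -1 conventions above can be summarized in a small sketch. This is plain C, independent of HALCON; the struct and helper function are made up for illustration only:

```c
#include <stddef.h>

/* A rectangular image part, as used by set_part-style coordinates. */
typedef struct { long r1, c1, r2, c2; } Part;

/* Resolve set_part-style coordinates with the -1 conventions:
   all four -1 -> use the window size (no zooming);
   Row2 = Column2 = -1 -> use the size of the last displayed image. */
Part resolve_part(long row1, long col1, long row2, long col2,
                  long win_h, long win_w, long last_h, long last_w)
{
    Part p;
    if (row1 == -1 && col1 == -1 && row2 == -1 && col2 == -1) {
        /* window size is taken as the image part: no zooming */
        p.r1 = 0; p.c1 = 0; p.r2 = win_h - 1; p.c2 = win_w - 1;
    } else if (row2 == -1 && col2 == -1) {
        /* last displayed image fits completely into the window */
        p.r1 = row1; p.c1 = col1;
        p.r2 = row1 + last_h - 1; p.c2 = col1 + last_w - 1;
    } else {
        p.r1 = row1; p.c1 = col1; p.r2 = row2; p.c2 = col2;
    }
    return p;
}
```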
Parameter
get_system("width",Width) ;
get_system("height",Height) ;
set_part(WindowHandle,0,0,Height-1,Width-1) ;
disp_image(Image,WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(WindowHandle,Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
Result
set_part returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
set_part is reentrant and processed without parallelization.
Possible Predecessors
get_part
Possible Successors
set_part_style, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part, set_part_style, disp_region, disp_image, disp_color
Module
Foundation
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Style (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Interpolation method for image output: 0 (fast, low quality) to 2 (slow, high quality).
Default Value : 0
List of values : Style ∈ {0, 1, 2}
Result
set_part_style returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an
exception is raised.
Parallelization Information
set_part_style is reentrant and processed without parallelization.
Possible Predecessors
get_part_style
Possible Successors
set_part, disp_image, disp_region
Alternatives
affine_trans_image
See also
get_part_style, set_part, disp_image, disp_color
Module
Foundation
See also
get_pixel, set_lut, disp_region, disp_image, disp_color
Module
Foundation
’original’: The shape is displayed unchanged. Nevertheless modifications via parameters like set_line_width or
set_line_approx can take place. This is also true for all other modes.
’outer_circle’: Each region is displayed by the smallest surrounding circle. (See smallest_circle.)
’inner_circle’: Each region is displayed by the largest included circle. (See inner_circle.)
’ellipse’: Each region is displayed by an ellipse with the same moments and orientation (See elliptic_axis.)
’rectangle1’: Each region is displayed by the smallest surrounding rectangle parallel to the coordinate axes. (See
smallest_rectangle1.)
’rectangle2’: Each region is displayed by the smallest surrounding rectangle. (See smallest_rectangle2.)
’convex’: Each region is displayed by its convex hull (See convexity.)
’icon’: Each region is displayed by the icon set with set_icon, placed at its center of gravity.
Attention
Caution is advised for gray value output procedures whose parameter settings use region output,
e.g., disp_image with set_paint(WindowHandle,’histogram’) and set_shape
(WindowHandle,’convex’). In that case the convex hull of the gray value histogram is displayed.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window_id.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Region output mode.
Default Value : "original"
List of values : Shape ∈ {"original", "convex", "outer_circle", "inner_circle", "rectangle1", "rectangle2",
"ellipse", "icon"}
Example
read_image(&Image,"fabrik");
regiongrowing(Image,&Seg,5,5,6.0,100);
set_colored(WindowHandle,12);
set_shape(WindowHandle,"rectangle2");
disp_region(Seg,WindowHandle);
Result
set_shape returns H_MSG_TRUE if the parameter is correct and the window is valid. Otherwise an exception
is raised.
Parallelization Information
set_shape is reentrant and processed without parallelization.
Possible Predecessors
set_icon, query_shape, get_shape
Possible Successors
disp_region
See also
get_shape, query_shape, disp_region
Module
Foundation
4.7 Text
get_font ( Hlong WindowHandle, char *Font )
T_get_font ( const Htuple WindowHandle, Htuple *Font )
get_font(WindowHandle,&CurrentFont) ;
set_font(WindowHandle,MyFont) ;
create_tuple(&String,1) ;
sprintf(buf,"The name of my font is: %s ",MyFont) ;
set_s(String,buf,0) ;
T_write_string(WindowHandleTuple,String) ;
new_line(WindowHandle) ;
set_font(WindowHandle,CurrentFont) ;
Result
get_font returns H_MSG_TRUE.
Parallelization Information
get_font is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, query_font
Possible Successors
set_font
See also
set_font, query_font, open_window, open_textwindow, set_system
Module
Foundation
Parameter
Parallelization Information
get_tposition is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font
Possible Successors
set_tposition, write_string, read_string, read_char
See also
new_line, read_string, set_tposition, write_string, set_check
Module
Foundation
Set the position of the text cursor to the beginning of the next line.
new_line sets the position of the text cursor to the beginning of the next line. The new position depends on the
current font. The left end of the baseline for writing the following text string (not considering descenders) is placed
on this position.
If the next line does not fit into the window, the content of the window is scrolled upward by the height of one
line. In order to reach the correct new cursor position, the font used in the next line must be set before
new_line is called. The position is changed by the output or input of text ( write_string, read_string)
or by an explicit change of position ( set_tposition).
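The cursor bookkeeping described above can be sketched as follows. This is a simplified model, not HALCON's implementation; the struct and helper are assumptions for illustration:

```c
/* Text cursor position in window coordinates. */
typedef struct { int row, col; } Cursor;

/* Sketch of new_line: the cursor moves to the left margin one font
   height lower; if the new baseline would leave the window, the
   content is scrolled upward instead and the cursor row stays put.
   Returns the amount of scrolling (0 if none was needed). */
int next_line(Cursor *c, int font_height, int window_height)
{
    c->col = 0;
    if (c->row + font_height > window_height - 1)
        return font_height;   /* scroll content up by one line */
    c->row += font_height;
    return 0;
}
```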
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
Result
new_line returns H_MSG_TRUE if the window is valid. Otherwise an exception is raised.
Parallelization Information
new_line is reentrant and processed without parallelization.
Possible Predecessors
open_window, open_textwindow, set_font, write_string
Alternatives
get_tposition, get_string_extents, set_tposition, move_rectangle
See also
write_string, set_font
Module
Foundation
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
set_check("~text") ;
create_tuple(&Fontlist,1) ;
create_tuple(&String,1) ;
create_tuple(&WindowHandleTuple,1) ;
set_i(WindowHandleTuple,WindowHandle,0) ;
T_query_font(WindowHandleTuple,&Fontlist) ;
set_color(WindowHandle,"white") ;
for(i=0; i<length_tuple(Fontlist); i++)
{
charstring = get_s(Fontlist,i) ;
set_font(WindowHandle,charstring) ;
set_s(String,charstring,0) ;
T_write_string(WindowHandleTuple,String) ;
new_line(WindowHandle) ;
}
Result
query_font returns H_MSG_TRUE.
Parallelization Information
query_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
set_font, write_string, read_string, read_char
See also
set_font, write_string, read_string, read_char, new_line
Module
Foundation
Attention
The window has to be a text window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Char (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Input character (if it is not a control character).
. Code (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Code for input character.
Result
read_char returns H_MSG_TRUE if the text window is valid. Otherwise an exception is raised.
Parallelization Information
read_char is reentrant, local, and processed without parallelization.
Possible Predecessors
open_textwindow, set_font
Alternatives
read_string, fread_char, fread_string
See also
write_string, set_font
Module
Foundation
Attention
The window has to be a text window.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. InString (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Default string (visible before input).
Default Value : ""
. Length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Maximum number of characters.
Default Value : 32
Restriction : Length > 0
. OutString (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Read string.
Result
read_string returns H_MSG_TRUE if the text window is valid and a string of the maximum length fits within
the right window boundary. Otherwise an exception is raised.
Parallelization Information
read_string is reentrant, local, and processed without parallelization.
Possible Predecessors
open_textwindow, set_font
Alternatives
read_char, fread_string, fread_char
See also
set_tposition, new_line, open_textwindow, set_font, set_color
Module
Foundation
-FontName-Height-Width-Italic-Underlined-Strikeout-Bold-CharSet-
where “Italic”, “Underlined”, “Strikeout” and “Bold” can take the values 1 and 0 to activate or deactivate
the corresponding feature. “CharSet” can be used to select the character set if it differs
from the default one. You can use the names of the defines (ANSI_CHARSET, BALTIC_CHARSET,
CHINESEBIG5_CHARSET, DEFAULT_CHARSET, EASTEUROPE_CHARSET, GB2312_CHARSET,
GREEK_CHARSET, HANGUL_CHARSET, MAC_CHARSET, OEM_CHARSET, RUSSIAN_CHARSET,
SHIFTJIS_CHARSET, SYMBOL_CHARSET, JOHAB_CHARSET, HEBREW_CHARSET,
ARABIC_CHARSET) or the integer value.
All parameters besides “FontName” and “Height” are optional; however, it is only possible to omit parameters
from the end of the string. The string must begin and end with a minus sign. To use the default setting, a * can be
used for the corresponding feature. Examples:
• -Arial-10-*-1-*-*-1-ANSI_CHARSET-
• -Arial-10-*-1-*-*-1-
• -Arial-10-
Please refer to the Windows documentation (Fonts and Text in the MSDN) for a detailed discussion.
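Assembling such a descriptor with sprintf helps to avoid omitting the required leading and trailing minus signs. A minimal sketch; the concrete field values below are made up for illustration:

```c
#include <stdio.h>

/* Build a Windows font descriptor of the form described above.
   Optional fields are filled with '*' to keep the defaults; the
   string must start and end with '-'. */
void build_font_descriptor(char *out, const char *name, int height,
                           int italic, int bold, const char *charset)
{
    sprintf(out, "-%s-%d-*-%d-*-*-%d-%s-",
            name, height, italic, bold, charset);
}
```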
In UNIX environments the font is specified by a string with the following components:
-FOUNDRY-FAMILY_NAME-WEIGHT_NAME-SLANT-SETWIDTH_NAME-ADD_STYLE_NAME-PIXEL_SIZE
-POINT_SIZE-RESOLUTION_X-RESOLUTION_Y-SPACING-AVERAGE_WIDTH-CHARSET_REGISTRY
-CHARSET_ENCODING,
where FOUNDRY identifies the organisation that supplied the font. The actual name of the font is given in
FAMILY_NAME (e.g., ’courier’). WEIGHT_NAME describes the typographic weight of the font in human
readable form (e.g., ’medium’, ’semibold’, ’demibold’, or ’bold’). SLANT is one of the following codes:
• r for Roman
• i for Italic
• o for Oblique
• ri for Reverse Italic
• ro for Reverse Oblique
• ot for Other
SETWIDTH_NAME describes the proportionate width of the font (e.g., ’normal’). ADD_STYLE_NAME
identifies additional typographic style information (e.g., ’serif’ or ’sans serif’) and is empty in most cases.
PIXEL_SIZE is the height of the font on the screen in pixels, while POINT_SIZE is the print size the font
was designed for. RESOLUTION_Y and RESOLUTION_X contain the vertical and horizontal resolution of the
font. SPACING may be one of the following three codes:
• p for Proportional,
• m for Monospaced, or
• c for CharCell.
AVERAGE_WIDTH is the mean width of the characters in the font. The character set encoded in the font
is described in CHARSET_REGISTRY and CHARSET_ENCODING (e.g., ISO8859-1).
An example of a valid string for Font would be
’-adobe-courier-medium-r-normal--12-120-75-75-m-70-iso8859-1’,
which is a 12-pixel, medium-weight courier font. As on Windows systems, not all fields have to be specified; a *
can be used instead:
’-adobe-courier-medium-r-*--12-*-*-*-*-*-*-*’.
Please refer to "X Logical Font Description Conventions" for detailed information on individual parameters.
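A quick sanity check for such an XLFD descriptor is to count its minus signs: a complete name consists of 14 fields, so the leading minus plus the 13 separators (the dash inside "iso8859-1" separates registry and encoding) gives 14 in total. A small sketch, independent of HALCON:

```c
/* Count the '-' characters of an XLFD font name. A complete name
   has 14 fields, hence exactly 14 minus signs: one leading '-'
   plus 13 field separators. */
int xlfd_dash_count(const char *s)
{
    int dashes = 0;
    for (; *s; ++s)
        if (*s == '-')
            ++dashes;
    return dashes;
}
```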
Attention
The available fonts may differ considerably between machines. Therefore it is suggested to use wildcards, font
tables, and/or the operator query_font.
Parameter
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; Hlong
Window identifier.
. Font (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of new font.
Result
set_font returns H_MSG_TRUE if the font name is correct. Otherwise an exception is raised.
Parallelization Information
set_font is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
query_font
See also
get_font, query_font, open_textwindow, open_window
Module
Foundation
See also
set_tposition, get_string_extents, open_textwindow, set_font, set_system,
set_check
Module
Foundation
4.8 Window
clear_rectangle ( Hlong WindowHandle, Hlong Row1, Hlong Column1,
Hlong Row2, Hlong Column2 )
Result
If an output window exists and the specified parameters are correct, clear_rectangle returns H_MSG_TRUE.
Otherwise an exception is raised.
Parallelization Information
clear_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, set_rgb, set_hsi,
draw_rectangle1
Alternatives
clear_window, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation
clear_window(WindowHandle) ;
Result
If the output window is valid, clear_window returns H_MSG_TRUE. Otherwise an exception is
raised.
Parallelization Information
clear_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
clear_rectangle, disp_rectangle1
See also
open_window, open_textwindow
Module
Foundation
Parameter
read_image(&Image,"affe") ;
open_window(0,0,-1,-1,"root","buffer","",&WindowHandle) ;
disp_image(Image,WindowHandle) ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandleDestination) ;
do{
get_mbutton(WindowHandleDestination,&Row,&Column,&Button) ;
copy_rectangle(WindowHandle,WindowHandleDestination,90,120,390,390,
Row,Column) ;
}
while(Button > 1) ;
close_window(WindowHandleDestination) ;
close_window(WindowHandle) ;
clear_obj(Image) ;
Result
If the output window is valid and the specified parameters are correct, copy_rectangle returns
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
copy_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Possible Successors
close_window
Alternatives
move_rectangle, slide_image
See also
open_window, open_textwindow
Module
Foundation
Attention
Under UNIX, the graphics window must be completely visible on the root window, because otherwise the contents
of the window cannot be read due to limitations in X Windows. If larger graphical displays are to be written to a
file, the window type ’pixmap’ can be used.
Parameter
Result
If the appropriate window is valid and the specified parameters are correct, dump_window returns
H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
dump_window is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow,
disp_region
Possible Successors
system_call
See also
open_window, open_textwindow, set_system, dump_window_image
Module
Foundation
Result
If the window is valid, dump_window_image returns H_MSG_TRUE. Otherwise an exception is
raised.
Parallelization Information
dump_window_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow,
disp_region
See also
open_window, open_textwindow, set_system, dump_window
Module
Foundation
/* Draw a line into a HALCON window under UNIX using X11 calls. */
#include "HalconC.h"
#include <X11/X.h>
#include <X11/Xlib.h>
/* Draw a line into a HALCON window under Windows using GDI calls. */
#include "HalconC.h"
#include "windows.h"
Result
If the window is valid, get_os_window_handle returns H_MSG_TRUE. Otherwise an exception
is raised.
Parallelization Information
get_os_window_handle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Module
Foundation
Example
open_window(100,100,200,200,"root","visible","",&WindowHandle) ;
fwrite_string("Move the window with the mouse!") ;
fnew_line() ;
create_tuple(&String,1) ;
do
{
get_mbutton(WindowHandle,&Row,&Column,&Button) ;
get_window_extents(WindowHandle,&Row,&Column,&Width,&Height) ;
sprintf(buf,"Row %d Col %d ",Row,Column) ;
set_s(String,buf,0) ;
T_fwrite_string(String) ;
fnew_line() ;
}
while(Button < 4) ;
Result
If the window is valid, get_window_extents returns H_MSG_TRUE. Otherwise an exception is
raised.
Parallelization Information
get_window_extents is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
set_window_extents, open_window, open_textwindow
Module
Foundation
Result
If a window of type ’pixmap’ exists and is valid, get_window_pointer3 returns H_MSG_TRUE. Otherwise
an exception is raised.
Parallelization Information
get_window_pointer3 is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
open_window, set_window_type
Module
Foundation
Parameter
open_window(100,100,200,200,"root","visible","",&WindowHandle) ;
get_window_type(WindowHandle,&WindowType) ;
fwrite_string("Window type:") ;
sprintf(buf,"%d",WindowType) ;
fwrite_string(buf) ;
fnew_line() ;
Result
If the window is valid, get_window_type returns H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
get_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
query_window_type, set_window_type, get_window_pointer3, open_window,
open_textwindow
Module
Foundation
Result
If the window is valid and the specified parameters are correct, move_rectangle returns H_MSG_TRUE.
Otherwise an exception is raised.
Parallelization Information
move_rectangle is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
copy_rectangle
See also
open_window, open_textwindow
Module
Foundation
HALCON:
set_color(WindowHandle,"green");
disp_region(region,WindowHandle);
Windows NT:
HPEN penOld;
HPEN penGreen = CreatePen(PS_SOLID,1,RGB(0,255,0));
penOld = (HPEN)SelectObject(WINHDC,penGreen);
disp_region(region,WindowHandle);
Interactive operators, for example draw_region, draw_circle or get_mbutton cannot be used in this
window. The following operators can be used:
• Output of gray values: set_paint, set_comprise, ( set_lut and set_lut_style after output)
You may query the currently set values by calling procedures like get_shape. As some parameters are
determined by the hardware (resolution/colors), you may query the currently available resources by calling
operators like query_color.
The parameter WINHWnd is used to pass the window handle of the Windows NT window, in which output should
be done. The parameter WINHDC is used to pass the device context of the window WINHWnd. This device context
is used in the output routines of HALCON.
The origin of the coordinate system of the window resides in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: Height-1), the column index grows to the right (maximum: Width-1).
You may use the value -1 for the parameters Width and Height. This means that the corresponding value is
chosen automatically. In particular, this is important if the aspect ratio of the pixels is not 1.0 (see set_system).
If one of the two parameters is set to -1, it is derived from the other one via the aspect ratio of the
pixels. If both parameters are set to -1, they are set to the current image format.
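The -1 convention for the window size can be sketched as follows. This is a simplified model under stated assumptions (the helper, the rounding, and the aspect-ratio definition as pixel width divided by pixel height are not taken from the manual):

```c
/* Resolve -1 width/height values: a missing dimension is derived
   from the other one via the pixel aspect ratio; if both are -1,
   the current image format is used. */
void resolve_window_size(int *width, int *height,
                         double aspect,   /* pixel width / height */
                         int img_w, int img_h)
{
    if (*width == -1 && *height == -1) {
        *width  = img_w;               /* both missing: image format */
        *height = img_h;
    } else if (*width == -1) {
        *width  = (int)(*height * aspect + 0.5);
    } else if (*height == -1) {
        *height = (int)(*width / aspect + 0.5);
    }
}
```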
The position and size of a window may change during the runtime of a program. This may be achieved by calling
set_window_extents, but it can also happen through external influences (window manager). For the latter
case the operator get_window_extents is provided to query the current values.
Opening a window causes the assignment of a default font. It is used in connection with procedures
like write_string and you may change it by performing set_font after calling open_window.
On the other hand, you have the possibility to specify a default font by calling set_system
(’default_font’,<Fontname>) before opening a window (and all following windows; see also
query_font).
You may set the color of graphics and font used by output procedures like disp_region or
disp_circle by calling set_rgb, set_hsi, set_gray or set_pixel. Calling set_insert
specifies how graphics is combined with the content of the image repeat memory. Thus, by calling, e.g.,
set_insert(::’not’:), you can erase text by writing it a second time at the same position.
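The erase-by-redrawing behavior of set_insert(::’not’:) corresponds to XOR-style drawing: applying the same pixels twice restores the original buffer content. A self-contained sketch of the principle (not HALCON's implementation):

```c
/* XOR drawing: combining the same mask with the buffer twice
   restores the original content, which is why text written twice
   at the same position disappears again. */
void xor_draw(unsigned char *buf, const unsigned char *mask, int n)
{
    int i;
    for (i = 0; i < n; ++i)
        buf[i] ^= mask[i];
}
```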
The content of the window is not saved, if other windows overlap the window. This must be done in the program
code that handles the Windows NT window in the calling program.
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling
set_part in order to display a logical clipping of the image format. In particular, this implies that only this
part (appropriately scaled) of images and regions is displayed. Before you close the external window, you have
to close the HALCON window.
Steps to use new_extern_window:
Attention
Note that parameters such as Row, Column, Width and Height are constrained by the output device, i.e., the
size of the Windows NT desktop.
Parameter
HTuple m_tHalconWindow ;
Hobject m_objImage ;
WM_CREATE:
/* here you should create your extern halcon window*/
HTuple tWnd, tDC ;
::set_check("~father") ;
tWnd = (INT)((INT*)&m_hWnd) ;
tDC = (INT)(INT*)GetWindowDC() ;
::new_extern_window(tWnd, tDC, 0, 0, sizeTotal.cx, sizeTotal.cy, &m_tHalconWindow) ;
::set_check("father") ;
WM_PAINT:
/* here you can draw halcon objects */
long l = 0 ;
if (m_tHalconWindow != -1) {
/* don't forget to set the dc !! */
HTuple tDC((INT)(INT*)&pDC->m_hDC) ;
HTuple tDCNull((INT)(INT*)&l) ;
::set_window_dc(m_tHalconWindow,tDC) ;
::disp_obj(pDoc->m_objImage, m_tHalconWindow) ;
/* release the graphic objects */
::set_window_dc(m_tHalconWindow, tDCNull) ;
}
WM_CLOSE:
/* close the halcon window */
if (m_tHalconWindow != -1) {
::close_window(m_tHalconWindow) ;
}
Result
If the values of the specified parameters are correct new_extern_window returns H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
new_extern_window is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window, open_textwindow
See also
open_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation
<Host>:0.0.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to de-
termine the father window for the window to be opened. In case the control ’father’ is set via set_check,
FatherWindow relates to the ID of a HALCON window, otherwise ( set_check(’~father’)) it relates to the
ID of an operating system window. If FatherWindow is passed the value 0 or ’root’, then under Windows and
Unix the desktop and the root window become the father window, respectively. In this case, the value of the control
’father’ (set via set_check) is irrelevant.
The position and size of a window may change during the runtime of a program. This may be achieved by calling
set_window_extents, but it can also happen through external influences (window manager). For the latter
case the operator get_window_extents is provided to query the current values.
Opening a window causes the assignment of a so-called default font. It is used in connection with
procedures like write_string, and you may overwrite it by performing set_font after calling
open_textwindow. On the other hand, you have the possibility to specify a default font by calling
set_system(’default_font’,<Fontname>) before opening a window (and all following windows; see
also query_font).
You may set the color of the font ( write_string, read_string) by calling set_color, set_rgb,
set_hsi, set_gray or set_pixel. Calling set_insert specifies how the text or the graphics,
respectively, is combined with the content of the image repeat memory. Thus, by calling, e.g.,
set_insert(::’not’:), you can erase text by writing it a second time at the same position.
Normally every output (e.g., write_string, disp_region, disp_circle, etc.) in a window is
terminated by a "flush". This causes the data to be fully visible on the display after termination of the output
procedure. But this is not necessary in all cases, in particular if there are permanent output tasks or a mouse
procedure is active. In these cases it is more favorable (i.e., faster) to buffer the data until enough is available.
You may switch off the flushing by calling set_system(’flush_graphic’,’false’).
The content of windows is saved (in case this is supported by special driver software); i.e., it is preserved even
if the window is hidden by other windows. But this is not necessary in all cases: if you use a textual window,
e.g., only as a parent window for other windows, you may suppress this backup mechanism for it and save the
associated memory. You achieve this by calling set_system(’backing_store’,’false’) before
opening the window.
Difference: graphical window - textual window
• In contrast to graphical windows ( open_window) you may specify more parameters (color, edge) for a
textual window while opening it.
• Only textual windows can be used for the input of user data ( read_string).
• Using textual windows, the output of images, regions and graphics is "clipped" at the edges, whereas
with graphical windows the content is "zoomed" to fit the window.
• The coordinate system (e.g., with get_mbutton or get_mposition) consists of display coordinates
independent of the image size. The maximum coordinates are equal to the size of the window minus 1. In
contrast to this, graphical windows ( open_window) always use a coordinate system which corresponds to
the image format.
The parameter Mode specifies the mode of the window. It can have following values:
’visible’: Normal mode for textual windows: The window is created according to the parameters and all inputs
and outputs are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like Row, Column, BorderWidth,
BorderColor, BackgroundColor and FatherWindow do not have any meaning. Output to these
windows has no effect. Input ( read_string, mouse, etc.) is not possible. You may use these windows
to query representation parameters for an output device without opening a (visible) window. General queries
are, e.g., query_color and get_string_extents.
’transparent’: These windows are transparent: the window itself is not visible (edge and background), but
all the other operations are possible and all output is displayed. Parameters like BorderColor and
BackgroundColor do not have any meaning. A common use for this mode is the creation of mouse
sensitive regions.
’buffer’: These are also invisible windows. The output of images, regions and graphics is not visible on
the display, but is stored in memory. Parameters like Row, Column, BorderWidth, BorderColor,
BackgroundColor and FatherWindow do not have any meaning. You may use buffer windows if you
prepare output (in the background) and finally copy it with copy_rectangle into a visible window.
Another usage might be the rapid processing of image regions during interactive manipulations. Textual
input and mouse interaction are not possible in this mode.
Attention
You have to keep in mind that parameters like Row, Column, Width and Height are restricted by the output
device. Is a father window (FatherWindow <> ’root’) specified, then the coordinates are relative to this window.
Parameter
open_textwindow(0,0,900,600,1,"black","slate blue","root","visible",
"",&WindowHandle1) ;
open_textwindow(10,10,300,580,3,"red","blue",WindowHandle1,"visible",
"",&WindowHandle2) ;
open_window(10,320,570,580,WindowHandle1,"visible","",&WindowHandle) ;
set_color(WindowHandle,"red") ;
read_image(&Image,"affe") ;
disp_image(Image,WindowHandle) ;
create_tuple(&String,1) ;
do {
get_mposition(WindowHandle,&Row,&Column,&Button) ;
get_grayval(Image,Row,Column,1,&Gray) ;
sprintf(buf,"Position( %d,%d ) ",Row,Column) ;
set_s(String,buf,0) ;
T_fwrite_string(String) ;
new_line(WindowHandle) ;
}
while(Button < 4) ;
close_window(WindowHandle) ;
clear_obj(Image) ;
Result
If the values of the specified parameters are correct, open_textwindow returns H_MSG_TRUE. Otherwise
an exception is raised.
Parallelization Information
open_textwindow is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_window
See also
write_string, read_string, new_line, get_string_extents, get_tposition,
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Module
Foundation
open_window opens a new window, which can be used to perform output of gray value data, regions, graphics as
well as to perform textual output. All output ( disp_region, disp_image, etc.) is redirected to this window,
if the same logical window number WindowHandle is used.
The background of the created window is set to black in advance and it has a white border, which is 2 pixels wide
(see also set_window_attr(’border_width’,<Width>)).
Certain parameters used for the editing of output data are assigned to a window. These parameters are considered
during the output itself (e.g., with disp_image or disp_region). They are not specified by an output
procedure, but by "configuration procedures". If you want to set, e.g., the color red for the output of regions, you
have to call set_color(WindowHandle,’red’) before calling disp_region. These parameters are
always set for the window with the logical window number WindowHandle and remain assigned to the window
until they are overwritten. You may use the following configuration procedures:
• Output of gray values: set_paint, set_comprise, ( set_lut and set_lut_style after output)
• Regions: set_color, set_rgb, set_hsi, set_gray, set_pixel, set_shape,
set_line_width, set_insert, set_line_style, set_draw
• Image clipping: set_part
• Text: set_font
You may query the currently set values by calling procedures like get_shape. As some parameters are determined
by the hardware (resolution/colors), you may query the currently available resources by calling query_color.
The origin of the coordinate system of the window is in the upper left corner (coordinates: (0,0)). The row
index grows downward (maximum: Height-1), the column index grows to the right (maximum: Width-1). Keep
in mind that the range of the coordinate system is independent of the window size; it is determined only by the
image format (see reset_obj_db).
The parameter Machine specifies the name of the computer on which the window is to be opened. For an X
window, TCP/IP requires only the name; DECnet additionally requires a colon after the name. The ’server’ and
the ’screen’ are not specified. If the empty string is passed, the environment variable DISPLAY is used; it
specifies the target computer in the usual syntax <Host>:0.0.
For windows of type ’X-Window’ and ’WIN32-Window’ the parameter FatherWindow can be used to
determine the father window of the window to be opened. If the control ’father’ is set via set_check,
FatherWindow refers to the ID of a HALCON window; otherwise ( set_check(’~father’)) it refers to
the ID of an operating system window. If FatherWindow is passed the value 0 or ’root’, the desktop (under
Windows) or the root window (under Unix) becomes the father window. In this case the value of the control
’father’ (set via set_check) is irrelevant.
You may pass the value -1 for the parameters Width and Height. This means that the corresponding value
is determined automatically. This is particularly important if the pixel aspect ratio is not 1.0 (see
set_system): if one of the two parameters is set to -1, it is derived from the other according to the pixel
aspect ratio. If both parameters are set to -1, they are set to the maximum image format currently in use
(further information about the currently used maximum image format can be found in the description of
get_system using ’width’ or ’height’).
The position and size of a window may change during the runtime of a program, either by calling
set_window_extents or through external influences (the window manager). For the latter case the
procedure get_window_extents is provided.
When a window is opened, a so-called default font is assigned to it. It is used in connection with
procedures like write_string, and you may override it by performing set_font after calling
open_window. Alternatively, you can specify a default font by calling set_system
(’default_font’,<Fontname>) before opening the window (and all windows opened afterwards; see also
query_font).
You may set the color of graphics and text used by output procedures like disp_region or
disp_circle by calling set_rgb, set_hsi, set_gray or set_pixel. Calling set_insert
specifies how the graphics output is combined with the current content of the image display memory. For example,
by calling set_insert(’not’) you can erase text by writing it a second time at the same position.
Normally every output (e.g., disp_image, disp_region, disp_circle, etc.) in a window is completed
by a so-called ’flush’, which ensures that the data is fully visible on the display when the output procedure returns.
This is not necessary in all cases, in particular for continuous output tasks or while a mouse procedure is active.
In these situations it is more efficient (i.e., faster) to buffer the data until enough of it has accumulated.
You may switch off this flushing by calling set_system(’flush_graphic’,’false’).
The content of windows is saved (if supported by the driver software); i.e., it is preserved even if the
window is hidden by other windows. This is not necessary in all cases: if the content of a window is permanently
rebuilt ( copy_rectangle), you may suppress this backing mechanism and thus save the required memory.
This is done by calling set_system(’backing_store’,’false’) before opening the window. In doing
so you save not only memory but also computation time, which is significant for the output of video clips (see
copy_rectangle).
For graphical output ( disp_image, disp_region, etc.) you may adjust the window by calling the procedure
set_part in order to display a logical clipping of the image format. In this case only this clipping of
images and regions is displayed (enlarged appropriately).
Differences between graphical and textual windows
• For graphical windows the layout is less flexible than for textual windows.
• Only textual windows can be used for the input of user data ( read_string).
• When images, regions, and graphics are output to a graphical window, a ’zooming’ is performed:
independent of the size and aspect ratio of the window, images are transformed so that they fill the
window completely. In contrast, output to a textual window does not take the size of the window into
account (except for clipping, if necessary).
• For graphical windows the coordinate system of the window corresponds to the coordinate system of
the image format. For textual windows the coordinate system is always identical to the display coordinates,
independent of the image size.
The parameter Mode determines the mode of the window. It may have the following values:
’visible’: Normal mode for graphical windows: the window is created according to the parameters, and all input
and output operations are possible.
’invisible’: Invisible windows are not shown on the display. Parameters like Row, Column and
FatherWindow have no meaning. Output to these windows has no effect, and input ( read_string,
mouse, etc.) is not possible. You may use these windows to query representation parameters for an
output device without opening a (visible) window. Common queries are, e.g., query_color and
get_string_extents.
’transparent’: These windows are transparent: the window itself (border and background) is not visible, but all
other operations are possible and all output is displayed. A common use for this mode is the creation of
mouse-sensitive regions.
’buffer’: These windows are also invisible. Output of images, regions and graphics is not shown on the
display but is stored in memory. Parameters like Row, Column and FatherWindow have no
meaning. You may use buffer windows to prepare output in the background and finally copy it with
copy_rectangle into a visible window. Another use is the fast processing of image regions
during interactive manipulations. Textual input and mouse interaction are not possible in this mode.
Attention
Keep in mind that parameters such as Row, Column, Width and Height are constrained by the output
device. If you specify a father window (FatherWindow <> ’root’), the coordinates are relative to that window.
Parameter
open_window(0,0,400,-1,"root","visible","",&WindowHandle) ;
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
write_string(WindowHandle,"File: fabrik.ima") ;
new_line(WindowHandle) ;
get_mbutton(WindowHandle,_,_,_) ;
set_lut(WindowHandle,"temperature") ;
set_color(WindowHandle,"blue") ;
write_string(WindowHandle,"temperature") ;
new_line(WindowHandle) ;
write_string(WindowHandle,"Draw Rectangle") ;
new_line(WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
new_line(WindowHandle) ;
Result
If the values of the specified parameters are correct, open_window returns H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
open_window is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
set_color, query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_tshape, set_window_extents, get_window_extents, query_color,
set_check, set_system
Alternatives
open_textwindow
See also
disp_region, disp_image, disp_color, set_lut, query_color, set_color, set_rgb,
set_hsi, set_pixel, set_gray, set_part, set_part_style, query_window_type,
get_window_type, set_window_type, get_mposition, set_tposition,
set_window_extents, get_window_extents, set_window_attr, set_check, set_system
Module
Foundation
Parameter
. WindowTypes (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Names of available window types.
Result
query_window_type always returns H_MSG_TRUE.
Parallelization Information
query_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
reset_obj_db
Module
Foundation
’border_width’: Width of the window border in pixels. Not implemented under Windows.
’border_color’: Color of the window border. Not implemented under Windows.
Attention
You have to call set_window_attr before calling open_window.
Parameter
. AttributeName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the attribute that should be modified.
List of values : AttributeName ∈ {"border_width", "border_color", "background_color", "window_title"}
. AttributeValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong
Value of the attribute that should be set.
List of values : AttributeValue ∈ {0, 1, 2, "white", "black", "MyName", "default"}
Result
If the parameters are correct, set_window_attr returns H_MSG_TRUE. Otherwise an exception is
raised.
Parallelization Information
set_window_attr is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, set_draw, set_color, set_colored, set_line_width, open_textwindow
See also
open_window, get_window_attr
Module
Foundation
hWnd = CreateWindow(...) ;
new_extern_window(hWnd, hdc, 0, 0, 400, -1, &WindowHandle) ;
set_device_context(WindowHandle, hdc) ;
read_image(&Image,"fabrik") ;
disp_image(Image,WindowHandle) ;
write_string(WindowHandle,"File: fabrik.ima") ;
new_line(WindowHandle) ;
get_mbutton(WindowHandle,_,_,_) ;
set_lut(WindowHandle,"temperature") ;
set_color(WindowHandle,"blue") ;
write_string(WindowHandle,"temperature") ;
new_line(WindowHandle) ;
write_string(WindowHandle,"Draw Rectangle") ;
new_line(WindowHandle) ;
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2) ;
set_part(Row1,Column1,Row2,Column2) ;
disp_image(Image,WindowHandle) ;
new_line(WindowHandle) ;
Result
If the values of the specified parameters are correct, set_window_dc returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
set_window_dc is reentrant, local, and processed without parallelization.
Possible Predecessors
new_extern_window
Possible Successors
disp_image, disp_region
See also
new_extern_window, disp_region, disp_image, disp_color, set_lut, query_color,
set_color, set_rgb, set_hsi, set_pixel, set_gray, set_part, set_part_style,
query_window_type, get_window_type, set_window_type, get_mposition,
set_tposition, set_window_extents, get_window_extents, set_window_attr,
set_check, set_system
Module
Foundation
Parallelization Information
set_window_type is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
See also
open_window, open_textwindow, query_window_type, get_window_type
Module
Foundation
read_image(&Image,"fabrik") ;
sobel_amp(Image,&Amp,"sum_abs",3) ;
open_window(0,0,-1,-1,"root","buffer","",&Buffer1) ;
disp_image(Amp,Buffer1) ;
sobel_dir(Image,&Dir,"sum_abs",3) ;
open_window(0,0,-1,-1,"root","buffer","",&Buffer2) ;
disp_image(Dir,Buffer2) ;
open_window(0,0,-1,-1,"root","visible","",&WindowHandle) ;
slide_image(Buffer1,Buffer2,WindowHandle) ;
Result
If both windows exist and one of them is valid, slide_image returns H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
slide_image is reentrant, local, and processed without parallelization.
Possible Predecessors
open_window, open_textwindow
Alternatives
copy_rectangle, get_mposition
See also
open_window, open_textwindow, move_rectangle
Module
Foundation
Image
5.1 Access
get_grayval ( const Hobject Image, Hlong Row, Hlong Column,
double *Grayval )
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image whose gray value is to be accessed.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Line numbers of pixels to be viewed.
Default Value : 0
Suggested values : Row ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Row ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Restriction : (0 ≤ Row) ∧ (Row < height(Image))
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column numbers of pixels to be viewed.
Default Value : 0
Suggested values : Column ∈ {0, 64, 128, 256, 512, 1024}
Typical range of values : 0 ≤ Column ≤ 32768 (lin)
Minimum Increment : 1
Recommended Increment : 1
Number of elements : Column = Row
Restriction : (0 ≤ Column) ∧ (Column < width(Image))
Hobject Bild;
char typ[128];
long width,height;
unsigned char *ptr;
read_image(&Bild,"fabrik");
get_image_pointer1(Bild,(long*)&ptr,typ,&width,&height);
Result
The operator get_image_pointer1 returns the value H_MSG_TRUE if exactly one image was passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_image_pointer1 is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
set_grayval, get_grayval, get_image_pointer3
See also
paint_region, paint_gray
Module
Foundation
Access the image data pointer and the image data inside the smallest enclosing rectangle of the domain of the input image.
The operator get_image_pointer1_rect returns the pointer PixelPointer, which points to the
beginning of the image data inside the smallest enclosing rectangle of the domain of Image. VerticalPitch
corresponds to the width of the input image Image multiplied by the number of bytes per pixel
(HorizontalBitPitch / 8). Width and Height correspond to the size of the smallest enclosing rectangle of the
input region. HorizontalBitPitch is the horizontal distance (in bits) between two neighboring pixels.
BitsPerPixel is the number of bits used per pixel. get_image_pointer1_rect is symmetrical to
gen_image1_rect.
Attention
The operator get_image_pointer1_rect should only be used for writing into newly created images, since
otherwise the gray values of other images might be overwritten (see relational structure).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / int4
Input image (Himage).
. PixelPointer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the image data.
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Width of the output image.
Hobject image,reg,imagereduced;
long width,height,vert_pitch,hori_bit_pitch,bits_per_pix,winID;
unsigned char *ptr;
open_window(0,0,512,512,"root","visible","",&winID);
read_image(&image,"monkey");
draw_region(&reg,winID);
reduce_domain(image,reg,&imagereduced);
get_image_pointer1_rect(imagereduced,(long*)&ptr,&width,&height,
&vert_pitch,&hori_bit_pitch,&bits_per_pix);
Result
The operator get_image_pointer1_rect returns the value H_MSG_TRUE if exactly one image was
passed. The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
get_image_pointer1_rect is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image1_rect
Alternatives
set_grayval, get_grayval, get_image_pointer3, get_image_pointer1
See also
paint_region, paint_gray, gen_image1_rect
Module
Foundation
Attention
Only one image can be passed. The operator get_image_pointer3 should only be used for writing into newly
created images, since otherwise the gray values of other images might be overwritten (see relational structure).
Parameter
. ImageRGB (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. PointerRed (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the first channel.
. PointerGreen (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the second channel.
. PointerBlue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong *
Pointer to the pixels of the third channel.
. Type (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of image.
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic", "complex",
"vector_field"}
. Width (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong *
Width of image.
. Height (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong *
Height of image.
Result
The operator get_image_pointer3 returns the value H_MSG_TRUE if exactly one image is passed.
The behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_image_pointer3 is reentrant and processed without parallelization.
Possible Predecessors
read_image
Alternatives
set_grayval, get_grayval, get_image_pointer1
See also
paint_region, paint_gray
Module
Foundation
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex / vector_field
Input image.
. MSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Milliseconds (0..999).
5.2 Acquisition
close_all_framegrabbers ( )
T_close_all_framegrabbers ( )
Module
Foundation
Grab images and preprocessed image data from the specified image acquisition device.
The operator grab_data grabs images and preprocessed image data via the image acquisition device specified
by AcqHandle. The desired operational mode of the image acquisition device as well as a suitable image part
can be adjusted via the operator open_framegrabber. Additional interface-specific settings can be specified
via set_framegrabber_param. Depending on the current configuration of the image acquisition device,
the preprocessed image data can be returned in terms of images (Image), regions (Region), XLD contours
(Contours), and control data (Data).
Parameter
Result
If the image acquisition device is open and supports the image acquisition via grab_data, the operator
grab_data returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
grab_data is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param
Possible Successors
grab_data, grab_data_async, grab_image_start, grab_image, grab_image_async,
set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Grab images and preprocessed image data from the specified image acquisition device and start the next
asynchronous grab.
The operator grab_data_async grabs images and preprocessed image data via the image acquisition device
specified by AcqHandle and starts the next asynchronous grab. The desired operational mode of the image acquisition
device as well as a suitable image part can be adjusted via the operator open_framegrabber. Additional
interface-specific settings can be specified via set_framegrabber_param. The segmented image regions
are returned in Region. Depending on the current configuration of the image acquisition device, the preprocessed
image data can be returned in terms of images (Image), regions (Region), XLD contours (Contours), and
control data (Data).
The grab of the next image is finished by calling grab_data_async or grab_image_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered too old and a new image is grabbed. If a negative value is assigned to MaxDelay, this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_data_async, the asyn-
chronous grab started by grab_data_async is aborted and a new image is grabbed (and waited for).
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Grabbed image data.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Pre-processed image regions.
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Pre-processed XLD contours.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong
Handle of the acquisition device to be used.
. MaxDelay (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; (Htuple .) double
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms].
Default Value : -1.0
Suggested values : MaxDelay ∈ {-1.0, 20.0, 33.3, 40.0, 66.6, 80.0, 99.9}
. Data (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char * / double * / Hlong *
Pre-processed control data.
Example
Result
If the image acquisition device is open and supports the image acquisition via grab_data_async, the operator
grab_data_async returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
grab_data_async is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, grab_image_start, set_framegrabber_param
Possible Successors
grab_data_async, grab_image_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Result
If the image could be acquired successfully, the operator grab_image returns the value H_MSG_TRUE.
Otherwise an exception is raised.
Parallelization Information
grab_image is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image, grab_image_start, grab_image_async, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
Grab an image from the specified image acquisition device and start the next asynchronous grab.
The operator grab_image_async grabs an image via the image acquisition device specified by AcqHandle and starts
the asynchronous grab of the next image. The desired operational mode of the image acquisition device as well
as a suitable image part can be adjusted via the operator open_framegrabber. Additional interface-specific
settings can be specified via set_framegrabber_param.
The grab of the next image is finished by calling grab_image_async or grab_data_async. If more
than MaxDelay ms have passed since the asynchronous grab was started, the asynchronously grabbed image is
considered too old and a new image is grabbed. If a negative value is assigned to MaxDelay, this control
mechanism is deactivated.
Please note that if you call the operators grab_image or grab_data after grab_image_async, the
asynchronous grab started by grab_image_async is aborted and a new image is grabbed (and waited for).
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / int2
Grabbed image.
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; Hlong
Handle of the acquisition device to be used.
. MaxDelay (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms].
Default Value : -1.0
Suggested values : MaxDelay ∈ {-1.0, 20.0, 33.3, 40.0, 66.6, 80.0, 99.9}
Example
Result
If the image acquisition device is open and supports asynchronous grabbing the operator grab_image_start
returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
grab_image_async is reentrant and processed without parallelization.
Possible Predecessors
grab_image_start, open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
grab_image_start, open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
image part can be adjusted via the operator open_framegrabber. Additional interface-specific settings can
be specified via set_framegrabber_param.
The grab is finished via grab_image_async or grab_data_async. If one of those operators is called
more than MaxDelay ms later, the asynchronously grabbed image is considered too old and a new image is
grabbed. If a negative value is assigned to MaxDelay, this control mechanism is deactivated.
Please note that the operator grab_image_start makes sense only when used together with
grab_image_async or grab_data_async. If you call the operators grab_image or grab_data
instead, the asynchronous grab started by grab_image_start is aborted and a new image is grabbed (and
waited for).
Parameter
Result
If the image acquisition device is open and supports asynchronous grabbing, the operator grab_image_start
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
grab_image_start is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber, set_framegrabber_param
Possible Successors
grab_image_async, grab_data_async, set_framegrabber_param, close_framegrabber
See also
open_framegrabber, info_framegrabber, set_framegrabber_param
Module
Foundation
’bits_per_channel’: List of all supported values for the parameter ’BitsPerChannel’, see
open_framegrabber.
’camera_type’: Description and list of all supported values for the parameter ’CameraType’, see
open_framegrabber.
’color_space’: List of all supported values for the parameter ’ColorSpace’, see open_framegrabber.
’defaults’: Interface-specific default values in ValueList, see open_framegrabber.
’device’: List of all supported values for the parameter ’Device’, see open_framegrabber.
’external_trigger’: List of all supported values for the parameter ’ExternalTrigger’, see
open_framegrabber.
’field’: List of all supported values for the parameter ’Field’, see open_framegrabber.
’general’: General information (in Information).
’horizontal_resolution’: List of all supported values for the parameter ’HorizontalResolution’, see
open_framegrabber.
’image_height’: List of all supported values for the parameter ’ImageHeight’, see open_framegrabber.
’image_width’: List of all supported values for the parameter ’ImageWidth’, see open_framegrabber.
’info_boards’: Information about the actually installed boards or cameras. This data is especially useful for the
auto-detect mechanism of ActivVisionTools and for the Image Acquisition Assistant in HDevelop.
’line_in’: List of all supported values for the parameter ’LineIn’, see open_framegrabber.
’parameters’: List of all interface-specific parameters which are accessible via set_framegrabber_param
or get_framegrabber_param.
’parameters_readonly’: List of all interface-specific parameters which are only accessible via
get_framegrabber_param.
’parameters_writeonly’: List of all interface-specific parameters which are only accessible via
set_framegrabber_param.
’port’: List of all supported values for the parameter ’Port’, see open_framegrabber.
’revision’: Version number of the image acquisition interface.
’start_column’: List of all supported values for the parameter ’StartColumn’, see open_framegrabber.
’start_row’: List of all supported values for the parameter ’StartRow’, see open_framegrabber.
’vertical_resolution’: List of all supported values for the parameter ’VerticalResolution’, see
open_framegrabber.
Please check also the directory doc/html/manuals for documentation about specific image grabber interfaces.
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library
(Linux/UNIX).
Default Value : "File"
Suggested values : Name ∈ {"1394IIDC", "ABS", "BaumerFCAM", "BitFlow", "DahengCAM",
"DahengFG", "DFG-LC", "DirectFile", "DirectShow", "dPict", "DT315x", "DT3162", "eneo", "eXcite",
"FALCON", "File", "FlashBusMV", "FlashBusMX", "GigEVision", "Ginga++", "GingaDG", "INSPECTA",
"INSPECTA5", "iPORT", "Leutron", "LinX", "LuCam", "MatrixVisionAcquire", "MILLite", "mEnableIII",
"mEnableIV", "mEnableVisualApplets", "MultiCam", "Opteon", "p3i2", "p3i4", "PX", "PXC", "PXD",
"PXR", "pylon", "RangerC", "RangerE", "SaperaLT", "SonyXCI", "TAG", "TWAIN", "uEye",
"VRmUsbCam"}
. Query (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Name of the chosen query.
Default Value : "info_boards"
List of values : Query ∈ {"defaults", "general", "info_boards", "parameters", "parameters_readonly",
"parameters_writeonly", "revision", "bits_per_channel", "camera_type", "color_space", "device",
"external_trigger", "field", "generic", "horizontal_resolution", "image_height", "image_width", "port",
"start_column", "start_row", "vertical_resolution"}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . char *
Textual information (according to Query).
. ValueList (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char * / Hlong * / double *
List of values (according to Query).
Example
Result
If the parameter values are correct and the specified image acquisition interface is available,
info_framegrabber returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
info_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
open_framegrabber
See also
open_framegrabber
Module
Foundation
HALCON 8.0.2
448 CHAPTER 5. IMAGE
The operator open_framegrabber returns a handle (AcqHandle) to the opened image acquisition device.
Attention
Due to the multitude of supported image acquisition devices, open_framegrabber contains a large number
of parameters. However, not all parameters are needed for a specific image acquisition device.
Parameter
Result
If the parameter values are correct and the desired image acquisition device could be opened,
open_framegrabber returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
open_framegrabber is processed completely exclusively without parallelization.
Possible Predecessors
info_framegrabber
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
set_framegrabber_param
See also
info_framegrabber, close_framegrabber, grab_image
Module
Foundation
Parameter
. AcqHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . framegrabber ; (Htuple .) Hlong
Handle of the acquisition device to be used.
. Param (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Parameter name.
Suggested values : Param ∈ {"color_space", "continuous_grabbing", "external_trigger", "grab_timeout",
"image_height", "image_width", "port", "start_column", "start_row", "volatile"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char * / double / Hlong
Parameter value to be set.
Result
If the image acquisition device is open and the specified parameter / parameter value is supported, the operator
set_framegrabber_param returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
set_framegrabber_param is reentrant and processed without parallelization.
Possible Predecessors
open_framegrabber
Possible Successors
grab_image, grab_data, grab_image_start, grab_image_async, grab_data_async,
get_framegrabber_param
See also
open_framegrabber, info_framegrabber, get_framegrabber_param
Module
Foundation
5.3 Channel
access_channel ( const Hobject MultiChannelImage, Hobject *Image,
Hlong Channel )
Parallelization Information
access_channel is reentrant and processed without parallelization.
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
decompose2, decompose3, decompose4, decompose5
See also
count_channels
Module
Foundation
Parameter
Parallelization Information
compose5 is reentrant and automatically parallelized (on tuple level).
Possible Successors
disp_image
Alternatives
append_channel
See also
decompose5
Module
Foundation
Module
Foundation
read_image(&Color,"patras");
count_channels(Color,&num_channels);
for (i=1; i<=num_channels; i++)
{
access_channel(Color,&Channel,i);
disp_image(Channel,WindowHandle);
clear_obj(Channel);
}
Parallelization Information
count_channels is reentrant and processed without parallelization.
Possible Successors
access_channel, append_channel, disp_image
See also
append_channel, access_channel
Module
Foundation
Module
Foundation
The operator decompose5 converts a 5-channel image into five one-channel images with the same definition
domain.
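The split itself is a per-pixel, per-channel copy. A minimal plain-C sketch of the idea (the channel-interleaved buffer layout, the toy sizes, and the name decompose_interleaved are illustrative assumptions, not HALCON's internal representation, which keeps channels as separate planes inside an Hobject):

```c
#include <assert.h>

#define NUM_PIXELS 12   /* 4 x 3 toy image */
#define CHANNELS   5

/* Split a channel-interleaved buffer into CHANNELS planar images.
   All output planes cover the same pixels, mirroring the shared
   definition domain of the decompose5 outputs. */
void decompose_interleaved(const unsigned char *interleaved,
                           unsigned char planes[CHANNELS][NUM_PIXELS])
{
    int p, ch;
    for (p = 0; p < NUM_PIXELS; p++)
        for (ch = 0; ch < CHANNELS; ch++)
            planes[ch][p] = interleaved[p * CHANNELS + ch];
}
```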
Parameter
Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 4.
. Image5 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 5.
. Image6 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 6.
Parallelization Information
decompose6 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose6
Module
Foundation
Parameter
. MultiChannelImage (input_object) . . . . . . multichannel-image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Multichannel image.
. Image1 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 1.
. Image2 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 2.
. Image3 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 3.
. Image4 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 4.
. Image5 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 5.
. Image6 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 6.
. Image7 (output_object) . . . . . . singlechannel-image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real / complex / vector_field
Output image 7.
Parallelization Information
decompose7 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
count_channels
Possible Successors
disp_image
Alternatives
access_channel, image_to_channels
See also
compose7
Module
Foundation
Parameter
5.4 Creation
copy_image ( const Hobject Image, Hobject *DupImage )
T_copy_image ( const Hobject Image, Hobject *DupImage )
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Image to be copied.
. DupImage (output_object) . . . . . . (multichannel-)image ; Hobject * : byte / direction / cyclic / int1 / int2
/ uint2 / int4 / real / complex / vector_field
Copied image.
Parallelization Information
copy_image is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const
Possible Successors
set_grayval, get_image_pointer1
Alternatives
set_grayval, paint_gray, gen_image_const, gen_image_proto
See also
get_image_pointer1
Module
Foundation
. Image (output_object) . . . . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first gray value.
Example
Result
If the parameter values are correct, the operator gen_image1 returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
gen_image1 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1
Alternatives
gen_image3, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation
. Image (output_object) . . . . . . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created HALCON image.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"int1", "int2", "uint2", "int4", "byte", "real", "direction", "cyclic"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the first gray value.
. ClearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the procedure releasing the memory of the image when the object is deleted.
Default Value : 0
Example
Result
The operator gen_image1_extern returns the value H_MSG_TRUE if the parameter values are correct. Otherwise an exception handling is raised.
Parallelization Information
gen_image1_extern is reentrant and processed without parallelization.
Alternatives
gen_image1, gen_image_const, get_image_pointer1
See also
reduce_domain, paint_gray, paint_region, set_grayval
Module
Foundation
Create an image with a rectangular domain from a pointer on the pixels (with storage management).
The operator gen_image1_rect creates an image of size (VerticalPitch / (HorizontalBitPitch /
8)) * Height. The pixels pointed to by PixelPointer are stored line by line. Since the type of the parameter
PixelPointer is generic (long), a cast must be used in the call. VerticalPitch determines the distance
(in bytes) between pixel m in row n and pixel m in row n+1 in memory. All rows of the 'input image' have
the same vertical pitch. The width of the output image equals VerticalPitch / (HorizontalBitPitch /
8). The heights of the input and output images are equal. The domain of the output image Image is a rectangle of
size Width * Height. The parameter HorizontalBitPitch is the horizontal distance (in bits) between two
neighbouring pixels. BitsPerPixel is the number of used bits per pixel.
If DoCopy is set to 'true', the image data pointed to by PixelPointer is copied and memory for the new image
is allocated by HALCON. Otherwise the image data is not duplicated, and the memory that PixelPointer
points to must be released when the object Image is deleted. This is done by the procedure ClearProc provided
by the caller. This procedure must have the following signature
void ClearProc(void* ptr);
and will be called using the __cdecl calling convention when Image is deleted. If the memory is not to be released
(e.g., in the case of frame grabbers or static memory), a procedure with an empty body or the NULL pointer can be passed.
Analogously to the parameter PixelPointer, the pointer has to be passed to the procedure by casting it to
long. If DoCopy is 'true', ClearProc is irrelevant. The operator gen_image1_rect is symmetrical to
get_image_pointer1_rect.
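The width arithmetic and the ClearProc contract described above can be sketched in plain C; output_width and clear_proc are hypothetical helper names, not part of the HALCON API:

```c
#include <assert.h>
#include <stdlib.h>

/* Output image width as defined above: VerticalPitch is the byte
   distance between vertically adjacent pixels, HorizontalBitPitch
   the bit distance between horizontally adjacent pixels. */
long output_width(long vertical_pitch, long horizontal_bit_pitch)
{
    return vertical_pitch / (horizontal_bit_pitch / 8);
}

/* A ClearProc with the required signature: it receives the pixel
   pointer (cast back from long) when the image object is deleted.
   This variant simply frees malloc'ed memory; for static memory or
   frame grabber buffers an empty body would be used instead. */
void clear_proc(void *ptr)
{
    free(ptr);
}
```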
Parameter
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2 / int4
Created HALCON image.
. PixelPointer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the first pixel.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Width ≥ 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : Height ≥ 1
. VerticalPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Distance (in bytes) between pixel m in row n and pixel m in row n+1 of the ’input image’.
Restriction : VerticalPitch ≥ (Width · (HorizontalBitPitch/8))
. HorizontalBitPitch (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Distance (in bits) between two neighbouring pixels.
Default Value : 8
List of values : HorizontalBitPitch ∈ {8, 16, 32}
. BitsPerPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of used bits per pixel.
Default Value : 8
List of values : BitsPerPixel ∈ {8, 9, 10, 11, 12, 13, 14, 15, 16, 32}
Restriction : BitsPerPixel ≤ HorizontalBitPitch
. DoCopy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Copy image data.
Default Value : "false"
Suggested values : DoCopy ∈ {"true", "false"}
. ClearProc (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to the procedure releasing the memory of the image when deleting the object.
Default Value : 0
Example
unsigned char *image;
Hobject New;
long r, c;

image = (unsigned char*)malloc(640*480);
for (r=0; r<480; r++)
  for (c=0; c<640; c++)
    image[r*640+c] = c % 255;
gen_image1_rect(&New,(long)image,400,480,640,8,8,"false",(long)free);
Result
The operator gen_image1_rect returns the value H_MSG_TRUE if the parameter values are correct. Otherwise an exception handling is raised.
Parallelization Information
gen_image1_rect is reentrant and processed without parallelization.
Possible Successors
get_image_pointer1_rect
Alternatives
gen_image1, gen_image1_extern
See also
get_image_pointer1_rect
Module
Foundation
. ImageRGB (output_object) . . . . image ; Hobject * : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "direction", "cyclic", "int1", "int2", "uint2", "int4", "real"}
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of image.
Default Value : 512
Suggested values : Width ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Width ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of image.
Default Value : 512
Suggested values : Height ∈ {128, 256, 512, 1024}
Typical range of values : 1 ≤ Height ≤ 512 (lin)
Minimum Increment : 1
Recommended Increment : 10
. PixelPointerRed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first red value (channel 1).
. PixelPointerGreen (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first green value (channel 2).
. PixelPointerBlue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pointer ; Hlong
Pointer to first blue value (channel 3).
Example
main()
{
Hobject rgb;
open_window(0,0,768,525,0,"","",&WindowHandle);
NewRGBImage(&rgb);
disp_color(rgb,WindowHandle);
clear_obj(rgb);
}
Result
If the parameter values are correct, the operator gen_image3 returns the value H_MSG_TRUE. Otherwise an
exception handling is raised.
Parallelization Information
gen_image3 is reentrant and processed without parallelization.
Possible Predecessors
gen_image_const, get_image_pointer1
Possible Successors
disp_color
Alternatives
gen_image1, compose3, gen_image_const
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1,
decompose3
Module
Foundation
gen_image_const(&New,"byte",width,height);
get_image_pointer1(New,(long*)&pointer,type,&width,&height);
for (row=0; row<height; row++)
  for (col=0; col<width; col++)
    pointer[row*width+col] = (row + col) % 256;
Result
If the parameter values are correct, the operator gen_image_const returns the value H_MSG_TRUE. Otherwise an exception handling is raised.
Parallelization Information
gen_image_const is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain, get_image_pointer1, copy_obj
Alternatives
gen_image1, gen_image3
See also
reduce_domain, paint_gray, paint_region, set_grayval, get_image_pointer1
Module
Foundation
The size of the image is determined by Width and Height. The gray values are of the type byte. Gray values
outside the valid area are clipped.
Parameter
Parameter
Result
gen_image_proto returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
gen_image_proto is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
set_grayval, paint_gray, gen_image_const, copy_image
See also
get_image_pointer1
Module
Foundation
The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
Parameter
. ImageSurface (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte / uint2 / real
Created image with new image matrix.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Pixel type.
Default Value : "byte"
List of values : Type ∈ {"byte", "uint2", "real"}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in vertical direction.
Default Value : 1.0
Suggested values : Alpha ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
First order coefficient in horizontal direction.
Default Value : 1.0
Suggested values : Beta ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
. Gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double
Zero order coefficient.
Default Value : 1.0
Suggested values : Gamma ∈ {-2.0, -1.0, -0.5, -0.0, 0.5, 1.0, 2.0}
Minimum Increment : 0.000001
Recommended Increment : -0.005
ImageSurface(r, c) = Alpha · (r − Row)² + Beta · (c − Col)² + Gamma · (r − Row) · (c − Col) + Delta · (r − Row) + Epsilon · (c − Col) + Zeta
The size of the image is determined by Width and Height. The gray values are of the type Type. Gray values
outside the valid area are clipped.
Parameter
region_to_label converts the input regions into a label image according to their index (1..n), i.e., the first
region is painted with the gray value 1, the second with the gray value 2, etc. Only positive gray values are used. For
byte images the index is entered modulo 256.
Regions larger than the generated image are clipped appropriately. If regions overlap, the region with the higher
index is entered (i.e., the regions are painted in the order in which they are contained in the input tuple). If desired,
the regions can be made non-overlapping beforehand by calling expand_region.
The background, i.e., the area not covered by any region, is set to 0. This can be used to test in which parts of the
image no region is present.
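The painting order (later regions overwrite earlier ones, so on overlapping pixels the higher index wins) can be sketched in plain C; regions are modeled here as flat (row, column) pixel lists, an illustrative simplification of HALCON's run-length regions:

```c
#include <assert.h>

#define IMG_W 8
#define IMG_H 8

/* Paint regions into a label image in input order: background = 0,
   region i gets label (i+1) modulo 256 (byte image), pixels outside
   the image are clipped. */
void paint_labels(unsigned char *label_image,
                  const int *pixels,       /* flat (row, col) pairs */
                  const int *region_size, int num_regions)
{
    int i, j, p = 0;
    for (j = 0; j < IMG_W * IMG_H; j++)
        label_image[j] = 0;                           /* background */
    for (i = 0; i < num_regions; i++) {
        for (j = 0; j < region_size[i]; j++, p++) {
            int r = pixels[2 * p], c = pixels[2 * p + 1];
            if (r >= 0 && r < IMG_H && c >= 0 && c < IMG_W)   /* clip */
                label_image[r * IMG_W + c] = (unsigned char)((i + 1) % 256);
        }
    }
}
```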
Parameter
read_image(&Image,"fabrik");
regiongrowing(Image,&Regions,3,3,6,100);
region_to_mean(Regions,Image,&Disp);
disp_image(Disp,WindowHandle);
set_draw(WindowHandle,"margin");
set_color(WindowHandle,"black");
disp_region(Regions,WindowHandle);
Result
region_to_mean returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
region_to_mean is reentrant and processed without parallelization.
Possible Predecessors
regiongrowing, connection
Possible Successors
disp_image
Alternatives
paint_region, intensity
Module
Foundation
5.5 Domain
add_channels ( const Hobject Regions, const Hobject Image,
Hobject *GrayRegions )
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions (without gray values).
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real / complex / vector_field
Gray image for regions.
. GrayRegions (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Regions with gray values (also gray images).
Number of elements : Regions = GrayRegions
Parallelization Information
add_channels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, gen_circle, draw_region
Possible Successors
threshold, regiongrowing, get_domain
Alternatives
change_domain, reduce_domain
See also
full_domain, get_domain, intersection
Module
Foundation
See also
full_domain, get_domain, intersection
Module
Foundation
The operator reduce_domain reduces the definition domain of the given image to the indicated region. The
new definition domain is calculated as the intersection of the old definition domain with the region. Thus, the new
definition domain can be a subset of the region. The size of the matrix is not changed.
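The intersection rule can be illustrated with binary masks standing in for definition domains (a sketch only; HALCON actually stores domains as run-length encoded regions, and the image matrix is untouched):

```c
#include <assert.h>

#define DIM 16

/* New domain = old domain AND region, pixel by pixel (1 = inside). */
void intersect_domain(const unsigned char *old_domain,
                      const unsigned char *region,
                      unsigned char *new_domain, int n)
{
    int i;
    for (i = 0; i < n; i++)
        new_domain[i] = (unsigned char)(old_domain[i] & region[i]);
}
```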
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Input image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
New definition domain.
. ImageReduced (output_object) . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int1 / int2 /
uint2 / int4 / real / complex / vector_field
Image with reduced definition domain.
Parallelization Information
reduce_domain is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
get_domain
Alternatives
change_domain, rectangle1_domain, add_channels
See also
full_domain, get_domain, intersection
Module
Foundation
5.6 Features
area_center_gray ( const Hobject Regions, const Hobject Image,
double *Area, double *Row, double *Column )
T_area_center_gray ( const Hobject Regions, const Hobject Image,
Htuple *Area, Htuple *Row, Htuple *Column )
Compute the area and center of gravity of a region in a gray value image.
area_center_gray computes the area and center of gravity of the regions Regions that have gray values
which are defined by the image Image. This operator is similar to area_center, but in contrast to that
operator, the gray values of the image are taken into account while computing the area and center of gravity.
The area A of a region R in the image with the gray values g(r, c) is defined as

    A = sum_{(r,c) ∈ R} g(r, c) .

This means that the area is defined by the volume of the gray value function g(r, c). The center of gravity is defined
by the first two normalized moments of the gray values g(r, c), i.e., by (m_{1,0}, m_{0,1}), where

    m_{p,q} = (1/A) · sum_{(r,c) ∈ R} r^p · c^q · g(r, c) .
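These definitions translate directly into code. A plain-C sketch over a flat (row, column) pixel list (illustrative; not HALCON's region representation):

```c
#include <assert.h>

/* Gray value area A = sum of g(r,c) over the region; center of
   gravity = first normalized gray value moments (m10, m01). */
void area_center_gray_sketch(const unsigned char *image, int width,
                             const int *region, int n, /* flat (r,c) pairs */
                             double *area, double *row, double *col)
{
    double a = 0.0, mr = 0.0, mc = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        int r = region[2 * i], c = region[2 * i + 1];
        double g = (double)image[r * width + c];
        a  += g;
        mr += r * g;
        mc += c * g;
    }
    *area = a;
    *row = (a > 0.0) ? mr / a : 0.0;
    *col = (a > 0.0) ? mc / a : 0.0;
}
```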
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Image (input_object) . . . . . . singlechannel-image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real
Gray value image.
. Area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Gray value volume of the region.
Contrast:

    Contrast = sum_{i,j=0}^{width} (i − j)² · c_ij

where

    width = width of the co-occurrence matrix CoocMatrix
    c_ij  = entry (i, j) of the co-occurrence matrix
    u_x   = sum_{i,j=0}^{width} i · c_ij
    u_y   = sum_{i,j=0}^{width} j · c_ij
    s²_x  = sum_{i,j=0}^{width} (i − u_x)² · c_ij
    s²_y  = sum_{i,j=0}^{width} (j − u_y)² · c_ij
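The contrast feature is a direct double sum over all entries of the normalized co-occurrence matrix; a plain-C sketch (cooc_contrast is an illustrative name, not a HALCON operator):

```c
#include <assert.h>

/* Contrast = sum over i,j of (i-j)^2 * c_ij for a row-major,
   normalized co-occurrence matrix of the given width. */
double cooc_contrast(const double *cooc, int width)
{
    double contrast = 0.0;
    int i, j;
    for (i = 0; i < width; i++)
        for (j = 0; j < width; j++)
            contrast += (double)((i - j) * (i - j)) * cooc[i * width + j];
    return contrast;
}
```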
Attention
The region of the input image is disregarded.
Parameter
Compute the orientation and major axes of a region in a gray value image.
The operator elliptic_axis_gray calculates the lengths of the axes and the orientation of the ellipse that has
the same orientation and aspect ratio as the input region. Several input regions can be passed in Regions
as a tuple. The length of the major axis Ra and of the minor axis Rb as well as the orientation of the major axis with
regard to the x-axis (Phi) are determined. The angle is returned in radians. The calculation is done analogously
to elliptic_axis. The only difference is that elliptic_axis_gray uses the gray value moments
instead of the region moments. The gray value moments are derived from the input image Image. For the
definition of the gray value moments, see area_center_gray.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Image (input_object) . . . . . . singlechannel-image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 /
int4 / real
Gray value image.
. Ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Major axis of the region.
. Rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Minor axis of the region.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Angle enclosed by the major axis and the x-axis.
Result
elliptic_axis_gray returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via set_system('no_object_result',<Result>).
If necessary, an exception handling is raised.
Parallelization Information
elliptic_axis_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
elliptic_axis
See also
area_center_gray
Module
Foundation
Anisotropy coefficient:

    Anisotropy = ( sum_{i=0}^{k} rel[i] · log2(rel[i]) ) / Entropy

where

    rel[i] = histogram of relative gray value frequencies
    i      = gray value of the input image (0 . . . 255)
    k      = smallest possible gray value with sum_{i=0}^{k} rel[i] ≥ 0.5
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions where the features are to be determined.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Gray value image.
. Entropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Information content (entropy) of the gray values.
Assertion : (0 ≤ Entropy) ∧ (Entropy ≤ 8)
. Anisotropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Measure of the symmetry of gray value distribution.
Complexity
If F is the area of the region, the runtime complexity is O(F + 255).
Result
The operator entropy_gray returns the value H_MSG_TRUE if an image with defined gray values is entered and the parameters are correct. The behavior in case of empty input (no input images available) is set via the operator set_system(’no_object_result’,<Result>), the behavior in case of an empty region via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
entropy_gray is reentrant and automatically parallelized (on tuple level).
Alternatives
select_gray
See also
entropy_image, gray_histo, gray_histo_abs, fuzzy_entropy, fuzzy_perimeter
Module
Foundation
To estimate the noise, one of the following four methods can be selected in Method:
• ’foerstner’: If Method is set to ’foerstner’, first for each pixel a homogeneity measure is computed based
on the first derivatives of the gray values of Image. By thresholding the homogeneity measure one obtains
the homogeneous regions in the image. The threshold is computed based on a starting value for the image
noise. The starting value is obtained by applying the method ’immerkaer’ (see below) in the first step. It
is assumed that the gray value fluctuations within the homogeneous regions are solely caused by the image
noise. Furthermore it is assumed that the image noise is Gaussian distributed. The average homogeneity
measure within the homogeneous regions is then used to calculate a refined estimate for the image noise.
The refined estimate leads to a new threshold for the homogeneity. The described process is iterated until the
estimated image noise remains constant between two successive iterations. Finally, the standard deviation of
the estimated image noise is returned in Sigma.
HALCON 8.0.2
492 CHAPTER 5. IMAGE
Note that in some cases the iteration falsely converges to the value 0. This happens, for example, if the gray
value histogram of the input image contains gaps that are caused either by an automatic radiometric scaling
of the camera or frame grabber, respectively, or by a manual spreading of the gray values using a scaling
factor > 1.
Also note that the result obtained by this method is independent of the value passed in Percent.
• ’immerkaer’: If Method is set to ’immerkaer’, first the following filter mask is applied to the input image:
        |  1  -2   1 |
    M = | -2   4  -2 |
        |  1  -2   1 |
The advantage of this method is that M is almost insensitive to image structure but only depends on the noise
in the image. Assuming a Gaussian distributed noise, its standard deviation is finally obtained as
    Sigma = sqrt(pi / 2) * 1/(6N) * sum over Image of |Image * M| ,
where N is the number of image pixels to which M is applied. Note that the result obtained by this method
is independent of the value passed in Percent.
• ’least_squares’: If Method is set to ’least_squares’, the fluctuations of the gray values with respect to a
locally fitted gray value plane are used to estimate the image noise. First, a homogeneity measure is computed
based on the first derivatives of the gray values of Image. Homogeneous image regions are determined by
selecting the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with
small magnitudes of the first derivatives. For each homogeneous pixel a gray value plane is fitted to its 3 × 3
neighborhood. The differences between the gray values within the 3 × 3 neighborhood and the locally fitted
plane are used to estimate the standard deviation of the noise. Finally, the average standard deviation over all
homogeneous pixels is returned in Sigma.
• ’mean’: If Method is set to ’mean’, the noise estimation is based on the difference between the input
image and a noiseless version of the input image. First, a homogeneity measure is computed based on the
first derivatives of the gray values of Image. Homogeneous image regions are determined by selecting
the Percent percent most homogeneous pixels in the domain of the input image, i.e., pixels with small
magnitudes of the first derivatives. A mean filter is applied to the homogeneous image regions in order to
eliminate the noise. It is assumed that the difference between the input image and the thus obtained noiseless
version of the image represents the image noise. Finally, the standard deviation of the differences is returned
in Sigma. It should be noted that this method requires large connected homogenous image regions to be
able to reliably estimate the noise.
Note that the methods ’foerstner’ and ’immerkaer’ assume a Gaussian distribution of the image noise, whereas
the methods ’least_squares’ and ’mean’ can be applied to images with arbitrarily distributed noise. In general, the
method ’foerstner’ returns the most accurate results while the method ’immerkaer’ shows the fastest computation.
If the image noise could not be estimated reliably, the error 3175 is raised. This may happen if the image does not
contain enough homogeneous regions, if the image was artificially created, or if the noise is not of Gaussian type.
In order to avoid this error, it may help in some cases to try one of the following modifications, depending on the
estimation method passed in Method:
• Increase the size of the input image domain (useful for all methods).
• Increase the value of the parameter Percent (useful for methods ’least_squares’ and ’mean’).
• Use the method ’immerkaer’, instead of the methods ’foerstner’, ’least_squares’, or ’mean’. The method
’immerkaer’ does not rely on the existence of homogeneous image regions, and hence is almost always
applicable.
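Of the four methods, ’immerkaer’ is the simplest to sketch in plain C. The following is a minimal sketch for single-channel byte images in row-major storage; immerkaer_sigma is an illustrative name, not part of the HALCON API:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Immerkaer noise estimate for a byte image stored row-major in
   img[width * height]; illustrative sketch, not the HALCON API. */
static double immerkaer_sigma(const unsigned char *img, int width, int height)
{
    /* Apply the mask M (see above) to every interior pixel and
       accumulate the absolute responses. */
    double acc = 0.0;
    long n = 0;
    int r, c;
    for (r = 1; r < height - 1; r++) {
        for (c = 1; c < width - 1; c++) {
            const unsigned char *p = img + (long)r * width + c;
            double v = (double)p[-width - 1] - 2.0 * p[-width] + p[-width + 1]
                     - 2.0 * p[-1]           + 4.0 * p[0]      - 2.0 * p[1]
                     + (double)p[width - 1]  - 2.0 * p[width]  + p[width + 1];
            acc += fabs(v);
            n++;
        }
    }
    if (n == 0) return 0.0;
    /* Sigma = sqrt(pi/2) * 1/(6N) * sum |Image * M| */
    return sqrt(M_PI / 2.0) * acc / (6.0 * (double)n);
}
```

On a perfectly constant image the mask response vanishes everywhere, so the estimated sigma is 0.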
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Input image.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Method to estimate the image noise.
Default Value : "foerstner"
List of values : Method ∈ {"foerstner", "immerkaer", "least_squares", "mean"}
Result
If the parameters are valid, the operator estimate_noise returns the value H_MSG_TRUE. If necessary an
exception is raised. If the image noise could not be estimated reliably, the error 3175 is raised.
Parallelization Information
estimate_noise is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
grab_image, grab_image_async, read_image, reduce_domain
Possible Successors
binomial_filter, gauss_image, mean_image, smooth_image
Alternatives
noise_distribution_mean, intensity, min_max_gray
See also
gauss_distribution, add_noise_distribution
References
W. Förstner: "Image Preprocessing for Feature Extraction in Digital Intensity, Color and Range Images", Springer
Lecture Notes on Earth Sciences, Summer School on Data Analysis and the Statistical Foundations of Geomatics,
1999.
J. Immerkaer: "Fast Noise Variance Estimation", Computer Vision and Image Understanding, Vol. 64, No. 2, pp.
300-302, 1996.
Module
Foundation
Calculate gray value moments and approximation by a first order surface (plane).
The operator fit_surface_first_order calculates the gray value moments and the parameters of the
approximation of the gray values by a first order surface. The calculation is done by minimizing the distance
between the gray values and the surface. A first order surface is described by the following formula:
    Image(r, c) = Alpha * (r - r_center) + Beta * (c - c_center) + Gamma

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
By the minimization process the parameters Alpha to Gamma are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls how strongly outliers are
damped: the smaller the value chosen for ClippingFactor, the more outliers are detected. The detection of
outliers is repeated; the parameter Iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
Parameter
    Image(r, c) = Alpha * (r - r_center)^2 + Beta * (c - c_center)^2
                + Gamma * (r - r_center) * (c - c_center)
                + Delta * (r - r_center) + Epsilon * (c - c_center) + Zeta

r_center and c_center are the center coordinates of the intersection of the input region with the full image domain.
By the minimization process the parameters Alpha to Zeta are calculated.
The algorithm used for the fitting can be selected via Algorithm:
’regression’ Standard ’least squares’ fitting.
’huber’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Huber.
’tukey’ Weighted ’least squares’ fitting, where the impact of outliers is decreased based on the approach of
Tukey.
The parameter ClippingFactor (a scaling factor for the standard deviation) controls how strongly outliers are
damped: the smaller the value chosen for ClippingFactor, the more outliers are detected. The detection of
outliers is repeated; the parameter Iterations specifies the number of iterations. In the mode ’regression’
this value is ignored.
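For the ’regression’ case, the first order fit can be sketched in plain C. The sketch assumes the region is the full rectangular image domain, so the centered row and column coordinates are uncorrelated and the normal equations decouple; fit_plane is an illustrative name, not the HALCON API, and the robust ’huber’/’tukey’ reweighting is omitted:

```c
#include <math.h>

/* Least-squares fit of a first order surface
       g(r, c) ~ Alpha*(r - rc) + Beta*(c - cc) + Gamma
   over a full rectangular domain (the ’regression’ case).
   Illustrative sketch, not the HALCON API. */
static void fit_plane(const double *g, int width, int height,
                      double *alpha, double *beta, double *gamma)
{
    double rc = (height - 1) / 2.0, cc = (width - 1) / 2.0;
    double sr = 0.0, sc = 0.0, srr = 0.0, scc = 0.0, sg = 0.0;
    int r, c;
    for (r = 0; r < height; r++)
        for (c = 0; c < width; c++) {
            double v = g[r * width + c];
            double dr = r - rc, dc = c - cc;
            sg  += v;
            sr  += dr * v;   sc  += dc * v;
            srr += dr * dr;  scc += dc * dc;
        }
    /* For a full rectangle the centered coordinates sum to zero and are
       uncorrelated, so the normal equations decouple: */
    *alpha = (srr > 0.0) ? sr / srr : 0.0;   /* gradient along rows */
    *beta  = (scc > 0.0) ? sc / scc : 0.0;   /* gradient along columns */
    *gamma = sg / ((double)width * height);  /* mean gray value */
}
```

Fitting an image that already is an exact plane recovers its parameters exactly.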
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2 / direction / cyclic / real
Corresponding gray values.
. Algorithm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Algorithm for the fitting.
Default Value : "regression"
List of values : Algorithm ∈ {"regression", "tukey", "huber"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Maximum number of iterations (unused for ’regression’).
Default Value : 5
Restriction : Iterations ≥ 0
. ClippingFactor (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Clipping factor for the elimination of outliers.
Default Value : 2.0
List of values : ClippingFactor ∈ {1.0, 1.5, 2.0, 2.5, 3.0}
Restriction : ClippingFactor > 0
. Alpha (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Alpha of the approximating surface.
. Beta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Beta of the approximating surface.
. Gamma (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Gamma of the approximating surface.
. Delta (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Parameter Delta of the approximating surface.
    H(X) = 1 / (M * N * ln 2) * sum over l of T_e(l) * h(l)

where M × N is the size of the image, and h(l) is the histogram of the image.
Here, u(x(m, n)) is a fuzzy membership function defining the fuzzy set (see fuzzy_perimeter). The same
restrictions hold as in fuzzy_perimeter.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the fuzzy entropy is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image containing the fuzzy membership values.
. Apar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Start of the fuzzy function.
Default Value : 0
Suggested values : Apar ∈ {0, 5, 10, 20, 50, 100}
Typical range of values : 0 ≤ Apar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
. Cpar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
End of the fuzzy function.
Default Value : 255
Suggested values : Cpar ∈ {50, 100, 150, 200, 220, 255}
Typical range of values : 0 ≤ Cpar ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 5
Restriction : Apar ≤ Cpar
. Entropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Fuzzy entropy of a region.
Result
The operator fuzzy_entropy returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
fuzzy_entropy is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_perimeter
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
where M × N is the size of the image, and u(x(m, n)) is the fuzzy membership function (i.e., the input image).
This implementation uses Zadeh’s Standard-S function, which is defined as follows:
    mu_X(x) = 0                              for x <= a
            = 2 * ((x - a) / (c - a))^2      for a < x <= b
            = 1 - 2 * ((x - c) / (c - a))^2  for b < x <= c
            = 1                              for x > c

with b = (a + c) / 2.
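A plain C version of this membership function might look as follows, with a = Apar, c = Cpar, and b the midpoint; s_function is an illustrative name, not part of the HALCON API:

```c
/* Zadeh's standard S membership function with b = (a + c) / 2;
   illustrative sketch, not the HALCON API. */
static double s_function(double x, double a, double c)
{
    double b = (a + c) / 2.0;
    if (x <= a) return 0.0;
    if (x <= b) {            /* rising quadratic branch */
        double t = (x - a) / (c - a);
        return 2.0 * t * t;
    }
    if (x <= c) {            /* mirrored branch, continuous at b */
        double t = (x - c) / (c - a);
        return 1.0 - 2.0 * t * t;
    }
    return 1.0;
}
```

At the midpoint b the two quadratic branches meet at the value 0.5, so the function is continuous and monotone from 0 to 1.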
Result
The operator fuzzy_perimeter returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
fuzzy_perimeter is reentrant and automatically parallelized (on tuple level).
See also
fuzzy_entropy
References
M.K. Kundu, S.K. Pal: ‘"Automatic selection of object enhancement operator with quantitative justification based
on fuzzy set theoretic measures”; Pattern Recognition Letters 11; 1990; pp. 811-829.
Module
Foundation
(Numeric example: a small input image and the resulting co-occurrence matrices; the tabular layout could not be recovered from the extraction.)
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Image providing the gray values.
. Matrix (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : real
Co-occurrence matrix (matrices).
. LdGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of gray values to be distinguished (2^LdGray).
Default Value : 6
List of values : LdGray ∈ {1, 2, 3, 4, 5, 6, 7, 8}
Typical range of values : 1 ≤ LdGray ≤ 256 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Direction of neighbor relation.
Default Value : 0
List of values : Direction ∈ {0, 45, 90, 135}
Result
The operator gen_cooc_matrix returns the value H_MSG_TRUE if an image with defined gray values is
entered and the parameters are correct. The behavior in case of empty input (no input images available) is set
via the operator set_system(’no_object_result’,<Result>), the behavior in case of an empty region
via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gen_cooc_matrix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, gen_rectangle2, threshold,
erosion_circle, binomial_filter, gauss_image, smooth_image, sub_image
Alternatives
cooc_feature_image
See also
cooc_feature_matrix
Module
Foundation
Parameter
where MIN denotes the minimal gray value, e.g., -128 for an int1 image. The size of the result tuple therefore
follows from the ratio of the full gray value range to the quantization, e.g., for int2 images with Quantization
= 3.0: ceil(65536 / 3.0) = 21846. The gray value 0 of the signed image types int1 and int2 is mapped to index
128 and 32768, respectively; negative gray values receive smaller indices, positive gray values greater ones.
The histogram can also be returned directly as a graphic via the operators set_paint
(WindowHandle,’histogram’) and disp_image.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region in which the histogram is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2
Image the gray value distribution of which is to be calculated.
. Quantization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Quantization of the gray values.
Default Value : 1.0
List of values : Quantization ∈ {1.0, 2.0, 3.0, 5.0, 10.0}
Restriction : Quantization ≥ 1.0
. AbsoluteHisto (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . histogram-array ; Htuple . Hlong *
Absolute frequencies of the gray values.
Result
The operator gray_histo_abs returns the value H_MSG_TRUE if the image has defined gray values and
the parameters are correct. The behavior in case of empty input (no input images available) is set via the oper-
ator set_system(’no_object_result’,<Result>), the behavior in case of an empty region via
set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_histo_abs is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, gen_region_histo
Alternatives
min_max_gray, intensity, gray_histo
See also
set_paint, disp_image, histo_2dim, scale_image_max, entropy_gray
Module
Foundation
    HorProjection(r)  = 1 / n(r + r0) * sum over (r + r0, c + c0) in Region of Image(r + r0, c + c0)

    VertProjection(c) = 1 / n(c + c0) * sum over (r + r0, c + c0) in Region of Image(r + r0, c + c0)
Here, (r0 , c0 ) denotes the upper left corner of the smallest enclosing axis-parallel rectangle of the input region (see
smallest_rectangle1), and n(x) denotes the number of region points in the corresponding row r + r0 or
column c + c0 . Hence, the horizontal projection returns a one-dimensional function that reflects the vertical gray
value changes. Likewise, the vertical projection returns a function that reflects the horizontal gray value changes.
If Mode = ’rectangle’ is selected, the projection is performed in the direction of the major axes of the smallest
enclosing rectangle of arbitrary orientation of the input region (see smallest_rectangle2). Here, the hor-
izontal projection direction corresponds to the larger axis, while the vertical direction corresponds to the smaller
axis. In this mode, all gray values within the smallest enclosing rectangle of arbitrary orientation of the input
region are used to compute the projections.
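For a full rectangular region, the projections reduce to row and column means of the gray values, which can be sketched in plain C; gray_projections is an illustrative name, not part of the HALCON API:

```c
/* Row and column projections of a byte image over its full rectangular
   domain: hor must hold height values, vert must hold width values.
   Illustrative sketch, not the HALCON API. */
static void gray_projections(const unsigned char *img, int width, int height,
                             double *hor, double *vert)
{
    int r, c;
    for (r = 0; r < height; r++) hor[r] = 0.0;
    for (c = 0; c < width; c++)  vert[c] = 0.0;
    for (r = 0; r < height; r++)
        for (c = 0; c < width; c++) {
            double v = img[r * width + c];
            hor[r]  += v;    /* sum over each row ...    */
            vert[c] += v;    /* ... and over each column */
        }
    for (r = 0; r < height; r++) hor[r] /= width;    /* row means    */
    for (c = 0; c < width; c++)  vert[c] /= height;  /* column means */
}
```

The horizontal projection therefore varies only with vertical gray value changes, and vice versa.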
Parameter
/* Compute the 2D histogram of a texture-filtered image and the original: */
read_image(&Image,"affe");
texture_laws(Image,&Texture,"el",1,5);
draw_region(&Region,WindowHandle);
histo_2dim(Region,Texture,Image,&Histo2Dim);
set_part(WindowHandle,0,0,255,255);
disp_image(Histo2Dim,WindowHandle);
Complexity
If F is the area of the region, the runtime complexity is O(F + 256^2).
Result
The operator histo_2dim returns the value H_MSG_TRUE if both images have defined gray values.
The behavior in case of empty input (no input images available) is set via the operator
set_system(’no_object_result’,<Result>).
Attention
The calculation of Deviation does not follow the usual definition if the region of the image contains only one
pixel. In this case 0.0 is returned.
Parameter
Possible Successors
threshold
Alternatives
select_gray, min_max_gray
See also
mean_image, gray_histo, gray_histo_abs
Module
Foundation
Result
The operator min_max_gray returns the value H_MSG_TRUE if the input image has defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(’no_object_result’,<Result>), the behavior in case of an empty region
via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
min_max_gray is reentrant and processed without parallelization.
Possible Predecessors
draw_region, gen_circle, gen_ellipse, gen_rectangle1, threshold, regiongrowing
Possible Successors
threshold
Alternatives
select_gray, intensity
See also
gray_histo, scale_image, scale_image_max, learn_ndim_norm
Module
Foundation
    MRow = 1 / F^2 * sum over (r, c) in Regions of (r - r̄) * (Image(r, c) - Mean)

    MCol = 1 / F^2 * sum over (r, c) in Regions of (c - c̄) * (Image(r, c) - Mean)

where r̄ and c̄ denote the mean row and column coordinate of the region.
Thus Alpha indicates the gradient in the direction of the row axis (“down”), and Beta the gradient in the direction
of the column axis (to the “right”).
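For a full rectangular region, the two mixed moments can be sketched in plain C, with rbar and cbar the coordinate means; gray_mixed_moments is an illustrative name, not part of the HALCON API:

```c
#include <math.h>

/* Mixed gray value moments MRow and MCol of a byte image over its full
   rectangular domain; illustrative sketch, not the HALCON API. */
static void gray_mixed_moments(const unsigned char *img, int width, int height,
                               double *mrow, double *mcol)
{
    double F = (double)width * height;             /* region area       */
    double rbar = (height - 1) / 2.0;              /* mean row index    */
    double cbar = (width - 1) / 2.0;               /* mean column index */
    double mean = 0.0, sr = 0.0, sc = 0.0;
    int r, c;
    for (r = 0; r < height; r++)
        for (c = 0; c < width; c++)
            mean += img[r * width + c];
    mean /= F;                                     /* mean gray value   */
    for (r = 0; r < height; r++)
        for (c = 0; c < width; c++) {
            double d = img[r * width + c] - mean;
            sr += (r - rbar) * d;
            sc += (c - cbar) * d;
        }
    *mrow = sr / (F * F);
    *mcol = sc / (F * F);
}
```

For an image whose gray values increase only from top to bottom, MCol vanishes while MRow is positive.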
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be checked.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / uint2 / real
Corresponding gray values.
. MRow (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mixed moments along a line.
. MCol (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Mixed moments along a column.
Calculate the deviation of the gray values from the approximating image plane.
The operator plane_deviation calculates the deviation of the gray values in Image from the approximation
of the gray values by a plane. In contrast to the standard deviation (see intensity), slanted gray value
planes also receive the value zero. The gray value plane is calculated according to gen_image_gray_ramp.
If F is the area of the region, α, β, µ the parameters of the image plane, and (r0, c0) the center, Deviation is defined by:
    Deviation = sqrt( 1/F * sum over (r, c) in Regions of ((α(r − r0) + β(c − c0) + µ) − Image(r, c))^2 )
Attention
It should be noted that the calculation of Deviation does not follow the usual definition. It is defined to return
the value 0.0 for an image with only one pixel.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions, of which the plane deviation is to be calculated.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / cyclic
Gray value image.
. Deviation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Deviation of the gray values within a region.
Complexity
If F is the area of the region, the runtime complexity is O(F).
Result
The operator plane_deviation returns the value H_MSG_TRUE if Image is of the type byte.
The behavior in case of empty input (no input images available) is set via the operator
set_system(’no_object_result’,<Result>), the behavior in case of an empty region via
set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
plane_deviation is reentrant and automatically parallelized (on tuple level).
Alternatives
intensity, gen_image_gray_ramp, sub_image
See also
moments_gray_plane
Module
Foundation
Attention
If only one feature is used, the value of Operation is meaningless. Several features are processed in the order in
which they are entered. The maximum number of features is limited to 100.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Gray value image.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions having features within the limits.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Names of the features.
Default Value : "mean"
List of values : Features ∈ {"area", "row", "column", "ra", "rb", "phi", "min", "max", "mean", "deviation",
"plane_deviation", "anisotropy", "entropy", "fuzzy_entropy", "fuzzy_perimeter", "moments_row",
"moments_column", "alpha", "beta"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Logical connection of features.
Default Value : "and"
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Lower limit(s) of features.
Default Value : 128.0
Suggested values : Min ∈ {0.5, 1.0, 10.0, 20.0, 50.0, 128.0, 255.0, 1000.0}
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Upper limit(s) of features.
Default Value : 255.0
Suggested values : Max ∈ {0.5, 1.0, 10.0, 20.0, 50.0, 128.0, 255.0, 1000.0}
Complexity
If F is the area of the region and N the number of features, the runtime complexity is O(F * N).
Result
The operator select_gray returns the value H_MSG_TRUE if the input image has defined gray values
and the parameters are correct. The behavior in case of empty input (no input images available) is set via the
operator set_system(’no_object_result’,<Result>), the behavior in case of an empty region
via set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
select_gray is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, mean_image, entropy_image, sobel_amp, median_separate
Possible Successors
select_shape, select_gray, shape_trans, reduce_domain, count_obj
See also
deviation_image, entropy_gray, intensity, mean_image, min_max_gray, select_obj
Module
Foundation
The histogram can also be displayed directly as a graphic via the operators set_paint
(WindowHandle,’component_histogram’) and disp_image.
Attention
The operator shape_histo_all expects a region and exactly one gray value image as input. Because of the
power of this operator, its runtime is relatively long!
Parameter
/* Count the connected components of the thresholded region for each
   threshold and normalize to relative frequencies: */
reduce_domain(Region,Image,&RegionGray);
for (i=0; i<256; i++) {
  threshold(RegionGray,&Seg,(double)i,255.0);
  connect_and_holes(Seg,&AbsHisto[i],&NumHoles);
  clear_obj(Seg);
}
clear_obj(RegionGray);
sum = 0;
for (i=0; i<256; i++)
  sum += AbsHisto[i];
for (i=0; i<256; i++)
  RelHist[i] = (double)AbsHisto[i]/sum;
Complexity
If F is the area
√ √ of the input region and N the mean number of connected components the runtime complexity is
O(255(F + F N )).
Result
The operator shape_histo_all returns the value H_MSG_TRUE if an image with defined gray
values is entered. The behavior in case of empty input (no input images) is set via the operator
set_system(’no_object_result’,<Result>), the behavior in case of an empty region via
set_system(’empty_region_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
shape_histo_all is reentrant and processed without parallelization.
Possible Successors
histo_to_thresh, threshold, gen_region_histo
Alternatives
shape_histo_point
See also
connection, convexity, compactness, connect_and_holes, entropy_gray, gray_histo,
set_paint, count_obj
Module
Foundation
5.7 Format
change_format ( const Hobject Image, Hobject *ImagePart, Hlong Width,
Hlong Height )
Parameter
Result
crop_domain_rel returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception is raised.
Parallelization Information
crop_domain_rel is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
reduce_domain, threshold, connection, regiongrowing, pouring
Alternatives
crop_domain, crop_rectangle1
See also
smallest_rectangle1, intersection, gen_rectangle1, clip_region
Module
Foundation
Possible Successors
disp_image
Alternatives
crop_rectangle1, crop_domain, change_format, reduce_domain
See also
zoom_image_size, zoom_image_factor
Module
Foundation
Result
tile_channels returns H_MSG_TRUE if all parameters are correct and no error occurs during execution.
If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If
necessary, an exception is raised.
Parallelization Information
tile_channels is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
append_channel
Alternatives
tile_images, tile_images_offset
See also
change_format, crop_part, crop_rectangle1
Module
Foundation
Result
tile_images returns H_MSG_TRUE if all parameters are correct and no error occurs during execution. If the
input is empty the behavior can be set via set_system(’no_object_result’,<Result>). If necessary,
an exception is raised.
Tile multiple image objects into a large image with explicit positioning information.
tile_images_offset tiles multiple input image objects, which must contain the same number of channels,
into a large image. The input image object Images contains Num images, which may be of different size. The
output image TiledImage contains as many channels as the input images. The size of the output image is
determined by the parameters Width and Height. The position of the upper left corner of the input images in
the output images is determined by the parameters OffsetRow and OffsetCol. Both parameters must contain
exactly Num values. Optionally, each input image can be cropped to an arbitrary rectangle that is smaller than the
input image. To do so, the parameters Row1, Col1, Row2, and Col2 must be set accordingly. If any of these four
parameters is set to -1, the corresponding input image is not cropped. In any case, all four parameters must contain
Num values. If the input images are cropped the position parameters OffsetRow and OffsetCol refer to the
upper left corner of the cropped image. If the input images overlap each other in the output image (while taking
into account their respective domains), the image with the higher index in Images overwrites the image data of
the image with the lower index. The domain of TiledImage is obtained by copying the domains of Images to
the corresponding locations in the output image.
Attention
If the input images all have the same size and tile the output image exactly, the operator tile_images will
usually be slightly faster.
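The placement logic can be sketched in plain C for single-channel byte images without cropping; tile_offset is an illustrative name, not part of the HALCON API, and later calls overwrite earlier pixels just as higher-indexed images do:

```c
#include <string.h>

/* Copy one byte image into a larger output image at (off_row, off_col),
   clipping to the output bounds; illustrative sketch, not the HALCON API. */
static void tile_offset(unsigned char *out, int out_w, int out_h,
                        const unsigned char *in, int in_w, int in_h,
                        int off_row, int off_col)
{
    int r;
    for (r = 0; r < in_h; r++) {
        int out_r = off_row + r;
        if (out_r < 0 || out_r >= out_h) continue;    /* clip rows */
        /* clip the copied span to the output width */
        int start = off_col < 0 ? -off_col : 0;
        int len = in_w - start;
        if (off_col + start + len > out_w) len = out_w - off_col - start;
        if (len > 0)
            memcpy(out + (long)out_r * out_w + off_col + start,
                   in + (long)r * in_w + start, (size_t)len);
    }
}
```

Calling this once per input image, in index order, reproduces the overwrite behavior described above.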
Parameter
/* Example 1 */
/* Grab 2 (multi-channel) NTSC images, crop the bottom 5 lines off */
/* of each image, the right 5 columns off of the first image, and */
/* the left 5 columns off of the second image, and put the cropped */
/* images side-by-side. */
gen_empty_obj (Images)
for I := 1 to 2 by 1
grab_image_async (ImageGrabbed, FGHandle, -1)
concat_obj (Images, ImageGrabbed, Images)
endfor
tile_images_offset (Images, TiledImage, [0,0], [0,635], [0,0],
[0,5], [474,474], [634,639], 1270, 475)
/* Example 2 */
/* Enlarge image by 15 rows and columns on all sides */
EnlargeColsBy := 15
EnlargeRowsBy := 15
get_image_pointer1 (Image, Pointer, Type, WidthImage, HeightImage)
tile_images_offset (Image, EnlargedImage, EnlargeRowsBy, EnlargeColsBy,
-1, -1, -1, -1, WidthImage + EnlargeColsBy*2,
HeightImage + EnlargeRowsBy*2)
Result
tile_images_offset returns H_MSG_TRUE if all parameters are correct and no error occurs during execu-
tion. If the input is empty the behavior can be set via set_system(’no_object_result’,<Result>).
If necessary, an exception handling is raised.
Parallelization Information
tile_images_offset is reentrant and automatically parallelized (on channel level).
Possible Predecessors
append_channel
Alternatives
tile_channels, tile_images
See also
change_format, crop_part, crop_rectangle1
Module
Foundation
5.8 Manipulation
/* Copy a circular part of the image ’monkey’ into a new image (New1): */
read_image(&Image,"monkey");
gen_circle(&Circle,200.0,200.0,150.0);
reduce_domain(Image,Circle,&Mask);
/* New image with black (0) background */
gen_image_proto(Image,&New1,0.0);
/* Copy a part of the image ’monkey’ into New1 */
overpaint_gray(New1,Mask);
Result
overpaint_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
overpaint_gray is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto
Alternatives
get_image_pointer1, paint_gray, set_grayval, copy_image
See also
paint_region, overpaint_region
Module
Foundation
The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
If you do not want to modify Image itself, you can use the operator paint_region, which returns the result
in a newly created image.
Attention
overpaint_region modifies the content of an already existing image (Image). Besides, even other image
objects may be affected: For example, if you created Image via copy_obj from another image object (or
vice versa), overpaint_region will also modify the image matrix of this other image object. Therefore,
overpaint_region should only be used to overpaint newly created image objects. Typical operators for this
task are, e.g., gen_image_const (creates a new image with a specified size), gen_image_proto (creates
an image with the size of a specified prototype image) or copy_image (creates an image as the copy of a
specified image).
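The shared-matrix pitfall can be sketched in plain C (the SketchImage type and function names are hypothetical, not HALCON's Hobject implementation): a shallow copy aliases the pixel buffer, so painting through either handle changes both, whereas a deep copy as made by copy_image stays independent.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: two ways of "copying" an image object. */
typedef struct { unsigned char *pixels; int size; } SketchImage;

/* Shares the pixel buffer, like copy_obj shares the image matrix. */
static SketchImage copy_obj_sketch(SketchImage img) { return img; }

/* Duplicates the pixel buffer, like copy_image. */
static SketchImage copy_image_sketch(SketchImage img)
{
    SketchImage dup = { malloc(img.size), img.size };
    memcpy(dup.pixels, img.pixels, img.size);
    return dup;
}

/* Paints in place, like the overpaint_* operators. */
static void overpaint_sketch(SketchImage img, unsigned char gray)
{
    memset(img.pixels, gray, img.size);
}
```

Painting through the shallow copy visibly changes the original object, which is exactly why the overpaint operators should only be applied to freshly created images.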
Parameter
. Image (input_object) . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real /
complex
Image in which the regions are to be painted.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be painted into the input image.
. Grayval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Desired gray values of the regions.
Default Value : 255.0
Suggested values : Grayval ∈ {0.0, 1.0, 2.0, 5.0, 10.0, 16.0, 32.0, 64.0, 128.0, 253.0, 254.0, 255.0}
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Paint regions filled or as boundaries.
Default Value : "fill"
List of values : Type ∈ {"fill", "margin"}
Example
gen_rectangle1(&Rectangle,100.0,100.0,300.0,300.0);
/* generate a black image */
gen_image_const(&New1,"byte",768,576);
/* paint a white rectangle into the image */
overpaint_region(New1,Rectangle,255.0,"fill");
Result
overpaint_region returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
overpaint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, paint_region, paint_xld
See also
reduce_domain, set_draw, paint_gray, overpaint_gray, gen_image_const
Module
Foundation
/* Copy a circular part of the image ’monkey’ into the image ’fabrik’: */
read_image(&Image,"monkey");
gen_circle(&Circle,200.0,200.0,150.0);
reduce_domain(Image,Circle,&Mask);
read_image(&Image2,"fabrik");
/* Copy a part of the image ’monkey’ into ’fabrik’ */
paint_gray(Mask,Image2,&MixedImage);
Result
paint_gray returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
paint_gray is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto
Alternatives
get_image_pointer1, set_grayval, copy_image, overpaint_gray
See also
paint_region, overpaint_region
Module
Foundation
The parameter Type determines whether the region should be painted filled (’fill’) or whether only its boundary
should be painted (’margin’).
As an alternative to paint_region, you can use the operator overpaint_region, which directly paints
the regions into Image.
Parameter
read_image(&Image,"monkey");
gen_rectangle1(&Rectangle,100.0,100.0,300.0,300.0);
/* paint a white rectangle */
paint_region(Rectangle,Image,&ImageResult,255.0,"fill");
Result
paint_region returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be
set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
paint_region is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, reduce_domain
Alternatives
set_grayval, overpaint_region, paint_xld
See also
reduce_domain, paint_gray, overpaint_gray, set_draw, gen_image_const
Module
Foundation
Parameter
copy_image(Image1,&Image3);
compose3(Image1,Image2,Image3,&Image);
/* extract subpixel border */
threshold_sub_pix(Image1,&Border,128);
/* select the circle and the arrows */
select_obj(Border,&circle,14);
select_obj(Border,&arrows,16);
concat_obj(circle,arrows,&green_dot);
/* paint a green circle and white arrows,
* therefore define tuple grayval:=[0,255,0,255,255,255].
* (to paint all objects e.g. blue define grayval:=[0,0,255]) */
T_paint_xld(green_dot,Image,&ImageResult,grayval);
Result
paint_xld returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can be set via
set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
paint_xld is reentrant and processed without parallelization.
Possible Predecessors
read_image, gen_image_const, gen_image_proto, gen_contour_polygon_xld,
threshold_sub_pix
Alternatives
set_grayval, paint_gray, paint_region
See also
gen_image_const
Module
Foundation
. Image (input_object) . . . . . . . . . . . . image ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Image to be modified.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) Hlong
Row coordinates of the pixels to be modified.
Default Value : 0
Suggested values : Row ∈ {0, 10, 50, 127, 255, 511}
Typical range of values : 0 ≤ Row
Restriction : (0 ≤ Row) ∧ (Row < height(Image))
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) Hlong
Column coordinates of the pixels to be modified.
Default Value : 0
Suggested values : Column ∈ {0, 10, 50, 127, 255, 511}
Typical range of values : 0 ≤ Column
Restriction : (0 ≤ Column) ∧ (Column < width(Image))
5.9 Type-Conversion
simply “clipped.” It is therefore advisable to adapt the range of gray values by calling scale_image before
calling this operator. For images of type complex, only the real part is converted. The imaginary part is ignored.
This facilitates an efficient conversion of images that have been transformed back from the frequency domain.
Such images always have an imaginary part of 0.
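A minimal sketch of the clipping rule described above (assumed behavior for a conversion to ’byte’; the rescaling that scale_image would perform beforehand is deliberately omitted):

```c
#include <assert.h>

/* Sketch of clipping on conversion to 'byte': out-of-range values are
 * clipped to the [0, 255] range rather than scaled. For a complex input
 * only the real part would enter this function. Illustrative only. */
static unsigned char to_byte_clipped(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)v;
}
```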
Attention
If the source and destination image type are identical, no new image matrix is allocated.
Parameter
Lines
6.1 Access
T_approx_chain ( const Htuple Row, const Htuple Column,
const Htuple MinWidthCoord, const Htuple MaxWidthCoord,
const Htuple ThreshStart, const Htuple ThreshEnd,
const Htuple ThreshStep, const Htuple MinWidthSmooth,
const Htuple MaxWidthSmooth, const Htuple MinWidthCurve,
const Htuple MaxWidthCurve, const Htuple Weight1,
const Htuple Weight2, const Htuple Weight3, Htuple *ArcCenterRow,
Htuple *ArcCenterCol, Htuple *ArcAngle, Htuple *ArcBeginRow,
Htuple *ArcBeginCol, Htuple *LineBeginRow, Htuple *LineBeginCol,
Htuple *LineEndRow, Htuple *LineEndCol, Htuple *Order )
firstline = get_i(Tline,0);
firstcol = get_i(Tcol,0);
/* approximation with lines and circular arcs */
set_d(t1,0.4,0);
set_d(t2,2.4,0);
set_d(t3,0.3,0);
set_d(t4,0.9,0);
set_d(t5,0.2,0);
set_d(t6,0.4,0);
set_d(t7,2.4,0);
set_i(t8,2,0);
set_i(t9,12,0);
set_d(t10,1.0,0);
set_d(t11,1.0,0);
set_d(t12,1.0,0);
T_approx_chain(Rows,Columns,t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,
&Bzl,&Bzc,&Br,&Bwl,&Bwc,&Ll0,&Lc0,&Ll1,&Lc1,&order);
nob = length_tuple(Bzl);
nol = length_tuple(Ll0);
/* draw lines and arcs */
set_i(WindowHandleTuple,WindowHandle,0);
set_line_width(WindowHandle,4);
if (nob>0) T_disp_arc(WindowHandleTuple,Bzl,Bzc,Br,Bwl,Bwc);
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);
Result
The operator approx_chain returns the value H_MSG_TRUE if the parameters are correct. Otherwise an
exception is raised.
Parallelization Information
approx_chain is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain_simple
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation
set_line_width(WindowHandle,1);
if (nol>0) T_disp_line(WindowHandleTuple,Ll0,Lc0,Ll1,Lc1);
Result
The operator approx_chain_simple returns the value H_MSG_TRUE if the parameters are correct. Other-
wise an exception is raised.
Parallelization Information
approx_chain_simple is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, edges_image, get_region_contour, threshold, hysteresis_threshold
Possible Successors
set_line_width, disp_arc, disp_line
Alternatives
get_region_polygon, approx_chain
See also
get_region_chain, smallest_circle, disp_circle, disp_line
Module
Foundation
6.2 Features
line_orientation ( double RowBegin, double ColBegin, double RowEnd,
double ColEnd, double *Phi )
Alternatives
line_position, select_lines, partition_lines
See also
line_position, select_lines, partition_lines, detect_edge_segments
Module
Foundation
Attention
If only one feature is used the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
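The and/or combination of feature tests can be sketched as follows for two features, length and row (illustrative names and signature, not the HALCON API):

```c
#include <assert.h>

static int in_range(double v, double min, double max)
{
    return v >= min && v <= max;
}

/* Sketch: each feature check yields a boolean; Operation ("and"/"or")
 * decides how the booleans are merged into the selection decision. */
static int select_line_sketch(double length, double row,
                              double len_min, double len_max,
                              double row_min, double row_max,
                              const char *operation)
{
    int ok_len = in_range(length, len_min, len_max);
    int ok_row = in_range(row, row_min, row_max);
    return (operation[0] == 'a') ? (ok_len && ok_row)
                                 : (ok_len || ok_row);
}
```

With a single feature both combinations reduce to the same test, which is why Operation is meaningless in that case.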
Parameter
. RowBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong
Row coordinates of the starting points of the input lines.
. ColBeginIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong
Column coordinates of the starting points of the input lines.
. RowEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong
Row coordinates of the ending points of the input lines.
. ColEndIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong
Column coordinates of the ending points of the input lines.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for selection.
List of values : Feature ∈ {"length", "row", "column", "phi"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Desired combination of the features.
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Lower limits of the features or ’min’.
Default Value : "min"
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / Hlong / double
Upper limits of the features or ’max’.
Default Value : "max"
. RowBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the lines fulfilling the conditions.
. ColBeginOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x-array ; Htuple . Hlong *
Column coordinates of the starting points of the lines fulfilling the conditions.
. RowEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y-array ; Htuple . Hlong *
Row coordinates of the ending points of the lines fulfilling the conditions.
. ColEndOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x-array ; Htuple . Hlong *
Column coordinates of the ending points of the lines fulfilling the conditions.
. FailRowBOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y-array ; Htuple . Hlong *
Row coordinates of the starting points of the lines not fulfilling the conditions.
Attention
If only one feature is used the value of Operation is meaningless. Several features are processed according to
the sequence in which they are passed.
Parameter
Matching
7.1 Component-Based
clear_all_component_models ( )
T_clear_all_component_models ( )
clear_all_training_components ( )
T_clear_all_training_components ( )
Parallelization Information
clear_all_training_components is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, write_training_components
See also
clear_training_components
Module
Matching
Possible Predecessors
train_model_components, write_training_components
See also
clear_all_training_components
Module
Matching
Adopt new parameters that are used to create the model components into the training result.
With cluster_model_components you can modify parameters after a first training has been per-
formed using train_model_components. cluster_model_components sets the crite-
rion AmbiguityCriterion that is used to solve the ambiguities, the maximum contour overlap
MaxContourOverlap, and the cluster threshold of the training result ComponentTrainingID to
the specified values. A detailed description of these parameters can be found in the documentation of
train_model_components. By modifying these parameters, the way in which the initial components are
merged into rigid model components changes. For example, the greater the cluster threshold is chosen, the fewer
initial components are merged.
The rigid model components are returned in ModelComponents. In order to receive reasonable results, it is es-
sential that the same training images that were used to perform the training with train_model_components
are passed in TrainingImages. The pose of the newly clustered components within the training images is
determined using the shape-based matching. As in train_model_components, one can decide whether the
shape models should be pregenerated by using set_system(’pregenerate_shape_models’,...).
Furthermore, set_system(’border_shape_models’,...) can be used to determine whether the mod-
els must lie completely within the training images or whether they can extend partially beyond the image border.
Thus, you can select suitable parameter values interactively by repeatedly calling
inspect_clustered_components with different parameter values and then setting the chosen val-
ues by using cluster_model_components.
Parameter
Result
If the parameter values are correct, the operator cluster_model_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
cluster_model_components is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, inspect_clustered_components
Possible Successors
get_training_components, create_trained_component_model,
modify_component_relations, write_training_components,
get_component_relations, clear_training_components,
clear_all_training_components
Module
Matching
Prepare a component model for matching based on explicitly specified components and relations.
create_component_model prepares patterns, which are passed in the form of a model image
ModelImage and several regions ComponentRegions, as a component model for matching. The out-
put parameter ComponentModelID is a handle for this model, which is used in subsequent calls to
find_component_model. In contrast to create_trained_component_model, no preceding training
with train_model_components needs to be performed before calling create_component_model.
Each of the regions passed in ComponentRegions describes a separate model component. Later, the index of
a component region in ComponentRegions is used to denote the model component. The reference point of a
component is the center of gravity of its associated region, which is passed in ComponentRegions. It can be
calculated by calling area_center.
The relative movements (relations) of the model components can be set by passing VariationRow,
VariationColumn, and VariationAngle. Because directly specifying the relations is complicated, the
variations of the model components are passed instead. The variations describe the movements of the components
independently of each other relative to their poses in the model image ModelImage. The parameters
VariationRow and VariationColumn describe the movement of the components in row and column
direction by ±½ VariationRow and ±½ VariationColumn, respectively. The parameter VariationAngle
describes the angle variation of the component by ±½ VariationAngle. Based on these values, the relations
are automatically computed. The three parameters must either contain one element, in which case the parameter is
used for all model components, or must contain the same number of elements as ComponentRegions, in which
case each parameter element refers to the corresponding model component in ComponentRegions.
The parameters AngleStart and AngleExtent determine the range of possible rotations of the component
model in an image.
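How a variation value bounds a component's pose can be sketched like this (the internal computation of the relations is not documented here; the ±½·variation interval simply follows the description above, and all names are illustrative):

```c
#include <assert.h>

typedef struct { double lo, hi; } Interval;

/* Sketch: a component's coordinate may deviate by +/- half the given
 * variation from its pose in the model image. */
static Interval variation_interval(double model_pos, double variation)
{
    Interval iv = { model_pos - variation / 2.0,
                    model_pos + variation / 2.0 };
    return iv;
}
```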
Internally, a separate shape model is built for each model component (see create_shape_model). There-
fore, the parameters ContrastLowComp, ContrastHighComp, MinSizeComp, MinContrastComp,
MinScoreComp, NumLevelsComp, AngleStepComp, OptimizationComp, MetricComp, and
PregenerationComp correspond to the parameters of create_shape_model, with the following differ-
ences: First, in the parameter Contrast of create_shape_model the upper as well as the lower threshold
for the hysteresis threshold method can be passed. Additionally, a third value, which suppresses small connected
contour regions, can be passed. In contrast, the operator create_component_model offers three sepa-
rate parameters ContrastHighComp, ContrastLowComp, and MinSizeComp in order to set these three
values. Consequently, the automatic computation of the contrast threshold(s) also differs. If both hysteresis
thresholds should be determined automatically, both ContrastLowComp and ContrastHighComp must
be set to ’auto’. In contrast, if only one threshold value should be determined, ContrastLowComp must be
set to ’auto’ while ContrastHighComp must be set to an arbitrary value different from ’auto’. Secondly,
the parameter Optimization of create_shape_model provides the possibility to reduce the number
of model points as well as the possibility to completely pregenerate the shape model. In contrast, the operator
create_component_model uses a separate parameter PregenerationComp in order
to decide whether the shape models should be completely pregenerated or not. A third difference concerning
the parameter MinScoreComp should be noted. When using the shape-based matching, this parameter need
not be passed when preparing the shape model using create_shape_model, but only during the search
using find_shape_model. In contrast, when preparing the component model it is favorable to analyze ro-
tational symmetries of the model components and similarities between the model components. However, this
analysis only leads to meaningful results if the value for MinScoreComp that is used during the search (see
find_component_model) is already approximately known.
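The hysteresis idea behind the two contrast thresholds can be sketched in one dimension (this is the generic hysteresis-thresholding technique, not HALCON's implementation):

```c
#include <assert.h>

/* Generic 1-D hysteresis thresholding: values >= high are accepted
 * outright; values >= low are accepted only if connected to an already
 * accepted neighbor. Illustrative only. */
static void hysteresis_1d(const int *v, int n, int low, int high, int *out)
{
    for (int i = 0; i < n; i++)
        out[i] = v[i] >= high;
    int changed = 1;
    while (changed) {                       /* propagate along neighbors */
        changed = 0;
        for (int i = 0; i < n; i++)
            if (!out[i] && v[i] >= low &&
                ((i > 0 && out[i - 1]) || (i + 1 < n && out[i + 1]))) {
                out[i] = 1;
                changed = 1;
            }
    }
}
```

Setting both thresholds equal degenerates to a plain threshold, which is why a single ’auto’ value can also be requested.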
In addition to the parameters ContrastLowComp, ContrastHighComp, and MinSizeComp also the pa-
rameters MinContrastComp, NumLevelsComp, AngleStepComp, and OptimizationComp can be au-
tomatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number of
elements as the number of regions in ComponentRegions, in which case each parameter element refers to the
corresponding element in ComponentRegions.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using find_component_model in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
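The root-first, predecessor-relative search order can be sketched with a parent array (illustrative only; HALCON builds and uses this tree internally):

```c
#include <assert.h>

/* Sketch of the component search order: parent[i] is the search-tree
 * predecessor of component i (-1 marks the root). A component can be
 * searched once its predecessor has been located. Supports up to 16
 * components for brevity. Returns 0 on a malformed tree. */
static int search_order_sketch(const int *parent, int n, int *order)
{
    int found = 0, placed[16] = {0};
    while (found < n) {
        int progress = 0;
        for (int i = 0; i < n; i++)
            if (!placed[i] &&
                (parent[i] < 0 || placed[parent[i]])) {
                order[found++] = i;   /* predecessor known: search now */
                placed[i] = 1;
                progress = 1;
            }
        if (!progress) return 0;      /* cycle or missing root */
    }
    return 1;
}
```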
The root component can be passed as an input parameter of find_component_model during the search. To
what extent a model component is suited to act as the root component depends on several factors. In principle, a
model component that can be found in the image with a high probability should be chosen. Therefore, a component
that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as the root
component. Additionally, the computation time that is associated with the root component during the search
can serve as a criterion. A ranking of the model components that is based on the latter criterion is returned in
RootRanking. In this parameter the indices of the model components are sorted in ascending order according
to their associated search time, i.e., RootRanking[0] contains the index of the model component that, chosen
as root component, allows the fastest search. Note that the ranking returned in RootRanking represents only a
coarse estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value
of the system parameter ’border_shape_models’ are identical when calling create_component_model and
find_component_model.
Parameter
. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image from which the shape models of the model components should be created.
. ComponentRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Input regions from which the shape models of the model components should be created.
. VariationRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Variation of the model components in row direction.
Suggested values : VariationRow ∈ {0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 150}
Restriction : VariationRow ≥ 0
. VariationColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong
Variation of the model components in column direction.
Suggested values : VariationColumn ∈ {0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 150}
Restriction : VariationColumn ≥ 0
. VariationAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double
Angle variation of the model components.
Suggested values : VariationAngle ∈ {0, 0.017, 0.035, 0.05, 0.07, 0.09, 0.17, 0.35, 0.52, 0.67, 0.87}
Restriction : VariationAngle ≥ 0
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the component model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation of the component model.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
Result
If the parameters are valid, the operator create_component_model returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
create_component_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, concat_obj
Possible Successors
find_component_model
Alternatives
create_trained_component_model
See also
create_shape_model, find_shape_model
Module
Matching
is a handle for this model, which is used in subsequent calls to find_component_model. In con-
trast to create_component_model, the model components must have been previously trained using
train_model_components before calling create_trained_component_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations of the component
model in an image.
Internally, a separate shape model is built for each model component (see create_shape_model).
Therefore, the parameters MinContrastComp, MinScoreComp, NumLevelsComp, AngleStepComp,
OptimizationComp, MetricComp, and PregenerationComp correspond to the parameters of
create_shape_model, with the following differences: First, the parameter Optimization of
create_shape_model provides the possibility to reduce the number of model points as well as the possibility
to completely pregenerate the shape model. In contrast, the operator create_trained_component_model
uses a separate parameter PregenerationComp in order to decide whether the shape models should be com-
pletely pregenerated or not. A second difference concerning the parameter MinScoreComp should be noted.
When using the shape-based matching, this parameter need not be passed when preparing the shape model us-
ing create_shape_model, but only during the search using find_shape_model. In contrast, when
preparing the component model it is favorable to analyze rotational symmetries of the model components and
similarities between the model components. However, this analysis only leads to meaningful results if the value
for MinScoreComp that is used during the search (see find_component_model) is already approximately
known. After the search with find_component_model the pose parameters of the components in a search
image are returned. Note that the pose parameters refer to the reference points of the components. The reference
point of a component is the center of gravity of its associated region that is returned in ModelComponents of
train_model_components.
The parameters MinContrastComp, NumLevelsComp, AngleStepComp, and OptimizationComp can
be automatically determined by passing ’auto’ for the respective parameters.
All component-specific input parameters (parameter names terminate with the suffix Comp) must either contain
one element, in which case the parameter is used for all model components, or must contain the same number
of elements as the number of model components contained in ComponentTrainingID, in which case each
parameter element refers to the corresponding component in ComponentTrainingID.
In addition to the individual shape models, the component model also contains information about the way the
single model components must be searched relative to each other using find_component_model in order to
minimize the computation time of the search. For this, the components are represented in a tree structure. First, the
component that stands at the root of this search tree (root component) is searched. Then, the remaining components
are searched relative to the pose of their predecessor in the search tree.
The root component can be passed as an input parameter of find_component_model during the search. To
what extent a model component is suited to act as root component depends on several factors. In principle, a model
component that can be found in the image with a high probability should be chosen. Therefore, a component that
is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as root component.
Additionally, the computation time that is associated with the root component during the search can serve as a
criterion. A ranking of the model components that is based on the latter criterion is returned in RootRanking.
In this parameter the indices of the model components are sorted in ascending order according to their associ-
ated computation time, i.e., RootRanking[0] contains the index of the model component that, chosen as root
component, allows the fastest search. Note that the ranking returned in RootRanking represents only a coarse
estimation. Furthermore, the calculation of the root ranking assumes that the image size as well as the value of the
system parameter ’border_shape_models’ are identical when calling create_trained_component_model
and find_component_model.
Parameter
. ComponentTrainingID (input_control) . . . . . . . . . . . . . . . . . . . . . . component_training ; (Htuple .) Hlong
Handle of the training result.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Smallest rotation of the component model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Extent of the rotation of the component model.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.28, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
Result
If the parameters are valid, the operator create_trained_component_model returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
create_trained_component_model is processed completely exclusively without parallelization.
Possible Predecessors
train_model_components, read_training_components
Possible Successors
find_component_model
Alternatives
create_component_model
See also
create_shape_model, find_shape_model
Module
Matching
The operator find_component_model finds the best NumMatches instances of the compo-
nent model ComponentModelID in the input image Image. The model must have been created
previously by calling create_trained_component_model, create_component_model, or
read_component_model.
The components of the component model ComponentModelID are represented in a tree structure. The component that stands at the root of this search tree (root component) is searched within the full search space, i.e., at
all allowed positions and in the allowed range of orientations (see below). In contrast, the remaining components
are searched relative to the pose of their predecessor in the search tree within a restricted search space that is com-
puted from the relations (recursive search). The index of the root component can be passed in RootComponent.
To what extent a model component is suited to act as root component depends on several factors. In principle, a
model component that can be found in the image with a high probability should be chosen. Therefore, a component that is sometimes occluded to a high degree or that is missing in some cases is not well suited to act as
root component. The behavior of the operator when dealing with a missing or strongly occluded root compo-
nent can be set with IfRootNotFound (see below). Also, the computation time that is associated with the
root component during the search can serve as a criterion. A ranking of the model components that is based on
the latter criterion is returned in RootRanking of the operator create_trained_component_model or
create_component_model, respectively. If the complete ranking is passed in RootComponent, the first
value RootComponent[0] is automatically selected as the root component. The domain of the image Image
determines the search space for the reference point, i.e., the allowed positions, of the root component. The pa-
rameters AngleStartRoot and AngleExtentRoot specify the allowed angle range within which the root
component is searched. If necessary, the range of rotations is clipped to the range given when the component model
was created with create_trained_component_model or create_component_model, respectively.
The angle range for each component can be queried with get_shape_model_params after requesting the
corresponding shape model handles with get_component_model_params.
The position and rotation of the model components of all found component model instances are returned
in RowComp, ColumnComp, and AngleComp. The coordinates RowComp and ColumnComp are the
coordinates of the origin (reference point) of the component in the search image. If the component
model was created with create_trained_component_model by training, the origin of the compo-
nent is the center of gravity of the respective returned contour region in ModelComponents of the op-
erator train_model_components. Otherwise, if the component model was created manually with
create_component_model, the origin of the component is the center of gravity of the corresponding passed
component region ComponentRegion of the operator create_component_model. Since the relations be-
tween the components in ComponentModelID refer to this reference point, the origin of the components must
not be modified by using set_shape_model_origin.
Additionally, the score of each found component instance is returned in ScoreComp. The score is a number
between 0 and 1, and is an approximate measure of how much of the component is visible in the image. If,
for example, half of the component is occluded, the score cannot exceed 0.5. While ScoreComp represents
the score of the instances of the single components, Score contains the score of the instances of the entire
component model. More precisely, Score contains the weighted mean of the associated values of ScoreComp.
The weighting is performed according to the number of model points within the respective component.
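The weighted mean can be sketched in plain C; the function name and the per-component point counts below are illustrative and not part of the HALCON API:

```c
#include <assert.h>

/* Score of a component model instance as the weighted mean of the
   per-component scores, weighted by the number of model points of
   each component (illustrative sketch of the documented formula). */
double instance_score(const double *score_comp, const int *num_points, int n)
{
    double sum = 0.0, total_points = 0.0;
    for (int i = 0; i < n; i++) {
        sum += score_comp[i] * num_points[i];
        total_points += num_points[i];
    }
    return total_points > 0.0 ? sum / total_points : 0.0;
}
```

A component with three times as many model points as another thus contributes three times as strongly to Score.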
In order to assign the values in RowComp, ColumnComp, AngleComp, and ScoreComp to the as-
sociated model component, the index of the model component (see create_component_model and
train_model_components, respectively) is returned in ModelComp. Furthermore, for each found instance
of the component model its associated component matches are given in ModelStart and ModelEnd. Thus,
the matches of the components that correspond to the first found instance of the component model are given
by the interval of indices [ModelStart[0],ModelEnd[0]]. The indices refer to the parameters RowComp,
ColumnComp, AngleComp, ScoreComp, and ModelComp. Assume, for example, that two instances of the
component model, which consists of three components, are found in the image, where for one instance only two
components (component 0 and component 2) could be found. Then the returned parameters could, for exam-
ple, have the following elements: RowComp = [100,200,300,150,250], ColumnComp = [200,210,220,400,425],
AngleComp = [0,0.1,-0.2,0.1,0.2], ScoreComp = [1,1,1,1,1], ModelComp = [0,1,2,0,2], ModelStart =
[0,3], ModelEnd = [2,4], Score = [1,1]. The operator get_found_component_model can be used to
visualize the result of the search and to extract the component matches of a certain component model instance.
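Using the example values above, the interval indexing can be sketched in plain C (the function name is hypothetical; the arrays stand in for the tuples returned by find_component_model):

```c
#include <assert.h>

/* Number of component matches of instance k; the indices in the
   interval [ModelStart[k], ModelEnd[k]] refer to RowComp, ColumnComp,
   AngleComp, ScoreComp, and ModelComp. */
long components_found(const long *model_start, const long *model_end, int k)
{
    return model_end[k] - model_start[k] + 1;
}
```

With ModelStart = [0,3] and ModelEnd = [2,4] from the example, instance 0 has three component matches (indices 0..2) and instance 1 has two (indices 3..4).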
By default, the components are searched at image positions where the components lie completely within the im-
age. This means that the components will not be found if they extend beyond the borders of the image, even
if they would achieve a score greater than MinScoreComp (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause components that extend beyond the
image border to be found if they achieve a score greater than MinScoreComp. Here, points lying outside the
image are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search
will increase in this mode.
The parameter MinScore determines what score a potential match of the component model must at least have to
be regarded as an instance of the component model in the image. If the component model can be expected never
to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If a missing or strongly occluded
root component must be assumed, and hence IfRootNotFound is set to ’select_new_root’ (see below), the
search is faster the larger MinScore is chosen. Otherwise, the value of this parameter only slightly influences the
computation time.
The maximum number of model instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches. If all model instances exceeding MinScore in the image
should be found, NumMatches must be set to 0.
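The precedence of MinScore over NumMatches can be sketched as follows; this is an illustrative reconstruction (the >= comparison and the function name are assumptions, not the documented internals):

```c
#include <assert.h>

/* Number of instances returned by the search (sketch): all candidates
   reaching MinScore, capped at NumMatches; NumMatches == 0 removes
   the cap. MinScore therefore takes precedence over NumMatches. */
int instances_returned(const double *candidate_scores, int n,
                       double min_score, int num_matches)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (candidate_scores[i] >= min_score)
            count++;
    if (num_matches > 0 && count > num_matches)
        count = num_matches;
    return count;
}
```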
In some cases, found instances only differ in the pose of one or a few components. The parameter MaxOverlap
determines by what fraction (i.e., a number between 0 and 1) two instances may at most overlap in order to
consider them as different instances, and hence to return them separately. If two instances overlap each other by
more than MaxOverlap only the best instance is returned. The calculation of the overlap is based on the smallest
enclosing rectangles of arbitrary orientation (see smallest_rectangle2) of the found component instances.
If MaxOverlap = 0, the found instances may not overlap at all, while for MaxOverlap = 1 no check for
overlap is performed, and hence all instances are returned.
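The overlap test can be summarized in C; the boundary semantics (whether an overlap exactly equal to MaxOverlap is tolerated) are an assumption of this sketch:

```c
#include <assert.h>

/* Decide whether two found instances are reported separately, given
   the fraction by which their smallest enclosing rectangles of
   arbitrary orientation overlap. MaxOverlap = 0 forbids any overlap;
   MaxOverlap = 1 disables the check entirely. */
int report_separately(double overlap_fraction, double max_overlap)
{
    if (max_overlap >= 1.0)
        return 1;                       /* no overlap check */
    return overlap_fraction <= max_overlap;
}
```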
The parameter IfRootNotFound specifies the behavior of the operator when dealing with a missing or
strongly occluded root component. This parameter strongly influences the computation time of the operator. If
IfRootNotFound is set to ’stop_search’, it is assumed that the root component is always found in the image.
Consequently, for instances for which the root component could not be found the search for the remaining compo-
nents is not continued. If IfRootNotFound is set to ’select_new_root’, different components are successively
chosen as the root component and searched within the full search space. The order in which the selection of the
root component is performed corresponds to the order passed in RootRanking. The poses of the found in-
stances of all root components are then used to start the recursive search for the remaining components. Hence,
it is possible to find instances even if the original root component is not found. However, the computation time
of the search increases significantly in comparison to the search when choosing ’stop_search’. The number of
root components to search depends on the value specified for MinScore. The higher the value for MinScore
is chosen the fewer root components must be searched, and thus the faster the search is performed. If the number
of elements in RootComponent is less than the number of required root components during the search, the root
components are completed by the automatically computed order (see create_trained_component_model
or create_component_model).
The parameter IfComponentNotFound specifies the behavior of the operator when dealing with missing or
strongly occluded components other than the root component. Here, it can be stated in which way components
that must be searched relative to the pose of another (predecessor) component should be treated if the predecessor
component was not found. If IfComponentNotFound is set to ’prune_branch’, such components are not
searched at all and are also treated as ’not found’. If IfComponentNotFound is set to ’search_from_upper’,
such components are searched relative to the pose of the predecessor component of the predecessor component. If
IfComponentNotFound is set to ’search_from_best’, such components are searched relative to the pose of the
already found component from which the relative search can be performed with minimum computational effort.
The parameter PosePrediction determines whether the pose of components that could not be found should
be estimated. If PosePrediction is set to ’none’, only the poses of the found components are returned. In
contrast, if PosePrediction is set to ’from_neighbors’ or ’from_all’, the poses of components that could not
be found are estimated and returned with a score of ScoreComp = 0.0. The estimation of the poses is then either
based on the poses of the found neighboring components in the search tree (’from_neighbors’) or on the poses of
all found components (’from_all’).
Internally, the shape-based matching is used for the component-based matching in order to search the individ-
ual components (see find_shape_model). Therefore, the parameters MinScoreComp, SubPixelComp,
NumLevelsComp, and GreedinessComp have the same meaning as the corresponding parameters in
find_shape_model. These parameters must either contain one element, in which case the parameter is used
for all components, or must contain the same number of elements as model components in ComponentModelID,
in which case each parameter element refers to the corresponding component in ComponentModelID.
NumLevelsComp may also contain two elements or twice the number of elements as model components. The
first value determines the number of pyramid levels to use. The second value determines the lowest pyramid level
to which the found matches are tracked. If different values should be used for different components, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevelsComp. If, for ex-
ample, two components are contained in ComponentModelID, and the number of pyramid levels is 5 for the
first component and 4 for the second component, and the lowest pyramid level is 2 for the first component and 1
for the second component, NumLevelsComp = [5,2,4,1] must be selected. Further details can be found in the
documentation of find_shape_models.
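The interpretation of NumLevelsComp can be sketched as a small expansion routine. This is an illustrative reconstruction: the handling of the ambiguous cases (e.g., two values for two components) and the default lowest level of 1 are assumptions of this sketch.

```c
#include <assert.h>

/* Expand a NumLevelsComp tuple of length 1, 2, n, or 2n into
   per-component (num_levels, lowest_level) pairs for n components. */
void expand_num_levels(const long *tuple, int len, int n,
                       long *num_levels, long *lowest_level)
{
    for (int i = 0; i < n; i++) {
        if (len == 1) {                 /* one value for all components */
            num_levels[i] = tuple[0];   lowest_level[i] = 1;
        } else if (len == 2) {          /* global (levels, lowest) pair */
            num_levels[i] = tuple[0];   lowest_level[i] = tuple[1];
        } else if (len == n) {          /* one value per component */
            num_levels[i] = tuple[i];   lowest_level[i] = 1;
        } else {                        /* interleaved pairs, len == 2n */
            num_levels[i] = tuple[2 * i];
            lowest_level[i] = tuple[2 * i + 1];
        }
    }
}
```

For the example from the text, the tuple [5,2,4,1] with two components yields 5 levels down to level 2 for the first component and 4 levels down to level 1 for the second.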
Parameter
Module
Matching
chosen the more contour regions are merged. The default value of ’merge_distance’ is 5 and the default value of
’merge_fraction’ is 0.5 (corresponds to 50 percent).
When using the second possibility, i.e., the components of the component model are approximately known,
the training by using train_model_components can be performed without previously executing
gen_initial_components. If this is desired, the initial components can be specified by the user
and directly passed to train_model_components. Furthermore, if the components as well as the
relative movements (relations) of the components are known, gen_initial_components as well as
train_model_components need not be executed. In fact, by immediately passing the components as well
as the relations to create_component_model, the component model can be created without any training.
In both cases, however, gen_initial_components can be used to evaluate the effect of the feature ex-
traction parameters ContrastLow, ContrastHigh, and MinSize of train_model_components and
create_component_model, and hence to find suitable parameter values for a certain application.
For this, the image regions for the (initial) components must be explicitly given, i.e., for each (initial) component
a separate image from which the (initial) component should be created is passed. In this case, ModelImage
contains multiple image objects. The domain of each image object is used as the region of interest for calculating
the corresponding (initial) component. The image matrix of all image objects in the tuple must be identical, i.e.,
ModelImage cannot be constructed in an arbitrary manner using concat_obj, but must be created from the
same image using add_channels or equivalent calls. If this is not the case, an error message is returned. If
the parameters ContrastLow, ContrastHigh, or MinSize only contain one element, this value is applied
to the creation of all (initial) components. In contrast, if different values for different (initial) components should
be used, tuples of values can be passed for these three parameters. In this case, the tuples must have a length
that corresponds to the number of (initial) components, i.e., the number of image objects in ModelImage. The
contour regions of the (initial) components are returned in InitialComponents.
Thus, the second possibility is equivalent to the function of inspect_shape_model within the shape-based
matching. However, in contrast to inspect_shape_model, gen_initial_components does not return
the contour regions on multiple image pyramid levels. Therefore, if the number of pyramid levels to be used
should be chosen manually, preferably inspect_shape_model should be called individually for each (initial)
component.
For both described possibilities the parameters ContrastLow, ContrastHigh, and MinSize can be determined automatically. If both hysteresis thresholds should be determined automatically, both ContrastLow
and ContrastHigh must be set to ’auto’. In contrast, if only one threshold value should be determined,
ContrastLow must be set to ’auto’ while ContrastHigh must be set to an arbitrary value different from
’auto’.
If the input image ModelImage has one channel the representation of the model is created with the method
that is used in create_component_model or create_trained_component_model for the metrics
’use_polarity’, ’ignore_global_polarity’, and ’ignore_local_polarity’. If the input image has more than one chan-
nel the representation is created with the method that is used for the metric ’ignore_color_polarity’.
Parameter
. ModelImage (input_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; Hobject : byte / uint2
Input image from which the initial components should be extracted.
. InitialComponents (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Contour regions of initial components.
. ContrastLow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Lower hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastLow ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : ContrastLow > 0
. ContrastHigh (input_control) . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Upper hysteresis threshold for the contrast of the initial components in the image.
Default Value : "auto"
Suggested values : ContrastHigh ∈ {"auto", 10, 20, 30, 40, 60, 80, 100, 120, 140, 160}
Restriction : (ContrastHigh > 0) ∧ (ContrastHigh ≥ ContrastLow)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong / const char *
Minimum size of the initial components.
Default Value : "auto"
Suggested values : MinSize ∈ {"auto", 0, 5, 10, 20, 30, 40}
Restriction : MinSize ≥ 0
Result
If the parameter values are correct, the operator gen_initial_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gen_initial_components is reentrant and processed without parallelization.
Possible Predecessors
draw_region, add_channels, reduce_domain
Possible Successors
train_model_components
Alternatives
inspect_shape_model
Module
Matching
Result
If the handle of the component model is valid, the operator get_component_model_params returns the
value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
get_component_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
get_shape_model_params
Module
Matching
Result
If the parameters are valid, the operator get_component_model_tree returns the value H_MSG_TRUE. If
necessary an exception is raised.
Parallelization Information
get_component_model_tree is reentrant and processed without parallelization.
Possible Predecessors
create_trained_component_model, create_component_model
See also
train_model_components
Module
Matching
Return the relations between the model components that are contained in a training result.
get_component_relations returns the relations between model components after training them with
train_model_components. With the parameter ReferenceComponent, you can select a reference com-
ponent. get_component_relations then returns the relations between the reference component and
all other components in the model image (if Image = ’model_image’ or Image = 0) or in a training image
(if Image ≥ 1). In order to obtain the relations in the ith training image, Image must be set to i. The re-
sult of the training returned by train_model_components must be passed in ComponentTrainingID.
ReferenceComponent describes the index of the reference component and must be within the range of 0 and
n-1, if n is the number of model components (see train_model_components).
The relations are returned in form of regions in Relations as well as in form of numerical values in Row,
Column, Phi, Length1, Length2, AngleStart, and AngleExtent.
The region object tuple Relations is designed as follows. For each component a separate region is returned.
Consequently, Relations contains n regions, where the order of the regions within the tuple is determined by the
index of the corresponding components. The positions of all components in the image are represented by circles
with a radius of 3 pixels. For each component other than the reference component ReferenceComponent, ad-
ditionally the position relation and the orientation relation relative to the reference component are represented.
The position relation is represented by a rectangle and the orientation relation is represented by a circle sector with a radius of 30 pixels. The center of the circle is placed at the mean relative position of the component. The rectangle describes the movement of the reference point of the respective component relative to the
pose of the reference component, while the circle sector describes the variation of the relative orientation (see
train_model_components). A relative orientation of 0 corresponds to the relative orientation of both com-
ponents in the model image. If both components appear in the same relative orientation in all images, the circle
sector consequently degenerates to a straight line.
In addition to the region object tuple Relations, the relations are also returned in form of numerical values in
Row, Column, Phi, Length1, Length2, AngleStart, and AngleExtent. These parameters are tuples
of length n and contain the relations of all components relative to the reference component, where the order of
the values within the tuples is determined by the index of the corresponding component. The position relation is
described by the parameters of the corresponding rectangle Row, Column, Phi, Length1, and Length2 (see
gen_rectangle2). The orientation relation is described by the starting angle AngleStart and the angle
extent AngleExtent. For the reference component only the position within the image is returned in Row and
Column. All other values are set to 0.
If the reference component has not been found in the current image, an array of empty regions is returned and the
corresponding parameter values are set to 0.
The operator get_component_relations is particularly useful in order to visualize the result of the train-
ing that was performed with train_model_components. With this, it is possible to evaluate the varia-
tions that are contained in the training images. Sometimes it might be reasonable to restart the training with
train_model_components while using a different set of training images.
Parameter
Result
If the parameters are valid, the operator get_found_component_model returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_found_component_model is reentrant and processed without parallelization.
Possible Predecessors
find_component_model
See also
train_model_components, create_component_model
Module
Matching
Result
If the handle of the training result is valid, the operator get_training_components returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
get_training_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
train_model_components
See also
find_shape_model
Module
Matching
Result
If the handle of the training result is valid, the operator inspect_clustered_components returns the value
H_MSG_TRUE. If necessary an exception is raised.
Parallelization Information
inspect_clustered_components is reentrant and processed without parallelization.
Possible Predecessors
train_model_components
Possible Successors
cluster_model_components
Module
Matching
Module
Matching
AmbiguityCriterion. In almost all cases the best results are obtained with ’rigidity’, which assumes the
rigidity of the compound object. The more the rigidity of the compound object is violated by the pose of the initial
component, the worse its evaluation is. In the case of ’distance’, only the distance between the initial components
is considered during the evaluation. Hence, the pose of the initial component receives a good evaluation if its distances to the other initial components are similar to the corresponding distances in the model image. Accordingly,
when choosing ’orientation’, only the relative orientation is considered during the evaluation. Finally, the simulta-
neous consideration of distance and orientation can be achieved by choosing ’distance_orientation’. In contrast to
’rigidity’, the relative pose of the initial components is not considered when using ’distance_orientation’.
The process of solving the ambiguities can be further influenced by the parameter MaxContourOverlap. This
parameter describes the extent by which the contours of two initial component matches may overlap each other.
Let the letters ’I’ and ’T’, for example, be two initial components that should be searched in a training image
that shows the string ’IT’. Then, the initial component ’T’ should be found at its correct pose. In contrast, the
initial component ’I’ will be found at its correct pose (’I’) but also at the pose of the ’T’ because of the simi-
larity of the two components. To discard the wrong match of the initial component ’I’, an appropriate value for
MaxContourOverlap can be chosen: If overlapping matches should be tolerated, MaxContourOverlap
should be set to 1. If overlapping matches should be completely avoided, MaxContourOverlap should be set
to 0. By choosing a value between 0 and 1, the maximum percentage of overlapping contour pixels can be adjusted.
The decision which initial components can be clustered to rigid model components is made based on the poses
of the initial components in the model image and in the training images. Two initial components are merged
if they do not show any relative movement over all images. If, in the case of the above-mentioned switch, the training images showed the same switch state as the model image, the algorithm would merge the
respective initial components because it assumes that the entire switch is one rigid model component. The extent
by which initial components are merged can be influenced with the parameter ClusterThreshold. This cluster
threshold is based on the probability that two initial components belong to the same rigid model component. Thus,
ClusterThreshold describes the minimum probability that two initial components must have in order to be
merged. Since the threshold is based on a probability value, it must lie in the interval between 0 and 1. The greater
the threshold is chosen, the smaller the number of initial components that are merged. If a threshold of 0 is chosen,
all initial components are merged into one rigid component, while for a threshold of 1 no merging is performed
and each initial component is adopted as one rigid model component.
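The merging step can be sketched with a simple union-find over the pairwise probabilities; this is an illustrative reconstruction, not HALCON's internal algorithm, and the flattened probability matrix and function names are assumptions:

```c
#include <assert.h>
#include <stdlib.h>

/* Path-halving find for a simple union-find. */
static int find_root(int *parent, int i)
{
    while (parent[i] != i)
        i = parent[i] = parent[parent[i]];
    return i;
}

/* Merge initial components i and j whenever prob[i*n + j], the
   probability that they belong to the same rigid component, reaches
   the cluster threshold. Writes one cluster label per initial
   component into label[] and returns the number of rigid components. */
int cluster_components(const double *prob, int n, double threshold, int *label)
{
    int *parent = (int *)malloc(n * sizeof(int));
    int i, j, clusters = 0;
    for (i = 0; i < n; i++)
        parent[i] = i;
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (prob[i * n + j] >= threshold) {
                int ri = find_root(parent, i), rj = find_root(parent, j);
                if (ri != rj)
                    parent[rj] = ri;
            }
    for (i = 0; i < n; i++)             /* label the set representatives */
        if (find_root(parent, i) == i)
            label[i] = clusters++;
    for (i = 0; i < n; i++)             /* propagate labels to members  */
        label[i] = label[find_root(parent, i)];
    free(parent);
    return clusters;
}
```

A threshold of 0 merges everything into one cluster, while a threshold above all pairwise probabilities leaves every initial component as its own rigid component, mirroring the behavior described above.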
The final rigid model components are returned in ModelComponents. Later, the index of a component region
in ModelComponents is used to denote the model component. The poses of the components in the training
images can be examined by using get_training_components.
After the determination of the model components their relative movements are analyzed by determining the move-
ment of one component with respect to a second component for each pair of components. For this, the components
are referred to their reference points. The reference point of a component is the center of gravity of its contour
region, which is returned in ModelComponents. It can be calculated by calling area_center. Finally, the
relative movement is represented by the smallest enclosing rectangle of arbitrary orientation of the reference point
movement and by the smallest enclosing angle interval of the relative orientation of the second component over all
images. The determined relations can be inspected by using get_component_relations.
Parameter
Result
If the parameter values are correct, the operator train_model_components returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
train_model_components is processed completely exclusively without parallelization.
Possible Predecessors
gen_initial_components
Possible Successors
inspect_clustered_components, cluster_model_components,
modify_component_relations, write_training_components,
get_training_components, get_component_relations,
create_trained_component_model, clear_training_components,
clear_all_training_components
See also
create_shape_model, find_shape_model
Module
Matching
Parallelization Information
write_component_model is reentrant and processed without parallelization.
Possible Predecessors
create_component_model, create_trained_component_model
Module
Matching
7.2 Correlation-Based
clear_all_ncc_models ( )
T_clear_all_ncc_models ( )
Alternatives
clear_ncc_model
Module
Matching
can be queried using get_ncc_model_params. In rare cases, it might happen that create_ncc_model
determines a value for the number of pyramid levels that is too large or too small. If the number of pyramid lev-
els is chosen too large, the model may not be recognized in the image or it may be necessary to select very low
parameters for MinScore in find_ncc_model in order to find the model. If the number of pyramid levels is
chosen too small, the time required to find the model in find_ncc_model may increase. In these cases, the
number of pyramid levels should be selected by inspecting the output of gen_gauss_pyramid. Here, Mode
= ’constant’ and Scale = 0.5 should be used.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which the model
can occur in the image. Note that the model can only be found in this range of angles by find_ncc_model. The
parameter AngleStep determines the step length within the selected range of angles. Hence, if subpixel accuracy
is not specified in find_ncc_model, this parameter specifies the accuracy that is achievable for the angles in
find_ncc_model. AngleStep should be chosen based on the size of the object. Smaller models do not
possess many different discrete rotations in the image, and hence AngleStep should be chosen larger for smaller
models. If AngleExtent is not an integer multiple of AngleStep, AngleStep is modified accordingly.
The model is pre-generated for the selected angle range and stored in memory. The memory required to store the
model is proportional to the number of angle steps and the number of points in the model. Hence, if AngleStep
is too small or AngleExtent too big, it may happen that the model no longer fits into the (virtual) memory. In
this case, either AngleStep must be enlarged or AngleExtent must be reduced. In any case, it is desirable
that the model completely fits into the main memory, because this avoids paging by the operating system, and
hence the time to find the object will be much smaller. Since angles can be determined with subpixel resolution
by find_ncc_model, AngleStep ≥ 1° can be selected for models of a diameter smaller than about 200
pixels. If AngleStep = ’auto’ or 0 is selected, create_ncc_model automatically determines a suitable
angle step length based on the size of the model. The automatically computed angle step length can be queried
using get_ncc_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If Metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of find_ncc_model
will increase slightly in this case.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_ncc_model_origin.
Parameter
. Template (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image whose domain will be used to create the model.
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Maximum number of pyramid levels.
Default Value : "auto"
List of values : NumLevels ∈ {"auto", 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Smallest rotation of the pattern.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.79, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double
Extent of the rotation angles.
Default Value : 0.79
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.79, 0.39}
Restriction : AngleExtent ≥ 0
. AngleStep (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; double / const char *
Step length of the angles (resolution).
Default Value : "auto"
Suggested values : AngleStep ∈ {"auto", 0, 0.0175, 0.0349, 0.0524, 0.0698, 0.0873}
Restriction : (AngleStep ≥ 0) ∧ (AngleStep ≤ (pi/16))
. Metric (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Match metric.
Default Value : "use_polarity"
List of values : Metric ∈ {"use_polarity", "ignore_global_polarity"}
HALCON 8.0.2
580 CHAPTER 7. MATCHING
Here, n denotes the number of points in the template, R denotes the domain (ROI) of the template, and m_t is the
mean gray value of the template

    m_t = (1/n) · Σ_{(u,v) ∈ R} t(u,v)

s_t² is the variance of the gray values of the template

    s_t² = (1/n) · Σ_{(u,v) ∈ R} (t(u,v) − m_t)²

m_i(r,c) is the mean gray value of the image at position (r,c) over all points of the template (i.e., the template
points are shifted by (r,c))

    m_i(r,c) = (1/n) · Σ_{(u,v) ∈ R} i(r+u, c+v)

and s_i²(r,c) is the variance of the gray values of the image at position (r,c) over all points of the template

    s_i²(r,c) = (1/n) · Σ_{(u,v) ∈ R} (i(r+u, c+v) − m_i(r,c))²
The NCC measures how well the template and image correspond at a particular point (r, c). It assumes values
between −1 and 1. The larger the absolute value of the correlation, the larger the degree of correspondence
between the template and image. A value of 1 means that the gray values in the image are a linear transformation
of the gray values in the template:
i(r + u, c + v) = at(u, v) + b
where a > 0. Similarly, a value of −1 means that the gray values in the image are a linear transformation of the
gray values in the template with a < 0. Hence, in this case the template occurs with a reversed polarity in the
image. Because of the above property, the NCC is invariant to linear illumination changes.
The NCC as defined above is used if the NCC model has been created with Metric = ’use_polarity’. If the model
has been created with Metric = ’ignore_global_polarity’, the absolute value of ncc(r, c) is used as the score.
It should be noted that the NCC is very sensitive to occlusion and clutter as well as to nonlinear illumination
changes in the image. If a model should be found in the presence of occlusion, clutter, or nonlinear illumination
changes the search should be performed using the shape-based matching (see, e.g., create_shape_model).
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_ncc_model. A different origin set with set_ncc_model_origin is not taken into account here.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below).
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_ncc_model. In particular, this means that the angle ranges of the model and the search must truly
overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all angles in the
remainder of the paragraph are given in degrees, whereas they have to be specified in radians in find_ncc_model.
Hence, if the model, for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the
angle search space in find_ncc_model is, for example, set to AngleStart = 350◦ and AngleExtent =
20◦ , the model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ .
To find the model, in this example it would be necessary to select AngleStart = −10◦ .
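The interval check behind this behavior can be sketched as follows; angle_ranges_overlap is a hypothetical helper (not a HALCON call), and the angles are in radians as in find_ncc_model.

```c
/* Sketch of the angle-range check described above: the two ranges are
 * compared as plain intervals, with no reduction modulo 2*pi. */
int angle_ranges_overlap(double start1, double extent1,
                         double start2, double extent2)
{
    return (start1 < start2 + extent2) && (start2 < start1 + extent1);
}
```

With the model range [−20°, 20°] (about [−0.349, 0.349] rad), a search range starting at 350° (about 6.109 rad) does not overlap, while one starting at −10° (about −0.175 rad) does.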
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rotations
are found in the image. If the model has repeating structures it may happen that multiple instances with identical
rotations are found at similar positions in the image. The parameter MaxOverlap determines by what fraction
(i.e., a number between 0 and 1) two instances may at most overlap in order to consider them as different instances,
and hence to be returned separately. If two instances overlap each other by more than MaxOverlap only the
best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary
orientation (see smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances
may not overlap at all, while for MaxOverlap = 1 all instances are returned.
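Taken together, MinScore, NumMatches, and MaxOverlap act like a greedy selection over score-sorted candidates. The sketch below is an illustration of that logic, not the HALCON implementation; it assumes the candidates are already sorted by descending score and that the pairwise overlap fractions of their enclosing rectangles have been precomputed into an n×n array (computing smallest enclosing rectangles of arbitrary orientation is out of scope here).

```c
/* Greedy match selection: keep at most num_matches candidates whose score is
 * at least min_score and whose overlap with every already kept candidate does
 * not exceed max_overlap.  ov[i*n + j] holds the overlap fraction of
 * candidates i and j.  Returns the number of kept candidates; their indices
 * are written to keep[]. */
int select_matches(int n, const double *score, const double *ov,
                   double min_score, int num_matches, double max_overlap,
                   int *keep)
{
    int kept = 0;
    for (int i = 0; i < n && kept < num_matches; i++) {
        if (score[i] < min_score)
            continue;                           /* MinScore takes precedence */
        int distinct = 1;
        for (int j = 0; j < kept; j++)
            if (ov[keep[j] * n + i] > max_overlap) { distinct = 0; break; }
        if (distinct)
            keep[kept++] = i;                   /* best instance per cluster */
    }
    return kept;
}
```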
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’false’, the model’s pose is only determined with pixel accuracy and the angle resolution
that was specified with create_ncc_model. If SubPixel is set to ’true’, the position as well as the rotation
are determined with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This
mode costs almost no computation time and achieves a high accuracy. Hence, SubPixel should usually be set to
’true’.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the num-
ber of levels is clipped to the range given when the shape model was created with create_ncc_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_ncc_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. If the lowest pyramid level to use is chosen too large, it may happen that
the desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is not
specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model.
In this case, the lowest pyramid level to use must be set to a smaller value.
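The accuracy trade-off can be made concrete: each pyramid level halves the resolution in each direction, so positions on level l are only known to about 2^(l−1) pixels before refinement. The helper below is a simple illustration of that factor.

```c
/* Subsampling factor of pyramid level l, where level 1 is the original
 * resolution.  With NumLevels = [4,2], the search starts on images reduced
 * by a factor of 8 and the matches are refined down to a factor of 2. */
int subsample_factor(int level)
{
    return 1 << (level - 1);    /* 1, 2, 4, 8, ... */
}
```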
Parameter
Result
If the parameter values are correct, the operator find_ncc_model returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_ncc_model is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, set_ncc_model_origin
Possible Successors
clear_ncc_model
Alternatives
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models,
best_match_rot_mg
Module
Matching
Result
If the handle of the model is valid, the operator get_ncc_model_origin returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_ncc_model_origin is reentrant and processed without parallelization.
Possible Predecessors
create_ncc_model, read_ncc_model, set_ncc_model_origin
Possible Successors
find_ncc_model
See also
area_center
Module
Matching
See also
area_center
Module
Matching
7.3 Gray-Value-Based
Possible Successors
set_reference_template, best_match, fast_match, fast_match_mg,
set_offset_template, best_match_mg, best_match_pre_mg, best_match_rot,
best_match_rot_mg
Module
Matching
The runtime of the operator depends on the size of the domain of Image. Therefore it is important to restrict the
domain as far as possible, i.e., to apply the operator only in a very confined “region of interest”. The parameter
MaxError determines the maximum error that the position searched for is allowed to have. The lower this value
is, the faster the operator runs.
Row and Column return the position of the best match, and Error indicates the average difference of the
grayvalues. If no position with an error below MaxError was found, the position (0, 0) and a matching result of
255 for Error are returned. In this case, MaxError has to be set to a larger value.
The maximum error of the position (without noise) is 0.1 pixel. The average error is 0.03 pixel.
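The matching criterion can be illustrated with a small, purely pedagogical C routine (it is not the HALCON implementation, which uses pyramids and an optimized search): the template is shifted over the image, and the average absolute grayvalue difference is evaluated at every position.

```c
#include <math.h>

/* Exhaustive sketch of the criterion described above: the template t
 * (ht x wt) is shifted over the image img (h x w), and the average absolute
 * gray value difference is computed at every position where the template
 * fits completely.  As in best_match, (0,0) and an error of 255 are
 * returned when no position stays below max_error. */
double best_match_sketch(const unsigned char *img, int h, int w,
                         const unsigned char *t, int ht, int wt,
                         double max_error, int *row, int *col)
{
    double best = 255.0;
    *row = 0; *col = 0;
    for (int r = 0; r + ht <= h; r++)
        for (int c = 0; c + wt <= w; c++) {
            double sum = 0.0;
            for (int u = 0; u < ht; u++)
                for (int v = 0; v < wt; v++)
                    sum += fabs((double)img[(r + u) * w + (c + v)]
                                - t[u * wt + v]);
            double err = sum / (ht * wt);
            if (err <= max_error && err < best) {
                best = err; *row = r; *col = c;
            }
        }
    return best;
}
```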
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; (Htuple .) Hlong
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; (Htuple .) double
Maximum average difference of the grayvalues.
Default Value : 20
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Subpixel accuracy in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double *
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Average divergence of the grayvalues of the best match.
Result
If the parameter values are correct, the operator best_match returns the value H_MSG_TRUE.
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_template, read_template, set_offset_template, set_reference_template,
adapt_template, draw_region, draw_rectangle1, reduce_domain
Alternatives
fast_match, fast_match_mg, best_match_mg, best_match_pre_mg, best_match_rot,
best_match_rot_mg, exhaustive_match, exhaustive_match_mg
Module
Matching
Parameter
. ImagePyramid (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image-array ; Hobject : byte
Image pyramid inside of which the pattern has to be found.
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; double
Maximal average difference of the grayvalues.
Default Value : 30
Suggested values : MaxError ∈ {0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 15, 17, 20, 30, 40, 50, 60, 70}
Typical range of values : 0 ≤ MaxError ≤ 255
Minimum Increment : 1
Recommended Increment : 3
. SubPixel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Subpixel accuracy in case of ’true’.
Default Value : "false"
List of values : SubPixel ∈ {"true", "false"}
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of the used resolution levels.
Default Value : 3
List of values : NumLevels ∈ {1, 2, 3, 4, 5, 6}
. WhichLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Resolution level up to which the method “best match” is used.
Default Value : "original"
Suggested values : WhichLevels ∈ {"all", "original", 0, 1, 2, 3, 4, 5, 6}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .point.y ; double *
Row position of the best match.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; double *
Column position of the best match.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double *
Average divergence of the grayvalues in the best match.
Result
If the parameter values are correct, the operator best_match_pre_mg returns the value H_MSG_TRUE.
If the input is empty (no input images are available), the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
best_match_pre_mg is reentrant and processed without parallelization.
Possible Predecessors
gen_gauss_pyramid, create_template, read_template, adapt_template, draw_region,
draw_rectangle1, reduce_domain, set_reference_template
Alternatives
fast_match, fast_match_mg, exhaustive_match, exhaustive_match_mg
Module
Matching
The operator best_match_rot performs a matching of the template of TemplateID and Image. It works
similar to best_match, with the extension that the pattern can be rotated. The parameters AngleStart
and AngleExtend define the maximum rotation of the pattern: AngleStart specifies the maximum counter-
clockwise rotation and AngleExtend the maximum clockwise rotation relative to this angle. Both values have
to be smaller than or equal to the values used for the creation of the pattern (see create_template_rot). In
addition to the results of best_match, best_match_rot returns the rotation angle of the pattern in Angle
(in radians). The accuracy of this angle depends on the parameter AngleStep of create_template_rot.
In the case of SubPixel = ’true’ the position and the angle are calculated with subpixel accuracy.
Parameter
Module
Matching
clear_all_templates ( )
T_clear_all_templates ( )
The operator clear_template deallocates the memory of a template which has been created by
create_template or create_template_rot. After execution of the operator clear_template
the template can no longer be used. The value of TemplateID is not valid. However, the number can be used
again by further calls of create_template or create_template_rot.
Parameter
. TemplateID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . template ; Hlong
Template number.
Result
If the number of the template is valid, the operator clear_template returns the value H_MSG_TRUE. If
necessary, an exception is raised.
Parallelization Information
clear_template is processed completely exclusively without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template, write_template
See also
clear_all_templates
Module
Matching
stable matching, i.e., the possibility to miss good matches is reduced. The optimization process selects the most
stable and significant gray values to be tested first during the matching process. Using this technique a wrong
match can be eliminated very early.
The reference position for the template is its center of gravity, i.e., if you apply the template to the orig-
inal image, the center of gravity is returned. This default reference can be adapted using the operator
set_reference_template.
In subpixel mode a special position correction is calculated which is added after each matching: The template is
applied to the original image and the difference between the found position and the center of gravity is used as a
correction vector. This is important for patterns in a textured context or for asymmetric patterns. For most
templates this correction vector is close to zero.
If the pattern is no longer used, it has to be freed by the operator clear_template in order to deallocate the
memory.
Before the use of the template, which is stored independently of the image size, it can be adapted explicitly to the
size of a definite image size by using adapt_template.
Parameter
    M = (A · 1/2 · AngleExtend) / AngleStep
After the transformation, a number (TemplateID) is assigned to the template for being used in the further
process.
A description of the other parameters can be found at the operator create_template.
Attention
You have to be aware that, depending on the resolution, a large number of precomputed patterns has to be created,
which might result in a large amount of memory being needed.
Parameter
The difference between fast_match and exhaustive_match is that the matching for one position is
stopped as soon as the error gets too high. This leads to a reduced runtime, but correct matches might be missed.
The runtime of the operator depends mainly on the size of the domain of Image. Therefore it is important to
restrict the domain as far as possible, i.e., to apply the operator only in a very confined “region of interest”. The
parameter MaxError determines the maximum error which the position searched for is allowed to show. The
lower this value is, the faster the operator runs.
All points which show a matching error smaller than MaxError are returned in the output region Matches.
This region can be used for further processing, for example, by computing its connected components
(connection) and applying best_match to find all the matching objects. If no point has a match error
below MaxError, the empty region (i.e., no points) is returned.
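The early termination can be sketched as follows: the running sum of absolute differences is compared against the admissible total max_error · n, and the position is abandoned as soon as that bound is exceeded. This illustrates the idea only; it is not the HALCON implementation.

```c
#include <math.h>

/* Early-abort test of one position (r,c): accumulate the absolute gray value
 * differences between template t (ht x wt) and image img (row stride w) and
 * stop as soon as the admissible total max_error * ht * wt is exceeded.
 * Returns 1 if the position matches within max_error, 0 otherwise. */
int matches_at(const unsigned char *img, int w,
               const unsigned char *t, int ht, int wt,
               int r, int c, double max_error)
{
    double bound = max_error * ht * wt;  /* admissible total difference */
    double sum = 0.0;
    for (int u = 0; u < ht; u++)
        for (int v = 0; v < wt; v++) {
            sum += fabs((double)img[(r + u) * w + (c + v)] - t[u * wt + v]);
            if (sum > bound)
                return 0;                /* stop early: cannot match anymore */
        }
    return 1;
}
```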
Parameter
Result
If the parameter values are correct, the operator set_reference_template returns the value
H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
set_reference_template is reentrant and processed without parallelization.
Possible Predecessors
create_template, create_template_rot, read_template, adapt_template
Possible Successors
best_match, best_match_mg, best_match_rot, fast_match, fast_match_mg
Module
Matching
7.4 Shape-Based
clear_all_shape_models ( )
T_clear_all_shape_models ( )
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model, write_shape_model
Alternatives
clear_shape_model
Module
Matching
The model is generated using multiple image pyramid levels and is stored in memory. If a complete pregeneration
of the model is selected (see below), the model is generated at multiple rotations and anisotropic scales (i.e.,
independent scales in the row and column direction) on each level. The output parameter ModelID is a handle
for this model, which is used in subsequent calls to find_aniso_shape_model.
The number of pyramid levels is determined with the parameter NumLevels. It should be chosen as
large as possible because by this the time necessary to find the object is significantly reduced. On the
other hand, NumLevels must be chosen such that the model is still recognizable and contains a sufficient
number of points (at least four) on the highest pyramid level. This can be checked using the output of
inspect_shape_model. If not enough model points are generated, the number of pyramid levels is reduced
internally until enough model points are found on the highest pyramid level. If this procedure would lead to a
model with no pyramid levels, i.e., if the number of model points is already too small on the lowest pyramid level,
create_aniso_shape_model returns with an error message. If NumLevels is set to ’auto’ (or 0 for back-
wards compatibility), create_aniso_shape_model determines the number of pyramid levels automatically.
The automatically computed number of pyramid levels can be queried using get_shape_model_params. In
rare cases, it might happen that create_aniso_shape_model determines a value for the number of pyra-
mid levels that is too large or too small. If the number of pyramid levels is chosen too large, the model may not
be recognized in the image or it may be necessary to select very low parameters for MinScore or Greediness in
find_aniso_shape_model in order to find the model. If the number of pyramid levels is chosen too small,
the time required to find the model in find_aniso_shape_model may increase. In these cases, the number
of pyramid levels should be selected using the output of inspect_shape_model.
The parameters AngleStart and AngleExtent determine the range of possible rotations, in which
the model can occur in the image. Note that the model can only be found in this range of angles by
find_aniso_shape_model. The parameter AngleStep determines the step length within the selected
range of angles. Hence, if subpixel accuracy is not specified in find_aniso_shape_model, this param-
eter specifies the accuracy that is achievable for the angles in find_aniso_shape_model. AngleStep
should be chosen based on the size of the object. Smaller models do not have many different discrete rotations
in the image, and hence AngleStep should be chosen larger for smaller models. If AngleExtent is not an
integer multiple of AngleStep, AngleStep is modified accordingly.
The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of possible
anisotropic scales of the model in the row and column direction. A scale of 1 in both scale factors corresponds to
the original size of the model. The parameters ScaleRStep and ScaleCStep determine the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in find_aniso_shape_model,
these parameters specify the accuracy that is achievable for the scales in find_aniso_shape_model. Like
AngleStep, ScaleRStep and ScaleCStep should be chosen based on the size of the object. If the respective
range of scales is not an integer multiple of ScaleRStep and ScaleCStep, ScaleRStep and ScaleCStep
are modified accordingly.
Note that the transformations are treated internally such that the scalings are applied first, followed by the rotation.
Therefore, the model should usually be aligned such that it appears horizontally or vertically in the model image.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle and scale range and stored in memory. The memory required to store the model is proportional to
the number of angle steps, the number of scale steps, and the number of points in the model. Hence, if
AngleStep, ScaleRStep, or ScaleCStep are too small or AngleExtent or the range of scales are
too big, it may happen that the model no longer fits into the (virtual) memory. In this case, AngleStep,
ScaleRStep, or ScaleCStep must be enlarged or AngleExtent or the range of scales must be re-
duced. In any case, it is desirable that the model completely fits into the main memory, because this avoids
paging by the operating system, and hence the time to find the object will be much smaller. Since an-
gles can be determined with subpixel resolution by find_aniso_shape_model, AngleStep ≥ 1◦ and
ScaleRStep, ScaleCStep ≥ 0.02 can be selected for models of a diameter smaller than about 200 pixels.
If AngleStep = ’auto’ or ScaleRStep, ScaleCStep = ’auto’ (or 0 for backwards compatibility in both
cases) is selected, create_aniso_shape_model automatically determines a suitable angle or scale step
length, respectively, based on the size of the model. The automatically computed angle and scale step lengths can
be queried using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_aniso_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_aniso_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_aniso_shape_model automatically determines the reduction of
the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value (’pregener-
ate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_aniso_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
nevertheless three values must be specified in Contrast. In this case, the first two values can simply be set
to identical values. The effect of this parameter can be checked in advance with inspect_shape_model.
If Contrast is set to ’auto’, create_aniso_shape_model determines the three above described values
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not deter-
mined automatically can additionally be passed in the form of a tuple. Also various combinations are allowed: If,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hys-
teresis thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfying. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_aniso_shape_model.
With MinContrast, it can be determined which contrast the model must at least have in the recognition per-
formed by find_aniso_shape_model. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter Metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine MinContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image MinContrast should be set to 17. Obviously,
MinContrast must be smaller than Contrast. If the model should be recognized in very low contrast im-
ages, MinContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, MinContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
find_aniso_shape_model. If MinContrast is set to ’auto’, the minimum contrast is determined auto-
matically based on the noise in the model image. Consequently, an automatic determination only makes sense if
the image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If Metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime
of find_aniso_shape_model will increase slightly in this case. If Metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of find_aniso_shape_model increases significantly, it is usually better to create several models
that reflect the possible contrast variations of the object with create_aniso_shape_model, and to match
them simultaneously with find_aniso_shape_models. The above three metrics can only be applied to
single-channel images. If a multichannel image is used as the model image or as the search image only the first
channel will be used (and no error message will be returned). If Metric = ’ignore_color_polarity’, the model
is found even if the color contrast changes locally. This is, for example, the case if parts of the object can change
their color, e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels
the object is visible. In this mode, the runtime of find_aniso_shape_model can also increase significantly.
The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for Metric =
’ignore_color_polarity’ the number of channels in the model creation with create_aniso_shape_model
and in the search with find_aniso_shape_model can be different. This can, for example, be used to create
a model from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do
not need to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also
contain images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
Parameter
HALCON 8.0.2
606 CHAPTER 7. MATCHING
Possible Successors
find_aniso_shape_model, find_aniso_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_scaled_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching
of 1 corresponds to the original size of the model. The parameter ScaleStep determines the step length within
the selected range of scales. Hence, if subpixel accuracy is not specified in find_scaled_shape_model,
this parameter specifies the accuracy that is achievable for the scales in find_scaled_shape_model. Like
AngleStep, ScaleStep should be chosen based on the size of the object. If the range of scales is not an integer
multiple of ScaleStep, ScaleStep is modified accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected angle
and scale range and stored in memory. The memory required to store the model is proportional to the number
of angle steps, the number of scale steps, and the number of points in the model. Hence, if AngleStep or
ScaleStep are too small or AngleExtent or the range of scales are too big, it may happen that the model
no longer fits into the (virtual) memory. In this case, either AngleStep or ScaleStep must be enlarged or
AngleExtent or the range of scales must be reduced. In any case, it is desirable that the model completely fits
into the main memory, because this avoids paging by the operating system, and hence the time to find the object will
be much smaller. Since angles can be determined with subpixel resolution by find_scaled_shape_model,
AngleStep ≥ 1° and ScaleStep ≥ 0.02 can be selected for models of a diameter smaller than about 200
pixels. If AngleStep = ’auto’ or ScaleStep = ’auto’ (or 0 for backwards compatibility in both cases)
is selected, create_scaled_shape_model automatically determines a suitable angle or scale step length,
respectively, based on the size of the model. The automatically computed angle and scale step lengths can be
queried using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_scaled_shape_model. Because of this, the recognition of the model might require slightly more
time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases,
the number of points is reduced according to the value of Optimization. If the number of points is reduced,
it may be necessary in find_scaled_shape_model to set the parameter Greediness to a smaller value,
e.g., 0.7 or 0.8. For small models, the reduction of the number of model points does not result in a speed-up of
the search because in this case usually significantly more potential instances of the model must be examined. If
Optimization is set to ’auto’, create_scaled_shape_model automatically determines the reduction
of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value
(’pregenerate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_scaled_shape_model
typically returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a
completely pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two
modes. If maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
three values must nevertheless be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_scaled_shape_model determines the three values described above
automatically. Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’),
or the minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not determined
automatically can additionally be passed in the form of a tuple. Various combinations are also allowed: if,
for example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hysteresis
thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfactory. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_scaled_shape_model.
The parameter MinContrast determines the minimum contrast that the model must have in the recognition
performed by find_scaled_shape_model. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the model and the search images, and if the parameter Metric is set to ’ig-
nore_color_polarity’ (see below) the noise in one channel must be multiplied by the square root of the number
of channels to determine MinContrast. If, for example, the gray values fluctuate within a range of 10 gray
levels in a single channel and the image is a three-channel image MinContrast should be set to 17. Obviously,
MinContrast must be smaller than Contrast. If the model should be recognized in very low contrast im-
ages, MinContrast must be set to a correspondingly small value. If the model should be recognized even if it
is severely occluded, MinContrast should be slightly larger than the range of gray value fluctuations created
by noise in order to ensure that the position and rotation of the model are extracted robustly and accurately by
find_scaled_shape_model. If MinContrast is set to ’auto’, the minimum contrast is determined auto-
matically based on the noise in the model image. Consequently, an automatic determination only makes sense if
the image noise during the recognition is similar to the noise in the model image. Furthermore, in some cases it is
advisable to increase the automatically determined value in order to increase the robustness against occlusions (see
above). The automatically computed minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the
model is a bright object on a dark background, the object is found only if it is also brighter than the back-
ground. If Metric = ’ignore_global_polarity’, the object is found in the image also if the contrast reverses
globally. In the above example, the object hence is also found if it is darker than the background. The runtime of
find_scaled_shape_model will increase slightly in this case. If Metric = ’ignore_local_polarity’, the
model is found even if the contrast changes locally. This mode can, for example, be useful if the object consists
of a part with medium gray value, within which either darker or brighter sub-objects lie. Since in this case the
runtime of find_scaled_shape_model increases significantly, it is usually better to create several models
that reflect the possible contrast variations of the object with create_scaled_shape_model, and to match
them simultaneously with find_scaled_shape_models. The above three metrics can only be applied to
single-channel images. If a multichannel image is used as the model image or as the search image only the first
channel will be used (and no error message will be returned). If Metric = ’ignore_color_polarity’, the model is
found even if the color contrast changes locally. This is, for example, the case if parts of the object can change their
color, e.g., from red to green. In particular, this mode is useful if it is not known in advance in which channels the
object is visible. In this mode, the runtime of find_scaled_shape_model can also increase significantly.
The metric ’ignore_color_polarity’ can be used for images with an arbitrary number of channels. If it is used for
single-channel images it has the same effect as ’ignore_local_polarity’. It should be noted that for Metric =
’ignore_color_polarity’ the number of channels in the model creation with create_scaled_shape_model
and in the search with find_scaled_shape_model can be different. This can, for example, be used to create
a model from a synthetically generated single-channel image. Furthermore, it should be noted that the channels do
not need to contain a spectral subdivision of the light (like in an RGB image). The channels can, for example, also
contain images of the same object that were obtained by illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
Parameter
If necessary, an exception is raised. If the parameters NumLevels and Contrast are chosen such that the model
contains too few points, the error 8510 is raised.
Parallelization Information
create_scaled_shape_model is processed completely exclusively without parallelization.
Possible Predecessors
draw_region, reduce_domain, threshold
Possible Successors
find_scaled_shape_model, find_scaled_shape_models, get_shape_model_params,
clear_shape_model, write_shape_model, set_shape_model_origin
Alternatives
create_shape_model, create_aniso_shape_model, create_template_rot
See also
set_system, get_system
Module
Matching
larger for smaller models. If AngleExtent is not an integer multiple of AngleStep, AngleStep is modified
accordingly.
If a complete pregeneration of the model is selected (see below), the model is pre-generated for the selected
angle range and stored in memory. The memory required to store the model is proportional to the number of
angle steps and the number of points in the model. Hence, if AngleStep is too small or AngleExtent too
big, it may happen that the model no longer fits into the (virtual) memory. In this case, either AngleStep
must be enlarged or AngleExtent must be reduced. In any case, it is desirable that the model completely
fits into the main memory, because this avoids paging by the operating system, and hence the time to find the
object will be much smaller. Since angles can be determined with subpixel resolution by find_shape_model,
AngleStep ≥ 1° can be selected for models of a diameter smaller than about 200 pixels. If AngleStep = ’auto’
(or 0 for backwards compatibility) is selected, create_shape_model automatically determines a suitable
angle step length based on the size of the model. The automatically computed angle step length can be queried
using get_shape_model_params.
If a complete pregeneration of the model is not selected, the model is only created in a reference pose on each
pyramid level. In this case, the model must be transformed to the different angles and scales at runtime in
find_shape_model. Because of this, the recognition of the model might require slightly more time.
For particularly large models, it may be useful to reduce the number of model points by setting Optimization
to a value different from ’none’. If Optimization = ’none’, all model points are stored. In all other cases, the
number of points is reduced according to the value of Optimization. If the number of points is reduced, it may
be necessary in find_shape_model to set the parameter Greediness to a smaller value, e.g., 0.7 or 0.8.
For small models, the reduction of the number of model points does not result in a speed-up of the search because
in this case usually significantly more potential instances of the model must be examined. If Optimization is
set to ’auto’, create_shape_model automatically determines the reduction of the number of model points.
Optionally, a second value can be passed in Optimization. This value determines whether the model is pre-
generated completely or not. To do so, the second value of Optimization must be set to either ’pregeneration’
or ’no_pregeneration’. If the second value is not used (i.e., if only one value is passed), the mode that is set
with set_system(’pregenerate_shape_models’,...) is used. With the default value
(’pregenerate_shape_models’ = ’false’), the model is not pregenerated completely. The complete pregeneration of the model
normally leads to slightly lower runtimes because the model does not need to be transformed at runtime. However,
in this case, the memory requirements and the time required to create the model are significantly higher. It should
also be noted that it cannot be expected that the two modes return exactly identical results because transforming
the model at runtime necessarily leads to different internal data for the transformed models than pregenerating the
transformed models. For example, if the model is not pregenerated completely, find_shape_model typically
returns slightly lower scores, which may require setting a slightly lower value for MinScore than for a completely
pregenerated model. Furthermore, the poses obtained by interpolation may differ slightly in the two modes. If
maximum accuracy is desired, the pose of the model should be determined by least-squares adjustment.
The parameter Contrast determines the contrast the model points must have. The contrast is a measure for
local gray value differences between the object and the background and between different parts of the object.
Contrast should be chosen such that only the significant features of the template are used for the model.
Contrast can also contain a tuple with two values. In this case, the model is segmented using a method sim-
ilar to the hysteresis threshold method used in edges_image. Here, the first element of the tuple determines
the lower threshold, while the second element determines the upper threshold. For more information about the
hysteresis threshold method, see hysteresis_threshold. Optionally, Contrast can contain a third value
as the last element of the tuple. This value determines a threshold for the selection of significant model compo-
nents based on the size of the components, i.e., components that have fewer points than the minimum size thus
specified are suppressed. This threshold for the minimum size is divided by two for each successive pyramid
level. If small model components should be suppressed, but hysteresis thresholding should not be performed,
three values must nevertheless be specified in Contrast. In this case, the first two values can simply be set to
identical values. The effect of this parameter can be checked in advance with inspect_shape_model. If
Contrast is set to ’auto’, create_shape_model determines the three values described above automatically.
Alternatively, only the contrast (’auto_contrast’), the hysteresis thresholds (’auto_contrast_hyst’), or the
minimum size (’auto_min_size’) can be determined automatically. The remaining values that are not determined
automatically can additionally be passed in the form of a tuple. Various combinations are also allowed: if, for
example, [’auto_contrast’,’auto_min_size’] is passed, both the contrast and the minimum size are determined
automatically. If [’auto_min_size’,20,30] is passed, the minimum size is determined automatically while the hysteresis
thresholds are set to 20 and 30, etc. In certain cases, it might happen that the automatic determination of
the contrast thresholds is not satisfactory. For example, a manual setting of these parameters should be preferred
if certain model components should be included or suppressed because of application-specific reasons or if the
object contains several different contrasts. Therefore, the contrast thresholds should be automatically determined
with determine_shape_model_params and subsequently verified using inspect_shape_model be-
fore calling create_shape_model.
The parameter MinContrast determines the minimum contrast that the model must have in the recognition
performed by find_shape_model. In other words, this parameter separates the model from the noise in the image.
Therefore, a good choice is the range of gray value changes caused by the noise in the image. If, for example, the
gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If multichannel images
are used for the model and the search images, and if the parameter Metric is set to ’ignore_color_polarity’ (see
below) the noise in one channel must be multiplied by the square root of the number of channels to determine
MinContrast. If, for example, the gray values fluctuate within a range of 10 gray levels in a single channel
and the image is a three-channel image MinContrast should be set to 17. Obviously, MinContrast must
be smaller than Contrast. If the model should be recognized in very low contrast images, MinContrast
must be set to a correspondingly small value. If the model should be recognized even if it is severely occluded,
MinContrast should be slightly larger than the range of gray value fluctuations created by noise in order to en-
sure that the position and rotation of the model are extracted robustly and accurately by find_shape_model. If
MinContrast is set to ’auto’, the minimum contrast is determined automatically based on the noise in the model
image. Consequently, an automatic determination only makes sense if the image noise during the recognition is
similar to the noise in the model image. Furthermore, in some cases it is advisable to increase the automatically
determined value in order to increase the robustness against occlusions (see above). The automatically computed
minimum contrast can be queried using get_shape_model_params.
The parameter Metric determines the conditions under which the model is recognized in the image. If Metric
= ’use_polarity’, the object in the image and the model must have the same contrast. If, for example, the model is
a bright object on a dark background, the object is found only if it is also brighter than the background. If Metric
= ’ignore_global_polarity’, the object is found in the image also if the contrast reverses globally. In the above
example, the object hence is also found if it is darker than the background. The runtime of find_shape_model
will increase slightly in this case. If Metric = ’ignore_local_polarity’, the model is found even if the contrast
changes locally. This mode can, for example, be useful if the object consists of a part with medium gray value,
within which either darker or brighter sub-objects lie. Since in this case the runtime of find_shape_model
increases significantly, it is usually better to create several models that reflect the possible contrast variations of
the object with create_shape_model, and to match them simultaneously with find_shape_models.
The above three metrics can only be applied to single-channel images. If a multichannel image is used as the
model image or as the search image only the first channel will be used (and no error message will be returned).
If Metric = ’ignore_color_polarity’, the model is found even if the color contrast changes locally. This is,
for example, the case if parts of the object can change their color, e.g., from red to green. In particular, this
mode is useful if it is not known in advance in which channels the object is visible. In this mode, the runtime
of find_shape_model can also increase significantly. The metric ’ignore_color_polarity’ can be used for
images with an arbitrary number of channels. If it is used for single-channel images it has the same effect as
’ignore_local_polarity’. It should be noted that for Metric = ’ignore_color_polarity’ the number of channels
in the model creation with create_shape_model and in the search with find_shape_model can be
different. This can, for example, be used to create a model from a synthetically generated single-channel image.
Furthermore, it should be noted that the channels do not need to contain a spectral subdivision of the light (like
in an RGB image). The channels can, for example, also contain images of the same object that were obtained by
illuminating the object from different directions.
The center of gravity of the domain (region) of the model image Template is used as the origin (reference point)
of the model. A different origin can be set with set_shape_model_origin.
Parameter
case Parameters contains the value ’min_contrast’, the computed minimum contrast is at least MinContrast.
If MinContrast is set to ’auto’, the minimum contrast is determined without restrictions.
Parameter
find_aniso_shape_model
Find the best matches of an anisotropic scale invariant shape model in an image.
The operator find_aniso_shape_model finds the best NumMatches instances of the anisotropic scale
invariant shape model ModelID in the input image Image. The model must have been created previously by
calling create_aniso_shape_model or read_shape_model.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example below shows how to create this matrix and use it to display the model at the found position in the
search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_model. Hence, if the model,
for example, was created with AngleStart = −20° and AngleExtent = 40° and the angle search space in
find_aniso_shape_model is, for example, set to AngleStart = 350° and AngleExtent = 20°, the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360°. To find
the model, in this example it would be necessary to select AngleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness = 0.9.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject : byte / uint2
Input image in which the model should be found.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model ; Htuple . Hlong
Handle of the model.
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Smallest rotation of the model.
Default Value : -0.39
Suggested values : AngleStart ∈ {-3.14, -1.57, -0.78, -0.39, -0.20, 0.0}
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double
Extent of the rotation angles.
Default Value : 0.78
Suggested values : AngleExtent ∈ {6.29, 3.14, 1.57, 0.78, 0.39, 0.0}
Restriction : AngleExtent ≥ 0
. ScaleRMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum scale of the model in the row direction.
Default Value : 0.9
Suggested values : ScaleRMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleRMin > 0
. ScaleRMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum scale of the model in the row direction.
Default Value : 1.1
Suggested values : ScaleRMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleRMax ≥ ScaleRMin
. ScaleCMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum scale of the model in the column direction.
Default Value : 0.9
Suggested values : ScaleCMin ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : ScaleCMin > 0
. ScaleCMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Maximum scale of the model in the column direction.
Default Value : 1.1
Suggested values : ScaleCMax ∈ {1.0, 1.1, 1.2, 1.3, 1.4, 1.5}
Restriction : ScaleCMax ≥ ScaleCMin
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum score of the instances of the model to be found.
Default Value : 0.5
Suggested values : MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MinScore ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
. NumMatches (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of instances of the model to be found.
Default Value : 1
Suggested values : NumMatches ∈ {0, 1, 2, 3, 4, 5, 10, 20}
. MaxOverlap (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum overlap of the instances of the model to be found.
Default Value : 0.5
Suggested values : MaxOverlap ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Typical range of values : 0 ≤ MaxOverlap ≤ 1
Minimum Increment : 0.01
Recommended Increment : 0.05
Result
If the parameter values are correct, the operator find_aniso_shape_model returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_aniso_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_aniso_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_scaled_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
Find the best matches of multiple anisotropic scale invariant shape models.
The operator find_aniso_shape_models finds the best NumMatches instances of the anisotropic scale
invariant shape models that are passed in ModelIDs in the input image Image. The models must have been
created previously by calling create_aniso_shape_model or read_shape_model.
Hence, in contrast to find_aniso_shape_model, multiple models can be searched in the same image in
one call. This changes the semantics of all input parameters to some extent. All input parameters must either
contain one element, in which case the parameter is used for all models, or must contain the same number of ele-
ments as ModelIDs, in which case each parameter element refers to the corresponding element in ModelIDs.
(NumLevels may also contain either two or twice the number of elements as ModelIDs; see below.) As usual,
the domain of the input image Image is used to restrict the search space for the reference point of the models
ModelIDs. Consistent with the above semantics, the input image Image can therefore contain a single image
object or an image object tuple containing multiple image objects. If Image contains a single image object, its
domain is used as the region of interest for all models in ModelIDs. If Image contains multiple image objects,
each domain is used as the region of interest for the corresponding model in ModelIDs. In this case, the im-
age matrix of all image objects in the tuple must be identical, i.e., Image cannot be constructed in an arbitrary
manner using concat_obj, but must be created from the same image using add_channels or equivalent
calls. If this is not the case, an error message is returned. The above semantics also hold for the input con-
trol parameters. Hence, for example, MinScore can contain a single value or the same number of values as
ModelIDs. In the first case, the value of MinScore is used for all models in ModelIDs, while in the second
case the respective value of the elements in MinScore is used for the corresponding model in ModelIDs. An
extension to these semantics holds for NumMatches and MaxOverlap. If NumMatches contains one ele-
ment, find_aniso_shape_models returns the best NumMatches instances of the model irrespective of the
type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected, it
can happen that two instances of the first model and no instances of the second model, one instance of the first
model and one instance of the second model, or no instances of the first model and two instances of the second
model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances re-
turned of the different models corresponds to the number specified in the respective entry in NumMatches. If,
for example, NumMatches = [1, 1] is selected, one instance of the first model and one instance of the second
model is returned. For a detailed description of the semantics of NumMatches, see below. A similar extension
of the semantics holds for MaxOverlap. If a single value is passed for MaxOverlap, the overlap is com-
puted for all found instances of the different models, irrespective of the model type, i.e., instances of the same
or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in
MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e.,
only instances of the same model that overlap too much are eliminated. In this mode, models of different types
may overlap completely. For a detailed description of the semantics of MaxOverlap, see below. Hence, a call to
find_aniso_shape_models with multiple values for ModelIDs, NumMatches and MaxOverlap has
the same effect as multiple independent calls to find_aniso_shape_model with the respective parameters.
However, a single call to find_aniso_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
The position, rotation, and scale in the row and column direction of the found instances of the model are returned
in Row, Column, Angle, ScaleR, and ScaleC. The coordinates Row and Column are the coordinates of the
origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region)
of the image that was used to create the shape model with create_aniso_shape_model. A different origin
can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_aniso_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_aniso_shape_model. A different origin set with set_shape_model_origin is not taken into
account. The model is searched within those points of the domain of the image, in which the model lies completely
within the image. This means that the model will not be found if it extends beyond the borders of the image, even if
it would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleRMin, ScaleRMax, ScaleCMin, and ScaleCMax determine the range of
scales in the row and column directions for which the model is searched. If necessary, both ranges are clipped
to the range given when the model was created with create_aniso_shape_model. In particular, this
means that the angle ranges of the model and the search must truly overlap. The angle range in the search is
not adapted modulo 2π. To simplify the presentation, all angles in the remainder of the paragraph are given in
degrees, whereas they have to be specified in radians in find_aniso_shape_models. Hence, if the model,
for example, was created with AngleStart = −20◦ and AngleExtent = 40◦ and the angle search space in
find_aniso_shape_models is, for example, set to AngleStart = 350◦ and AngleExtent = 20◦ , the
model will not be found, even though the angle ranges would overlap if they were regarded modulo 360◦ . To find
the model, in this example it would be necessary to select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_aniso_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_aniso_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_aniso_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness = 0.9.
Parameter
into account. The model is searched within those points of the domain of the image, in which the model lies
completely within the image. This means that the model will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than MinScore (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause models that extend beyond the im-
age border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are
regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase
in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
find_scaled_shape_model. Hence, if the model, for example, was created with AngleStart = −20◦
and AngleExtent = 40◦ and the angle search space in find_scaled_shape_model is, for example, set
to AngleStart = 350◦ and AngleExtent = 20◦ , the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360◦ . To find the model, in this example it would be necessary to
select AngleStart = −10◦ .
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set
to at least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired
accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific
enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this
case, the lowest pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will still be found for Greediness = 0.9.
Parameter
Result
If the parameter values are correct, the operator find_scaled_shape_model returns the value
H_MSG_TRUE. If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_scaled_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_scaled_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_shape_model, find_aniso_shape_model, find_shape_models,
find_scaled_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_scaled_shape_model shows how to create this matrix and use it to display the
model at the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_scaled_shape_model. A different origin set with set_shape_model_origin is not taken
into account. The model is searched within those points of the domain of the image, in which the model lies
completely within the image. This means that the model will not be found if it extends beyond the borders of the
image, even if it would achieve a score greater than MinScore (see below). This behavior can be changed with
set_system(’border_shape_models’,’true’), which will cause models that extend beyond the im-
age border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are
regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase
in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model
is searched. If necessary, both ranges are clipped to the range given when the model was created with
create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search
must truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians in
find_scaled_shape_models. Hence, if the model, for example, was created with AngleStart = −20°
and AngleExtent = 40° and the angle search space in find_scaled_shape_models is, for example, set
to AngleStart = 350° and AngleExtent = 20°, the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360°. To find the model, in this example it would be necessary to
select AngleStart = −10°.
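Because no modulo normalization is applied, a plain interval intersection test on the raw angle values predicts whether the two ranges overlap. The following sketch (plain C, not a HALCON call) reproduces the example above:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define DEG2RAD(d) ((d) * M_PI / 180.0)

/* Sketch, not a HALCON call: tests whether the search range
 * [search_start, search_start + search_extent] intersects the model's
 * creation range [model_start, model_start + model_extent].  As described
 * above, no modulo-2*pi normalization takes place, so both ranges must be
 * expressed on the same branch of the angle axis. */
static int angle_ranges_overlap(double model_start, double model_extent,
                                double search_start, double search_extent)
{
    double model_end  = model_start + model_extent;
    double search_end = search_start + search_extent;
    return search_start < model_end && model_start < search_end;
}
```

With the model range [−20°, 20°], a search range starting at 350° yields no intersection, whereas one starting at −10° does.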
Furthermore, it should be noted that in some cases instances with a rotation or scale that is slightly outside the
specified range are found. This may happen if the specified range is smaller than the range given when the model
was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
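The MaxOverlap decision can be illustrated with a simplified sketch. Note two assumptions: axis-aligned boxes stand in for the smallest enclosing rectangles of arbitrary orientation that the operator actually uses, and the normalization of the overlap fraction by the smaller rectangle is an assumption of this sketch:

```c
#include <math.h>

static double box_area(double r0, double c0, double r1, double c1)
{
    return (r1 - r0) * (c1 - c0);
}

/* Overlap fraction of two axis-aligned boxes, normalized by the smaller
 * box (normalization is an assumption made for this illustration only). */
static double overlap_fraction(double ar0, double ac0, double ar1, double ac1,
                               double br0, double bc0, double br1, double bc1)
{
    double r0 = ar0 > br0 ? ar0 : br0;
    double c0 = ac0 > bc0 ? ac0 : bc0;
    double r1 = ar1 < br1 ? ar1 : br1;
    double c1 = ac1 < bc1 ? ac1 : bc1;
    double area_a, area_b, smaller;
    if (r1 <= r0 || c1 <= c0)
        return 0.0;                      /* boxes do not intersect */
    area_a  = box_area(ar0, ac0, ar1, ac1);
    area_b  = box_area(br0, bc0, br1, bc1);
    smaller = area_a < area_b ? area_a : area_b;
    return box_area(r0, c0, r1, c1) / smaller;
}

/* Two instances are returned separately only if their overlap fraction
 * does not exceed max_overlap. */
static int instances_distinct(double fraction, double max_overlap)
{
    return fraction <= max_overlap;
}
```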
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model.
If SubPixel is set to ’interpolation’ (or ’true’) the position as well as the rotation and scale are determined
with subpixel accuracy. In this mode, the model’s pose is interpolated from the score function. This mode costs
almost no computation time and achieves an accuracy that is high enough for most applications. In some applica-
tions, however, the accuracy requirements are extremely high. In these cases, the model’s pose can be determined
through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding
image points. In contrast to ’interpolation’, this mode requires additional computation time. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used
to determine the accuracy with which the minimum distance is being searched. The higher the accuracy is cho-
sen, the longer the subpixel extraction will take, however. Usually, SubPixel should be set to ’interpolation’.
If least-squares adjustment is desired, ’least_squares’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of
levels is clipped to the range given when the shape model was created with create_scaled_shape_model.
If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found
matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid
level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value
of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in
general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the
matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at
least ’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
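The interleaved layout described above can be sketched with a small helper (plain C, illustrating the layout only):

```c
/* Weaves the per-model number of pyramid levels and lowest pyramid level
 * into the interleaved layout expected in NumLevels:
 * [levels0, lowest0, levels1, lowest1, ...].
 * out must provide room for 2 * num_models entries. */
static void interleave_num_levels(const long *levels, const long *lowest,
                                  int num_models, long *out)
{
    int i;
    for (i = 0; i < num_models; ++i) {
        out[2 * i]     = levels[i];
        out[2 * i + 1] = lowest[i];
    }
}
```

For the two-model example above (5 and 4 pyramid levels, lowest levels 2 and 1), this yields [5, 2, 4, 1].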
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
Parameter
Alternatives
find_shape_models, find_aniso_shape_models, find_shape_model,
find_scaled_shape_model, find_aniso_shape_model, best_match_rot_mg
See also
set_system, get_system
Module
Matching
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the
higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest
pyramid level to use must be set to a smaller value.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
Parameter
Result
If the parameter values are correct, the operator find_shape_model returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, read_shape_model, set_shape_model_origin
Possible Successors
clear_shape_model
Alternatives
find_scaled_shape_model, find_aniso_shape_model, find_scaled_shape_models,
find_shape_models, find_aniso_shape_models, best_match_rot_mg
See also
set_system, get_system
Module
Matching
the model type, i.e., instances of the same or of different models that overlap too much are eliminated. If, on the
other hand, multiple values are passed in MaxOverlap, the overlap is only computed for found instances of the
model that have the same model type, i.e., only instances of the same model that overlap too much are eliminated.
In this mode, models of different types may overlap completely. For a detailed description of the semantics
of MaxOverlap, see below. Hence, a call to find_shape_models with multiple values for ModelIDs,
NumMatches and MaxOverlap has the same effect as multiple independent calls to find_shape_model
with the respective parameters. However, a single call to find_shape_models is considerably more efficient.
The type of the found instances of the models is returned in Model. The elements of Model are indices into the
tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs| − 1. Hence, a value of 0 in an element of
Model corresponds to an instance of the first model in ModelIDs.
The position and rotation of the found instances of the model is returned in Row, Column, and Angle. The
coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default,
the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin can be set with set_shape_model_origin.
Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search
image. Thus, you cannot directly use them. Instead, the values are optimized for creating the transformation matrix
with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps.
The example given for find_shape_model shows how to create this matrix and use it to display the model at
the found position in the search image and to calculate the exact coordinates.
Additionally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which
is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
The domain of the image Image determines the search space for the reference point of the model, i.e.,
for the center of gravity of the domain (region) of the image that was used to create the shape model with
create_shape_model. A different origin set with set_shape_model_origin is not taken into account.
The model is searched within those points of the domain of the image, in which the model lies completely within
the image. This means that the model will not be found if it extends beyond the borders of the image, even if it
would achieve a score greater than MinScore (see below). This behavior can be changed with set_system
(’border_shape_models’,’true’), which will cause models that extend beyond the image border to be
found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being
occluded, i.e., they lower the score. It should be noted that the runtime of the search will increase in this mode.
The parameters AngleStart and AngleExtent determine the range of rotations for which the model is
searched. If necessary, the range of rotations is clipped to the range given when the model was created with
create_shape_model. In particular, this means that the angle ranges of the model and the search must
truly overlap. The angle range in the search is not adapted modulo 2π. To simplify the presentation, all
angles in the remainder of the paragraph are given in degrees, whereas they have to be specified in radians
in find_shape_models. Hence, if the model, for example, was created with AngleStart = −20°
and AngleExtent = 40° and the angle search space in find_shape_models is, for example, set to
AngleStart = 350° and AngleExtent = 20°, the model will not be found, even though the angle ranges
would overlap if they were regarded modulo 360°. To find the model, in this example it would be necessary to
select AngleStart = −10°.
Furthermore, it should be noted that in some cases instances with a rotation that is slightly outside the specified
range of rotations are found. This may happen if the specified range of rotations is smaller than the range given
when the model was created.
The parameter MinScore determines what score a potential match must at least have to be regarded as an instance
of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected
never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked
to the lowest pyramid level (see below) it might happen that instances with a score slightly below MinScore are
found.
The maximum number of instances to be found can be determined with NumMatches. If more than
NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches
instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter
MinScore takes precedence over NumMatches.
If the model exhibits symmetries it may happen that multiple instances with similar positions but different rota-
tions are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between
0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be
returned separately. If two instances overlap each other by more than MaxOverlap only the best instance is
returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see
smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap
at all, while for MaxOverlap = 1 all instances are returned.
The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If
SubPixel is set to ’none’ (or ’false’ for backwards compatibility) the model’s pose is only determined with
pixel accuracy and the angle resolution that was specified with create_shape_model. If SubPixel is set
to ’interpolation’ (or ’true’) the position as well as the rotation are determined with subpixel accuracy. In this
mode, the model’s pose is interpolated from the score function. This mode costs almost no computation time
and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy
requirements are extremely high. In these cases, the model’s pose can be determined through a least-squares ad-
justment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to
’interpolation’, this mode requires additional computation time. The different modes for least-squares adjustment
(’least_squares’, ’least_squares_high’, and ’least_squares_very_high’) can be used to determine the accuracy with
which the minimum distance is being searched. The higher the accuracy is chosen, the longer the subpixel extrac-
tion will take, however. Usually, SubPixel should be set to ’interpolation’. If least-squares adjustment is desired,
’least_squares’ should be chosen because this results in the best tradeoff between runtime and accuracy.
The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number
of levels is clipped to the range given when the shape model was created with create_shape_model. If
NumLevels is set to 0, the number of pyramid levels specified in create_shape_model is used. Optionally,
NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are
tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and
tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This
mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the
accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches
are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least
’least_squares’. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy
cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on
the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the
lowest pyramid level to use must be set to a smaller value. If the lowest pyramid level is specified separately for
each model, NumLevels must contain twice the number of elements as ModelIDs. In this case, the number
of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example,
two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second
model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5, 2, 4, 1]
must be selected. If exactly two models are specified in ModelIDs, a special case occurs. If in this case the lowest
pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified
explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted
as the explicit specification of the number of pyramid levels for the two models.
The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0,
a safe search heuristic is used, which always finds the model if it is visible in the image. However, the search will
be relatively time consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may
cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the
maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness =
0.9.
Parameter
Possible Successors
clear_shape_model
Alternatives
find_scaled_shape_models, find_aniso_shape_models, find_shape_model,
find_scaled_shape_model, find_aniso_shape_model, best_match_rot_mg
See also
set_system, get_system
Module
Matching
two values each are returned in the above three parameters. Here, the first value of the respective parameter refers
to the scaling in the row direction, while the second value refers to the scaling in the column direction.
Note that the parameters Optimization and Contrast that also can be determined automatically during
the model creation cannot be queried by using get_shape_model_params. If their value is of interest
determine_shape_model_params should be used instead.
Parameter
. ModelID (input_control) ................................... shape_model ; (Htuple) Hlong
Handle of the model.
. NumLevels (output_control) ................................ integer ; (Htuple) Hlong *
Number of pyramid levels.
. AngleStart (output_control) ............................... angle.rad ; (Htuple) double *
Smallest rotation of the pattern.
. AngleExtent (output_control) .............................. angle.rad ; (Htuple) double *
Extent of the rotation angles.
Assertion: AngleExtent ≥ 0
. AngleStep (output_control) ................................ angle.rad ; (Htuple) double *
Step length of the angles (resolution).
Assertion: (AngleStep ≥ 0) ∧ (AngleStep ≤ π/16)
. ScaleMin (output_control) ................................. number(-array) ; (Htuple) double *
Minimum scale of the pattern.
Assertion: ScaleMin > 0
. ScaleMax (output_control) ................................. number(-array) ; (Htuple) double *
Maximum scale of the pattern.
Assertion: ScaleMax ≥ ScaleMin
. ScaleStep (output_control) ................................ number(-array) ; (Htuple) double *
Scale step length (resolution).
Assertion: ScaleStep ≥ 0
. Metric (output_control) ................................... string ; (Htuple) char *
Match metric.
. MinContrast (output_control) .............................. number ; (Htuple) Hlong *
Minimum contrast of the objects in the search images.
Result
If the handle of the model is valid, the operator get_shape_model_params returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_shape_model_params is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model
See also
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models
Module
Matching
Module
Foundation
Parallelization Information
set_shape_model_origin is processed completely exclusively without parallelization.
Possible Predecessors
create_shape_model, create_scaled_shape_model, create_aniso_shape_model,
read_shape_model
Possible Successors
find_shape_model, find_scaled_shape_model, find_aniso_shape_model,
find_shape_models, find_scaled_shape_models, find_aniso_shape_models,
get_shape_model_origin
See also
area_center
Module
Matching
Matching-3D
clear_all_object_model_3d ( )
T_clear_all_object_model_3d ( )
clear_all_shape_model_3d ( )
T_clear_all_shape_model_3d ( )
The operator clear_object_model_3d frees the memory of a 3D object model that was created by
read_object_model_3d_dxf. After calling clear_object_model_3d, the model can no longer be
used. The handle ObjectModel3DID becomes invalid.
Parameter
. ObjectModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; Hlong
Handle of the 3D object model.
Result
If the handle of the model is valid, the operator clear_object_model_3d returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
clear_object_model_3d is processed completely exclusively without parallelization.
Possible Predecessors
read_object_model_3d_dxf
See also
clear_all_object_model_3d
Module
3D Metrology
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from spherical to Cartesian coordinates by using
convert_point_3d_spher_to_cart, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_cart_to_spher.
The operator convert_point_3d_cart_to_spher can be used, for example, to convert a given camera
position into spherical coordinates. If multiple camera positions are converted in this way, one obtains a pose range
(in spherical coordinates), which can be passed to create_shape_model_3d in order to create a 3D shape
model.
Parameter
’x’: The equatorial plane is the yz plane. The positive x axis points to the north pole.
’-x’: The equatorial plane is the yz plane. The positive x axis points to the south pole.
’y’: The equatorial plane is the xz plane. The positive y axis points to the north pole.
’-y’: The equatorial plane is the xz plane. The positive y axis points to the south pole.
’z’: The equatorial plane is the xy plane. The positive z axis points to the north pole.
’-z’: The equatorial plane is the xy plane. The positive z axis points to the south pole.
The position of the zero meridian can be specified with the parameter ZeroMeridian. For this, the coordinate
axis (lying in the equatorial plane) that points to the zero meridian must be passed. The following values for
ZeroMeridian are valid:
’x’: The positive x axis points in the direction of the zero meridian.
’-x’: The negative x axis points in the direction of the zero meridian.
’y’: The positive y axis points in the direction of the zero meridian.
’-y’: The negative y axis points in the direction of the zero meridian.
’z’: The positive z axis points in the direction of the zero meridian.
’-z’: The negative z axis points in the direction of the zero meridian.
Only reasonable combinations of EquatPlaneNormal and ZeroMeridian are permitted, i.e., the normal
of the equatorial plane must not be parallel to the direction of the zero meridian. For example, the combination
EquatPlaneNormal=’y’ and ZeroMeridian=’-y’ is not permitted.
Note that in order to guarantee a consistent conversion back from Cartesian to spherical coordinates by using
convert_point_3d_cart_to_spher, the same values must be passed for EquatPlaneNormal and
ZeroMeridian as were passed to convert_point_3d_spher_to_cart.
The operator convert_point_3d_spher_to_cart can be used, for example, to convert a camera position
that is given in spherical coordinates into Cartesian coordinates. The result can then be utilized to create a complete
camera pose by passing the Cartesian coordinates to create_cam_pose_look_at_point.
Parameter
The operator create_cam_pose_look_at_point creates a 3D camera pose with respect to a world coor-
dinate system based on two points and the camera roll angle.
The first of the two points defines the position of the optical center of the camera in the world coordinate system,
i.e., the origin of the camera coordinate system. It is given by its three coordinates CamPosX, CamPosY, and
CamPosZ. The second of the two points defines the viewing direction of the camera. It represents the point in the
world coordinate system at which the camera is to look. It is also specified by its three coordinates LookAtX,
LookAtY, and LookAtZ. Consequently, the second point lies on the z axis of the camera coordinate system.
Finally, the remaining degree of freedom to be specified is a rotation of the camera around its z axis, i.e.,
the roll angle of the camera. To determine this rotation, the normal of a reference plane can be specified in
RefPlaneNormal, which defines the reference orientation of the camera. Finally, the camera roll angle can
be specified in CamRoll, which describes a rotation of the camera around its z axis with respect to its reference
orientation.
The reference plane can be seen as a plane in the world coordinate system that is parallel to the x axis of the
camera (in its reference orientation, i.e., with a roll angle of 0). In an alternative interpretation, the normal vector
of the reference plane projected onto the image plane points upwards, i.e., it is mapped to the negative y axis of the
camera coordinate system. The parameter RefPlaneNormal may take one of the following values:
’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.
Alternatively to the above values, an arbitrary normal vector can be specified in RefPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
create_cam_pose_look_at_point is particularly useful if a 3D object model or a 3D shape
model should be visualized from a certain camera position. In this case, the pose that is cre-
ated by create_cam_pose_look_at_point can be passed to project_object_model_3d or
project_shape_model_3d, respectively.
It is also possible to pass tuples of different length for different input parameters. In this case, internally the
maximum number of parameter values over all input control parameters is computed. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
Parameter
CamPosX (input_control): real(-array); Htuple (double)
X coordinate of the optical center of the camera.
CamPosY (input_control): real(-array); Htuple (double)
Y coordinate of the optical center of the camera.
CamPosZ (input_control): real(-array); Htuple (double)
Z coordinate of the optical center of the camera.
LookAtX (input_control): real(-array); Htuple (double)
X coordinate of the 3D point to which the camera is directed.

The pose range within which the model views are generated can be specified by the parameters RefRotX,
RefRotY, RefRotZ, OrderOfRotation, LongitudeMin, LongitudeMax, LatitudeMin,
LatitudeMax, CamRollMin, CamRollMax, DistMin, and DistMax. Note that the model will
only be recognized during the matching if it appears within the specified pose range. The parameters are described
in the following:
Before computing the views, the origin of the coordinate system of the 3D object model is moved to the refer-
ence point of the 3D object model, which is the center of the smallest enclosing axis-parallel cuboid and can be
queried by using get_object_model_3d_params. The virtual cameras, which are used to create the views,
are arranged around the 3D object model in such a way that they all look at the origin of the coordinate system,
i.e., the z axes of the cameras pass through the origin. The pose range can then be specified by restricting the
views to a certain quadrilateral on the sphere around the origin. This naturally leads to the use of the spheri-
cal coordinates longitude, latitude, and radius. The definition of the spherical coordinate system is chosen such
that the equatorial plane corresponds to the xz plane of the Cartesian coordinate system with the y axis pointing
to the south pole (negative latitude) and the negative z axis pointing in the direction of the zero meridian (see
convert_point_3d_spher_to_cart or convert_point_3d_cart_to_spher for further details
about the conversion between Cartesian and spherical coordinates). The advantage of this definition is that a cam-
era with the pose [0,0,z,0,0,0,0] has its optical center at longitude=0, latitude=0, and radius=z. In this case, the
radius represents the distance of the optical center of the camera to the reference point of the 3D object model.
The longitude range, for which views are to be generated, can be specified by LongitudeMin and
LongitudeMax, both given in radians. Accordingly, the latitude range can be specified by LatitudeMin
and LatitudeMax, also given in radians. The minimum and maximum distance between the camera cen-
ter and the model reference point is specified by DistMin and DistMax. Note that the unit of the distance
must be meters (assuming that the parameter Scale has been correctly set when reading the DXF file with
read_object_model_3d_dxf). Finally, the minimum and the maximum camera roll angle can be speci-
fied in CamRollMin and CamRollMax. This interval specifies the allowable camera rotation around its z axis
with respect to the 3D object model. If the image plane is parallel to the plane on which the objects reside and if it
is known that the object may rotate in this plane only in a restricted range, then it is reasonable to specify this range
in CamRollMin and CamRollMax. In all other cases the interpretation of the camera roll angle is difficult, and
hence, it is recommended to set this interval to [−π, +π]. Note that the larger the specified pose range, the more memory the model will consume (except for the range of the camera roll angle) and the slower the matching will be.
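Since all angular range parameters are expected in radians, pose ranges that are known in degrees must be converted first. A trivial helper, shown only for illustration:

```c
#include <math.h>

/* Degrees-to-radians helper for the angular pose range parameters. */
double deg_to_rad(double degrees)
{
    return degrees * acos(-1.0) / 180.0;
}
```

For example, a camera roll range of [−180°, +180°] corresponds to the interval [−π, +π] recommended above.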
The orientation of the coordinate system of the 3D object model is defined by the coordinates within the DXF file
that was read by using read_object_model_3d_dxf. Therefore, it is reasonable to previously rotate the
3D object model into a reference orientation such that the view that corresponds to longitude=0 and latitude=0 is
approximately at the center of the pose range. This can be achieved by passing appropriate values for the reference
orientation in RefRotX, RefRotY, RefRotZ, and OrderOfRotation. The rotation is performed around the
axes of the 3D object model, whose origin was set to the reference point. The longitude and latitude range can then
be interpreted as a variation of the 3D object model pose around the reference orientation. There are two possible
ways to specify the reference orientation. The first possibility is to specify three rotation angles in RefRotX,
RefRotY, and RefRotZ and the order in which the three rotations are to be applied in OrderOfRotation,
which can either be ’gba’ or ’abg’. The second possibility is to specify the three components of the Rodriguez
rotation vector in RefRotX, RefRotY, and RefRotZ. In this case, OrderOfRotation must be set to ’ro-
driguez’ (see create_pose for detailed information about the order of the rotations and the definition of the
Rodriguez vector).
Thus, two transformations are applied to the 3D object model before computing the model views within the pose
range. The first transformation is the translation of the origin of the coordinate system to the reference point. The
second transformation is the rotation of the 3D object model to the desired reference orientation around the axes
of the reference coordinate system. By combining both transformations one obtains the reference pose of the 3D
shape model. The reference pose of the 3D shape model thus describes the pose of the reference coordinate system
with respect to the coordinate system of the 3D object model defined by the DXF file. Let t = (x, y, z)^T be the
coordinates of the reference point of the 3D object model and R be the rotation matrix containing the reference
orientation. Then, a point p_m given in the 3D object model coordinate system can be transformed to a point p_r in
the reference coordinate system of the 3D shape model by applying the following formula:
p_r = R · (p_m − t)
This transformation can be expressed by a homogeneous 3D transformation matrix or alternatively in terms of a 3D
pose. The latter can be queried by passing ’reference_pose’ for the parameter GenParamNames of the operator
get_shape_model_3d_params. The above formula can be best imagined as a pose of pose type 8, 10, or 12,
depending on the value that was chosen for OrderOfRotation (see create_pose for detailed information
about the different pose types). Note, however, that get_shape_model_3d_params always returns the pose
using the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the
other coordinate system by using trans_pose_shape_model_3d.
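Applied to a concrete point, the formula above is a translation followed by a matrix-vector product. A minimal C sketch (illustrative only; the function name is an assumption of this sketch):

```c
/* Transform a point from 3D object model coordinates into the reference
   coordinate system of the 3D shape model: p_r = R * (p_m - t). */
void model_to_ref(const double R[3][3], const double t[3],
                  const double pm[3], double pr[3])
{
    int i, j;
    for (i = 0; i < 3; i++) {
        double s = 0.0;
        for (j = 0; j < 3; j++) {
            s += R[i][j] * (pm[j] - t[j]);  /* translate, then rotate */
        }
        pr[i] = s;
    }
}
```

With R set to the identity and t to the reference point, the result is simply the point expressed relative to the reference point.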
With MinContrast, it can be determined which edge contrast the model must at least have in the recognition
performed by find_shape_model_3d. In other words, this parameter separates the model from the noise
in the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine MinContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, MinContrast should be set to 17.
If the model should be recognized in very low contrast images, MinContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, MinContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by find_shape_model_3d.
The parameters described above are application-dependent and must always be specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not be specified but can be left at their default values.
If desired, these parameters and their corresponding values can be specified by using GenParamNames and
GenParamValues, respectively. The following values for GenParamNames are possible:
’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
by this the time necessary to find the model is significantly reduced. On the other hand, the number of
levels must be chosen such that the shape representations of the views on the highest pyramid level are
still recognizable and contain a sufficient number of points (at least four). If not enough model points are
generated for a certain view, the view is deleted from the model and replaced by a view on a lower pyramid
level. If for all views on a pyramid level not enough model points are generated, the number of levels is
reduced internally until for at least one view enough model points are found on the highest pyramid level.
If this procedure would lead to a model with no pyramid levels, i.e., if the number of model points is
already too small for all views on the lowest pyramid level, create_shape_model_3d returns an error
message. If ’num_levels’ is set to ’auto’ (default value), create_shape_model_3d determines the
number of pyramid levels automatically. In this case all model views on all pyramid levels are automatically
checked whether their shape representations are still recognizable. If the shape representation of a certain
view is found to be not recognizable, the view is deleted from the model and replaced by a view on a lower
pyramid level. Note that if ’num_levels’ is set to ’auto’, the number of pyramid levels can be different for
different views. In rare cases, it might happen that create_shape_model_3d determines a value for
the number of pyramid levels that is too large or too small. If the number of pyramid levels is chosen too
large, the model may not be recognized in the image or it may be necessary to select very low values
for MinScore or Greediness in find_shape_model_3d in order to find the model. If the number
of pyramid levels is chosen too small, the time required to find the model in find_shape_model_3d
may increase. In these cases, the views on the pyramid levels should be checked by using the output of
get_shape_model_3d_contours.
Suggested values: ’auto’, 3, 4, 5, 6
Default value: ’auto’
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’. If
the number of points is reduced, it may be necessary in find_shape_model_3d to set the parame-
ter Greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
create_shape_model_3d automatically determines the reduction of the number of model points for
each model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default value: ’auto’
’metric’: This parameter determines the conditions under which the model is recognized in the image. Cur-
rently, only the metric ’ignore_segment_polarity’ is supported, which recognizes an object even if the con-
Possible Predecessors
read_object_model_3d_dxf, project_object_model_3d, get_object_model_3d_params
Possible Successors
find_shape_model_3d, write_shape_model_3d, project_shape_model_3d,
get_shape_model_3d_params, get_shape_model_3d_contours
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
values. If desired, these parameters and their corresponding values can be specified by using GenParamNames
and GenParamValues, respectively. The following values for GenParamNames are possible:
• If the pose range in which the model is to be searched is smaller than the pose range that was specified during
the model creation with create_shape_model_3d, the pose range can be restricted appropriately with
the following parameters. If the values lie outside the pose range of the model, the values are automatically
clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-90)
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(90)
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: rad(-45), rad(-30), rad(-15)
Default value: rad(-180)
’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: rad(15), rad(30), rad(45)
Default value: rad(180)
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default value: ∞
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than MinScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter MinScore takes precedence over ’num_matches’. If
’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the more
matches are to be found, the slower the matching will be.
Suggested values: 0, 1, 2, 3
Default value: 1
’max_overlap’: It may happen that multiple instances with similar positions but with different orientations
are found in the image. The parameter ’max_overlap’ determines by what fraction (i.e., a number be-
tween 0 and 1) two instances may at most overlap in order to consider them as different instances, and
hence to be returned separately. If two instances overlap each other by more than the specified value only
the best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle
of arbitrary orientation (see smallest_rectangle2) of the found instances. If ’max_overlap’ = 0,
the found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default value: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined after
the matching. If ’pose_refinement’ is set to ’none’, the model’s pose is only determined with limited
accuracy. In this case, the accuracy depends on several sampling steps that are used inside the match-
ing process and therefore cannot be predicted very well. Hence, ’pose_refinement’ should only be
set to ’none’ when the computation time is of primary concern and an approximate pose is sufficient.
In all other cases the pose should be determined through a least-squares adjustment, i.e., by minimiz-
ing the distances of the model points to their corresponding image points. In order to achieve a high
accuracy, this refinement is directly performed in 3D. Therefore, the refinement requires additional com-
putation time. The different modes for least-squares adjustment (’least_squares’, ’least_squares_high’,
and ’least_squares_very_high’) can be used to determine the accuracy with which the minimum distance
is searched for. The higher the accuracy is chosen, the longer the pose refinement will take, however.
For most applications ’least_squares_high’ should be chosen because this results in the best tradeoff
between runtime and accuracy.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default value: ’least_squares_high’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary, for
example, if a high degree of clutter is present in the image, which prevents the least-squares adjustment
from finding the optimum pose. In this case, ’outlier_suppression’ should be set to either ’medium’
(eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion of outliers). However,
in most applications, no robust outlier suppression is necessary, and hence, ’outlier_suppression’ can
be left at ’none’. It should be noted that activating the outlier suppression significantly increases the
computation time.
List of values: ’none’, ’medium’, ’high’
Default value: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than ’none’,
and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode in which
the accuracies that are computed during the least-squares adjustment are returned in CovPose. If
’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the 6 pose parameters
are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covariances’, CovPose contains
the 36 values of the complete 6 × 6 covariance matrix of the 6 pose parameters.
List of values: ’standard_deviations’, ’covariances’
Default value: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the model
lies completely within the image. This means that the model will not be found if it extends beyond
the borders of the image, even if it would achieve a score greater than MinScore. This behavior can
be changed by setting ’border_model’ to ’true’, which will cause models that extend beyond the image
border to be found if they achieve a score greater than MinScore. Here, points lying outside the image
are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the
search will increase in this mode.
List of values: ’false’, ’true’
Default value: ’false’
Parameter
Result
If the parameter values are correct, the operator find_shape_model_3d returns the value H_MSG_TRUE.
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
find_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
project_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
’reference_point’: 3D coordinates of the reference point of the model. The reference point is the center of the
smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).
Parameter
even on the highest pyramid level, a higher number of pyramid levels should be chosen already during the creation
of the 3D shape model by using create_shape_model_3d.
Additionally, the pose of the selected view is returned in ViewPose. It can be used, for example, to project the
3D shape model according to the view pose by using project_shape_model_3d. The rating of the model
contours that was described above can then be performed by comparing the ModelContours to the projected
model. Note that the position of the contours of the projection and the position of the model contours may slightly
differ because of radial distortions.
Parameter
’cam_param’: Interior parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level, the number of views
that are stored in the 3D shape model is returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’.
’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose describes the pose
of the internally used reference coordinate system of the 3D shape model with respect to the coordinate
system that is used in the underlying 3D object model.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].
A detailed description of the parameters can be looked up with the operator create_shape_model_3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to GenParamNames. As a result a tuple of the same length with the correspond-
ing values is returned in GenParamValues. Note that this is solely possible for parameters that return only a
single value.
Parameter
Result
If the parameters are valid, the operator get_shape_model_3d_params returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
get_shape_model_3d_params is reentrant and processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
find_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
Result
project_object_model_3d returns H_MSG_TRUE if all parameters are correct. If necessary, an exception
is raised.
Parallelization Information
project_object_model_3d is reentrant and processed without parallelization.
Possible Predecessors
read_object_model_3d_dxf, affine_trans_object_model_3d
Possible Successors
clear_object_model_3d
See also
project_shape_model_3d
Module
3D Metrology
• POLYLINE
– Polyface meshes
• 3DFACE
• LINE
• CIRCLE
• ARC
• ELLIPSE
• SOLID
• BLOCK
• INSERT
Two-dimensional linear elements like the DXF elements CIRCLE or ELLIPSE are interpreted as faces even if they
are not extruded. If necessary, they are closed. Two-dimensional linear elements that consist of just two points are
not used because they do not define a face. Thus, elements of the type LINE are only used if they are extruded.
The curved surface of extruded DXF entities of the type CIRCLE, ARC, and ELLIPSE is approximated by planar
faces. The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’
and ’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points
that are used for the approximation of the DXF element CIRCLE, ARC, or ELLIPSE. Note that the parameter
’min_num_points’ always refers to the full circle or ellipse, respectively, even for ARCs or elliptical arcs, i.e., if
’min_num_points’ is set to 50 and a DXF entity of the type ARC is read that represents a semi-circle, this semi-
circle is approximated by at least 25 sampling points. The parameter ’max_approx_error’ defines the maximum
deviation of the XLD contour from the ideal circle or ellipse, respectively. The determination of this deviation
is carried out in the units used in the DXF file. For the determination of the accuracy of the approximation both
criteria are evaluated. Then, the criterion that leads to the more accurate approximation is used.
Internally, the following default values are used for the generic parameters:
’min_num_points’ = 20
’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modelled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modelling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:
• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.
Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameter
HALCON 8.0.2
670 CHAPTER 8. MATCHING-3D
Result
read_object_model_3d_dxf returns H_MSG_TRUE if all parameters are correct. If necessary, an exception is raised.
Parallelization Information
read_object_model_3d_dxf is processed completely exclusively without parallelization.
Possible Successors
affine_trans_object_model_3d, project_object_model_3d
Module
3D Metrology
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator trans_pose_shape_model_3d transforms the pose PoseIn into the pose PoseOut by using
the transformation direction specified in Transformation. In the majority of cases, the operator will be used
to transform a camera pose that is given with respect to the source coordinate system to a camera pose that refers
to the target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference coordinate system of the 3D shape model that is passed in ShapeModel3DID. The origin of the reference coordinate system lies at the reference point of the underlying 3D object model. The orientation of the reference coordinate system is determined by the reference orientation that was specified when creating the 3D shape model with
create_shape_model_3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that are
stored in the DXF file that was read by using read_object_model_3d_dxf.
If Transformation is set to ’ref_to_model’, it is assumed that PoseIn refers to the reference coordinate
system of the 3D shape model. The resulting output pose PoseOut in this case refers to the coordinate system of
the 3D object model.
If Transformation is set to ’model_to_ref’, it is assumed that PoseIn refers to the coordinate system of the
3D object model. The resulting output pose PoseOut in this case refers to the reference coordinate system of the
3D shape model.
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for GenParamNames
in the operator get_shape_model_3d_params.
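Since the two values of Transformation describe mutually inverse mappings, transforming a pose with ’ref_to_model’ and transforming the result with ’model_to_ref’ recovers the original pose. The following self-contained C sketch illustrates this relationship with 4×4 homogeneous rigid transforms. It is independent of the HALCON library; the row-major Mat4 representation and the function names are assumptions of this illustration only.

```c
#include <assert.h>
#include <math.h>
#include <string.h>

/* 4x4 homogeneous rigid transform, row-major. */
typedef struct { double m[4][4]; } Mat4;

static Mat4 mat4_mul(const Mat4 *a, const Mat4 *b) {
    Mat4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            r.m[i][j] = 0.0;
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a->m[i][k] * b->m[k][j];
        }
    return r;
}

/* Inverse of a rigid transform: R -> R^T, t -> -R^T t. */
static Mat4 mat4_rigid_inv(const Mat4 *a) {
    Mat4 r;
    memset(&r, 0, sizeof r);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a->m[j][i];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][3] -= a->m[j][i] * a->m[j][3];
    r.m[3][3] = 1.0;
    return r;
}

/* 'ref_to_model' applies the relative transform T between the two
   coordinate systems; 'model_to_ref' applies its inverse, so the two
   directions undo each other. */
static Mat4 trans_pose(const Mat4 *rel, const Mat4 *pose_in, const char *dir) {
    if (strcmp(dir, "ref_to_model") == 0)
        return mat4_mul(rel, pose_in);
    Mat4 inv = mat4_rigid_inv(rel);
    return mat4_mul(&inv, pose_in);
}
```

In HALCON itself, the relative transform between the two systems corresponds to the ’reference_pose’ that can be queried via get_shape_model_3d_params.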
Parameter
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; Htuple . Hlong
Handle of the 3D shape model.
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Pose to be transformed in the source system.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Direction of the transformation.
Default Value : "ref_to_model"
List of values : Transformation ∈ {"ref_to_model", "model_to_ref"}
. PoseOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Transformed 3D pose in the target system.
Result
If the parameters are valid, the operator trans_pose_shape_model_3d returns the value H_MSG_TRUE.
If necessary an exception is raised.
Parallelization Information
trans_pose_shape_model_3d is reentrant and processed without parallelization.
Possible Predecessors
find_shape_model_3d
Alternatives
hom_mat3d_translate, hom_mat3d_rotate
Module
3D Metrology
Morphology
9.1 Gray-Values
A rank filtering is calculated according to the following scheme: The indicated mask is put over the image to be filtered in such a way that the center of the mask touches every pixel once. For each of these pixels, all neighboring pixels covered by the mask are sorted in ascending order of their gray values. Each sorted sequence of gray values thus contains as many entries as the mask has points. The element at the rank given by ModePercent (rank values between 0 and 100, in percent) is selected and set as the resulting gray value in the corresponding result image.
If ModePercent is 0, the operator is equivalent to a gray value opening (gray_opening). If ModePercent is 50, the operator corresponds to the median filter applied twice (median_image). If ModePercent is 100, dual_rank calculates a gray value closing (gray_closing). Choosing parameter values between these extremes results in a smooth transition between these operators.
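The scheme above can be illustrated with a self-contained C sketch on a 1-D signal. It is independent of the HALCON library; the interpretation of dual_rank as a rank filter of rank ModePercent followed by the complementary rank 100 − ModePercent is an assumption of this sketch, chosen to be consistent with the opening/median/closing correspondences stated above.

```c
#include <assert.h>
#include <stdlib.h>

/* Mirrored border access, like the 'mirrored' margin mode. */
static int mirror(int i, int n) {
    while (i < 0 || i >= n) {
        if (i < 0) i = -i;
        if (i >= n) i = 2 * n - 2 - i;
    }
    return i;
}

static int cmp_byte(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

/* 1-D rank filter: for each pixel, sort the 2*half+1 values under the
   mask and pick the element at the given percent rank (0..100). */
static void rank_filter(const unsigned char *in, unsigned char *out,
                        int n, int half, int percent) {
    int size = 2 * half + 1;
    unsigned char *win = malloc(size);
    for (int i = 0; i < n; ++i) {
        for (int k = -half; k <= half; ++k)
            win[k + half] = in[mirror(i + k, n)];
        qsort(win, size, 1, cmp_byte);
        out[i] = win[(size - 1) * percent / 100];
    }
    free(win);
}

/* dual_rank sketch: rank 'percent' followed by the complementary rank.
   percent = 0 -> min then max (gray opening), 100 -> max then min
   (gray closing), 50 -> median applied twice. */
static void dual_rank_1d(const unsigned char *in, unsigned char *out,
                         int n, int half, int percent) {
    unsigned char *tmp = malloc(n);
    rank_filter(in, tmp, n, half, percent);
    rank_filter(tmp, out, n, half, 100 - percent);
    free(tmp);
}
```

Calling dual_rank_1d with percent = 0 and a 3-pixel mask yields the 1-D gray value opening of the signal: isolated bright pixels are suppressed, dark structures are kept.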
Parameter
read_image(&Image,"fabrik");
dual_rank(Image,&ImageOpening,"circle",10,10,"mirrored");
disp_image(ImageOpening,WindowHandle);
Complexity
For each pixel: O(√F · 10) with F = area of the structuring element.
Result
If the parameter values are correct the operator dual_rank returns the value H_MSG_TRUE. The
behavior in case of empty input (no input images available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
dual_rank is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Possible Predecessors
read_image
Possible Successors
threshold, dyn_threshold, sub_image, regiongrowing
Alternatives
rank_image, gray_closing, gray_opening, median_image
See also
gen_circle, gen_rectangle1, gray_erosion_rect, gray_dilation_rect, sigma_image
References
W. Eckstein, O. Munkelt: “Extracting Objects from Digital Terrain Model”, Remote Sensing and Reconstruction for Three-Dimensional Objects and Scenes, SPIE Symposium on Optical Science, Engineering, and Instrumentation, July 1995, San Diego.
Module
Foundation
gray_bothat applies a gray value bottom hat transformation to the input image Image with the structuring
element SE. The gray value bottom hat transformation of an image i with a structuring element s is defined as
bothat(i, s) = (i • s) − i,
i.e., the difference of the closing of the image with s and the image (see gray_closing). For the generation of
structuring elements, see read_gray_se.
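For a flat structuring element, the gray value closing reduces to a maximum filter followed by a minimum filter, so the bottom hat can be sketched in a few lines of plain C. This sketch is independent of the HALCON library; the 1-D signal, the clamped border handling, and the fixed buffer size (n ≤ 64) are simplifications of the illustration.

```c
#include <assert.h>

/* Border handled by clamping to the first/last pixel. */
static unsigned char clamp_at(const unsigned char *a, int n, int i) {
    if (i < 0) i = 0;
    if (i >= n) i = n - 1;
    return a[i];
}

/* Flat min/max filter of half-width 'half'. */
static void flat_filter(const unsigned char *in, unsigned char *out,
                        int n, int half, int want_max) {
    for (int i = 0; i < n; ++i) {
        unsigned char v = clamp_at(in, n, i - half);
        for (int k = -half + 1; k <= half; ++k) {
            unsigned char c = clamp_at(in, n, i + k);
            if (want_max ? c > v : c < v) v = c;
        }
        out[i] = v;
    }
}

/* bothat(i, s) = (i closing s) - i: closing = dilation (max) followed
   by erosion (min), then subtract the original image. Requires n <= 64. */
static void gray_bothat_1d(const unsigned char *in, unsigned char *out,
                           int n, int half) {
    unsigned char dil[64], clo[64];
    flat_filter(in, dil, n, half, 1);   /* dilation */
    flat_filter(dil, clo, n, half, 0);  /* erosion of the dilation */
    for (int i = 0; i < n; ++i)
        out[i] = (unsigned char)(clo[i] - in[i]);
}
```

With this definition, dark structures narrower than the structuring element produce a strong bottom-hat response, while flat areas map to zero.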
Parameter
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation and gray_erosion).
For the generation of structuring elements, see read_gray_se.
Parameter
Result
gray_closing returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an exception is raised.
Parallelization Information
gray_closing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
closing, gray_dilation, gray_erosion
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_rect and
gray_erosion_rect).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageClosing (output_object) . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Gray-closed image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_closing_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_closing_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_closing, gray_closing_shape
See also
closing_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation
i • s = (i ⊕ s) ⊖ s ,
i.e., a dilation of the image with s followed by an erosion with s (see gray_dilation_shape and
gray_erosion_shape).
Attention
Note that gray_closing_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter
The gray value dilation of an image i with a structuring element s at the pixel position x is defined as:
(i ⊕ s)(x) = max { i(x − z) + s(z) | z ∈ S }
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageDilation (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-dilated image.
Result
gray_dilation returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an
exception is raised.
Parallelization Information
gray_dilation is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
sub_image, gray_erosion
Alternatives
gray_dilation_rect
See also
gray_opening, gray_closing, dilation1, gray_skeleton
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the maximum gray values are to be calculated.
. ImageMax (output_object) . . . . . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Image containing the maximum gray values.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_dilation_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_dilation_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
See also
gray_skeleton
Module
Foundation
gray_dilation_shape calculates the maximum gray value of the input image Image within a mask of shape
MaskShape, vertical size MaskHeight and horizontal size MaskWidth for each image point. The resulting
image is returned in ImageMax.
If the parameters MaskHeight or MaskWidth are of the type integer and are even, they are changed to the next
larger odd value. In contrast, if at least one of the two parameters is of the type float, the input image Image is
transformed with both the next larger and the next smaller odd mask size, and the output image ImageMax is
interpolated from the two intermediate images. Therefore, note that gray_dilation_shape returns different
results for mask sizes of, e.g., 4 and 4.0!
In case of the values ’rhombus’ and ’octagon’ for the MaskShape control parameter, MaskHeight and
MaskWidth must be equal. The parameter value ’octagon’ for MaskShape denotes an equilateral octagonal
mask which is a suitable approximation for a circular structure. At the border of the image the gray values are
mirrored.
Attention
Note that gray_dilation_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter
gray_erosion applies a gray value erosion to the input image Image with the structuring element SE. The
gray value erosion of an image i with a structuring element s at the pixel position x is defined as:
(i ⊖ s)(x) = min { i(x + z) − s(z) | z ∈ S }
Here, S is the domain of the structuring element s, i.e., the pixels z where s(z) > 0 (see read_gray_se).
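A minimal plain-C sketch of this definition on a 1-D signal follows. It is independent of the HALCON library; border pixels whose mask would leave the signal are simply skipped, which is a simplification of this illustration.

```c
#include <assert.h>
#include <limits.h>

/* Gray value erosion with a non-flat structuring element s defined on
   the offsets z in [-half, half]:
   (i erosion s)(x) = min over z of i(x+z) - s(z).
   Only pixels whose mask lies fully inside the signal are computed. */
static void gray_erosion_1d(const int *img, const int *se, int half,
                            int n, int *out) {
    for (int x = half; x < n - half; ++x) {
        int best = INT_MAX;
        for (int z = -half; z <= half; ++z) {
            int v = img[x + z] - se[z + half];
            if (v < best) best = v;
        }
        out[x] = best;
    }
}
```

With a flat structuring element (all s(z) = 0), this reduces to a plain minimum filter.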
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-eroded image.
Result
gray_erosion returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an exception is raised.
Parallelization Information
gray_erosion is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Possible Successors
gray_dilation, sub_image
Alternatives
gray_erosion_rect
See also
gray_opening, gray_closing, erosion1, gray_skeleton
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion and gray_dilation).
For the generation of structuring elements, see read_gray_se.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / real
Input image.
. SE (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Structuring element.
. ImageOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / real
Gray-opened image.
Result
gray_opening returns H_MSG_TRUE if the structuring element is not the empty region. Otherwise, an exception is raised.
Parallelization Information
gray_opening is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
read_gray_se
Alternatives
dual_rank
See also
opening, gray_dilation, gray_erosion
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_rect and
gray_dilation_rect).
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Input image.
. ImageOpening (output_object) . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Gray-opened image.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_opening_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior
can be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_opening_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_opening, gray_opening_shape
See also
opening_rectangle1, gray_dilation_rect, gray_erosion_rect
Module
Foundation
i ◦ s = (i ⊖ s) ⊕ s ,
i.e., an erosion of the image with s followed by a dilation with s (see gray_erosion_shape and
gray_dilation_shape).
Attention
Note that gray_opening_shape requires considerably more time for mask sizes of type float than for mask
sizes of type integer. This is especially true for rectangular masks with different width and height!
Parameter
See also
gray_dilation_shape, gray_erosion_shape, opening
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int2 / int4 / real
Image for which the gray value range is to be calculated.
. ImageResult (output_object) . . . . . image(-array) ; Hobject * : byte / direction / cyclic / int2 / int4 / real
Image containing the gray value range.
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the filter mask.
Default Value : 11
Suggested values : MaskHeight ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskHeight ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskHeight)
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the filter mask.
Default Value : 11
Suggested values : MaskWidth ∈ {3, 5, 7, 9, 11, 13, 15}
Typical range of values : 3 ≤ MaskWidth ≤ 511 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : odd(MaskWidth)
Result
gray_range_rect returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
gray_range_rect is reentrant and automatically parallelized (on tuple level, channel level, domain level).
Alternatives
gray_dilation_rect, gray_erosion_rect, sub_image
Module
Foundation
tophat(i, s) = i − (i ◦ s),
i.e., the difference of the image and its opening with s (see gray_opening). For the generation of structuring
elements, see read_gray_se.
Parameter
can also be used as structuring elements. However, care should be taken not to use too large images, since the
runtime is proportional to the area of the image times the area of the structuring element.
Parameter
. SE (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Generated structuring element.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of the file containing the structuring element.
Result
read_gray_se returns H_MSG_TRUE if all parameters are correct. Otherwise, an exception is raised.
Parallelization Information
read_gray_se is reentrant and processed without parallelization.
Possible Successors
gray_erosion, gray_dilation, gray_opening, gray_closing, gray_tophat,
gray_bothat
Alternatives
gen_disc_se
See also
read_image, paint_region, paint_gray, crop_part
Module
Foundation
9.2 Region
threshold(Image,&Regions,128.0,255.0);
gen_circle(&Circle,128.0,128.0,16.0);
bottom_hat(Regions,Circle,&RegionBottomHat);
set_color(WindowHandle,"red");
disp_region(Regions,WindowHandle);
set_color(WindowHandle,"green");
disp_region(RegionBottomHat,WindowHandle);
Result
bottom_hat returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
intersection(Margin1,Margin2,&Intersections);
connection(Intersections,&Single);
T_area_center(Single,&Area,&Rows,&Columns);
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is
O(3 · √F) .
Result
boundary returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Close a region.
A closing operation is defined as a dilation followed by a Minkowski subtraction. By applying closing
to a region, larger structures remain mostly intact, while small gaps between adjacent regions and holes smaller
than StructElement are closed, and the regions’ boundaries are smoothed. All closing variants share the
property that separate regions are not merged, but remain separate objects. The position of StructElement is
meaningless, since a closing operation is invariant with respect to the choice of the reference point.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
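For a symmetric structuring element, the closing of a binary 1-D "region" is a dilation followed by an erosion with the same element: gaps narrower than the element are filled, larger gaps and the remaining structure are preserved. The following plain-C sketch is independent of the HALCON library and, unlike the HALCON operator, works on one flat array rather than per region; the border handling (outside treated as background for the dilation and as foreground for the erosion) and the buffer limit n ≤ 64 are assumptions of this illustration.

```c
#include <assert.h>

/* Binary dilation (OR) or erosion (AND) with a symmetric structuring
   element of half-width 'half' on an array of 0/1 values. */
static void dilate_erode_1d(const unsigned char *in, unsigned char *out,
                            int n, int half, int dilate) {
    for (int i = 0; i < n; ++i) {
        unsigned char v = dilate ? 0 : 1;
        for (int k = -half; k <= half; ++k) {
            int j = i + k;
            unsigned char c =
                (j >= 0 && j < n) ? in[j] : (unsigned char)(dilate ? 0 : 1);
            if (dilate) v |= c; else v &= c;
        }
        out[i] = v;
    }
}

/* closing = dilation followed by erosion with the same element:
   small gaps are closed, larger structures remain intact. n <= 64. */
static void closing_1d(const unsigned char *in, unsigned char *out,
                       int n, int half) {
    unsigned char tmp[64];
    dilate_erode_1d(in, tmp, n, half, 1);
    dilate_erode_1d(tmp, out, n, half, 0);
}
```

In the test below, the 1-pixel gap is closed while the 3-pixel gap (as wide as the element) is kept, and the result is a superset of the input.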
Attention
closing is applied to each input region separately. If gaps between different regions are to be closed, union1
or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
Example
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity for one region is:
O(2 · √F1 · √F2) .
Result
closing returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
closing_circle behaves analogously to closing, i.e., the regions’ boundaries are smoothed and holes
within a region which are smaller than the circular structuring element of radius Radius are closed. The
closing_circle operation is defined as a dilation followed by a Minkowski subtraction, both with the same
circular structuring element.
Attention
closing_circle is applied to each input region separately. If gaps between different regions are to be closed,
union1 or union2 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be closed.
. RegionClosing (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Closed regions.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 3.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Typical range of values : 0.5 ≤ Radius ≤ 511.5 (lin)
Minimum Increment : 1.0
Recommended Increment : 1.0
Example
Complexity
Let F1 be the area of the input region. Then the runtime complexity for one region is:
O(4 · √F1 · Radius) .
Result
closing_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
closing_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
See also
erosion_golay, dilation_golay, opening_golay, hit_or_miss_golay,
thinning_golay, thickening_golay, golay_elements
Module
Foundation
Result
closing_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
closing
See also
dilation_rectangle1, erosion_rectangle1, opening_rectangle1, gen_rectangle1
Module
Foundation
Dilate a region.
dilation1 dilates the input regions with a structuring element. By applying dilation1 to a region, its
boundary gets smoothed. In the process, the area of the region is enlarged. Furthermore, disconnected regions
may be merged. Such regions, however, remain logically distinct regions. The dilation is a set-theoretic region
operation. It uses the union operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is defined as the difference of the center of gravity of M and the vector m. Let t_{v_m}(R) denote the translation of the region R by the vector v_m. Then

dilation1(R, M) := ⋃_{m ∈ M} t_{−v_m}(R)
For each point m in M a translation of the region R is performed. The union of all these translations is the dilation
of R with M. dilation1 is similar to the operator minkowski_add1; the difference is that in dilation1
the structuring element is mirrored at the origin. The position of StructElement is meaningless, since the
displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
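The definition above can be sketched directly as a union of translated copies of R (plain C on a small binary grid, independent of the HALCON library; taking the center of gravity as an integer point is a simplification of this illustration).

```c
#include <assert.h>
#include <string.h>

#define W 8
#define H 8

/* dilation1(R, M) = union over all points m of M of R translated by
   m - cog(M): every structuring-element point produces one shifted
   copy of the region, and the copies are united. */
static void dilation1_grid(const unsigned char r[H][W],
                           const unsigned char m[H][W],
                           unsigned char out[H][W]) {
    int cx = 0, cy = 0, cnt = 0;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (m[y][x]) { cx += x; cy += y; ++cnt; }
    cx /= cnt; cy /= cnt;                  /* center of gravity of M */
    memset(out, 0, H * W);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (!m[y][x]) continue;
            int dx = x - cx, dy = y - cy;  /* translation for this m */
            for (int ry = 0; ry < H; ++ry)
                for (int rx = 0; rx < W; ++rx) {
                    int tx = rx + dx, ty = ry + dy;
                    if (r[ry][rx] && tx >= 0 && tx < W && ty >= 0 && ty < H)
                        out[ty][tx] = 1;
                }
        }
}
```

Because only the differences m − cog(M) enter the computation, shifting the structuring element leaves the result unchanged, which is exactly the position invariance noted above.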
Attention
A dilation always results in enlarged regions. Closely spaced regions which may touch or overlap as a result of
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator union1 has to be called first.
Parameter
Result
dilation1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
the dilation are still treated as two separate regions. If the desired behavior is to merge them into one region, the
operator union1 has to be called first.
Parameter
Result
dilation2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Complexity
Let F1 be the area of an input region. Then the runtime complexity for one region is:
O(2 · Radius · √F1) .
Result
dilation_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
minkowski_add1, minkowski_add2, expand_region, dilation1, dilation2,
dilation_rectangle1
See also
gen_circle, erosion_circle, closing_circle, opening_circle
Module
Foundation
O(3 · √F) .
Result
dilation_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
dilation_golay is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, union1, watersheds, class_ndim_norm
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
dilation1, dilation2, dilation_seq
See also
erosion_golay, opening_golay, closing_golay, hit_or_miss_golay, thinning_golay,
thickening_golay, golay_elements
Module
Foundation
threshold(Image,&Light,220.0,255.0);
dilation_rectangle1(Light,&Wide,50,50);
set_color(WindowHandle,"red");
disp_region(Wide,WindowHandle);
set_color(WindowHandle,"white");
disp_region(Light,WindowHandle);
Complexity
Let F1 be the area of an input region and H be the height of the rectangle. Then the runtime complexity for one region is:
O(√F1 · ld(H)) .
Result
dilation_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty
or no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
dilation_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Erode a region.
erosion1 erodes the input regions with a structuring element. By applying erosion1 to a region, its boundary
gets smoothed. In the process, the area of the region is reduced. Furthermore, connected regions may be split.
Such regions, however, remain logically one region. The erosion is a set-theoretic region operation. It uses the
intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is defined as the difference of the center of gravity of M and the vector m. Let t_{v_m}(R) denote the translation of the region R by the vector v_m. Then

erosion1(R, M) := ⋂_{m ∈ M} t_{−v_m}(R).
For each point m in M a translation of the region R is performed. The intersection of all these translations is
the erosion of R with M. erosion1 is similar to the operator minkowski_sub1; the difference is that in
erosion1 the structuring element is mirrored at the origin. The position of StructElement is meaningless,
since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionErosion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · Iterations)
Result
erosion1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
connection, reduce_domain, select_shape, area_center
Alternatives
minkowski_sub1, minkowski_sub2, erosion2, erosion_golay, erosion_seq
See also
transpose_region
Module
Foundation
O(√F1 · √F2 · Iterations)
Result
erosion2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Complexity
Let F1 be the area of an input region. Then the runtime complexity for one region is:

O(2 · Radius · √F1)
Result
erosion_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n.
Attention
Not all values of Rotation are valid for any Golay element. For some of the values of Rotation, the resulting
regions are identical to the input regions.
Parameter
Result
erosion_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
erosion_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Successors
reduce_domain, select_shape, area_center, connection
Alternatives
erosion1, minkowski_sub1
See also
gen_rectangle1
Module
Foundation
Result
erosion_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Possible Predecessors
threshold, regiongrowing, watersheds, class_ndim_norm
Possible Successors
connection, reduce_domain, select_shape, area_center
Alternatives
erosion_golay, erosion1, erosion2
See also
dilation_seq, hit_or_miss_seq, thinning_seq
Module
Foundation
P = ⋃_{i=1}^{n} (R ∘ M_i)

Q = ⋂_{i=1}^{n} (P • M_i)
Regions larger than the structuring elements are preserved, while small gaps are closed.
Parameter
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Alternatives
opening, closing, connection, select_shape
Module
Foundation
[Diagram: the structuring elements M1 to M8, shown as 3×3 patterns of the points 'x' and 'h'.]
Parameter
. StructElements (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Generated structuring elements.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of structuring element to generate.
Default Value : "noise"
List of values : Type ∈ {"noise"}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 1
Suggested values : Row ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 1
Suggested values : Column ∈ {0, 1, 10, 50, 100, 200, 300, 400}
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
Result
gen_struct_elements returns H_MSG_TRUE if all parameters are correct. Otherwise, an exception is
raised.
Parallelization Information
gen_struct_elements is reentrant and processed without parallelization.
Possible Successors
fitting, hit_or_miss, opening, closing, erosion2, dilation2
See also
golay_elements
Module
Foundation
[Diagrams: the Golay alphabet elements in all rotations — l(8,9) to l(14,15), m(0,1) to m(14,15), d(0,1) to d(14,15), f(0,1) to f(14,15), f2(0,1) to f2(14,15), k(0,1) to k(14,15), and c(0,1) to c(14,15) — shown as patterns of foreground points (•), background points (◦), and don't-care points (·).]
Parameter
Possible Successors
hit_or_miss
Alternatives
gen_region_points, gen_struct_elements, gen_region_polygon_filled
See also
dilation_golay, erosion_golay, opening_golay, closing_golay, hit_or_miss_golay,
thickening_golay
References
J. Serra: "‘Image Analysis and Mathematical Morphology"’. Volume I. Academic Press, 1982
Module
Foundation
Result
hit_or_miss returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Otherwise, an exception is raised.
Parallelization Information
hit_or_miss is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
golay_elements, gen_struct_elements, threshold, regiongrowing, connection,
union1, watersheds, class_ndim_norm
Possible Successors
difference, reduce_domain, select_shape, area_center, connection
Alternatives
hit_or_miss_golay, hit_or_miss_seq, erosion2, dilation2
See also
thinning, thickening, gen_region_points, gen_region_polygon_filled
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the hit-or-miss operation.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(6 · √F)
Result
hit_or_miss_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionHitMiss (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Result of the hit-or-miss operation.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
Complexity
Let F be the area of an input region, and R be the number of rotations. Then the runtime complexity for one region
is:
O(R · 6 · √F)
Result
hit_or_miss_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
For each point m in M a translation of the region R is performed. The union of all these translations is the
Minkowski addition of R with M. minkowski_add1 is similar to the operator dilation1; the difference
is that in dilation1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that an
empty region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Attention
A Minkowski addition always results in enlarged regions. Closely spaced regions which may touch or overlap as
a result of the dilation are still treated as two separate regions. If the desired behavior is to merge them into one
region, the operator union1 has to be called first.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be dilated.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkAdd (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Dilated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · Iterations)
Result
minkowski_add1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
minkowski_add2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
Erode a region.
minkowski_sub1 computes the Minkowski subtraction of the input regions with a structuring element. By
applying minkowski_sub1 to a region, its boundary gets smoothed. In the process, the area of the region is
reduced. Furthermore, connected regions may be split. Such regions, however, remain logically one region. The
Minkowski subtraction is a set-theoretic region operation. It uses the intersection operation.
Let M (StructElement) and R (Region) be two regions, where M is the structuring element and R is the
region to be processed. Furthermore, let m be a point in M. Then the displacement vector v_m = (dx, dy) is
defined as the difference of the center of gravity of M and the vector m. Let t_v(R) denote the translation of a
region R by a vector v. Then

minkowski_sub1(R, M) := ⋂_{m ∈ M} t_{v_m}(R)
For each point m in M a translation of the region R is performed. The intersection of all these translations is the
Minkowski subtraction of R with M. minkowski_sub1 is similar to the operator erosion1; the difference
is that in erosion1 the structuring element is mirrored at the origin. The position of StructElement is
meaningless, since the displacement vectors are determined with respect to the center of gravity of M .
The parameter Iterations determines the number of iterations which are to be performed with the structuring
element. The result of iteration n − 1 is used as input for iteration n. From the above definition it follows that the
maximum region is generated in case of an empty structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
Result
minkowski_sub1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be eroded.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element.
. RegionMinkSub (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Eroded regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Suggested values : Row ∈ {0, 10, 16, 32, 64, 100, 128}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Suggested values : Column ∈ {0, 10, 16, 32, 64, 100, 128}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of iterations.
Default Value : 1
Suggested values : Iterations ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 30, 40, 50}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · Iterations)
Result
minkowski_sub2 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
morph_hat returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Thinning of a region.
morph_skiz first performs a sequential thinning (thinning_seq) of the input region with the element ’l’ of
the Golay alphabet. The number of iterations is determined by the parameter Iterations1. Then a sequential
thinning of the resulting region with the element ’e’ of the Golay alphabet is carried out. The number of iterations
for this step is determined by the parameter Iterations2. The skiz operation serves to compute a kind of
skeleton of the input regions, and to prune the branches of the resulting skeleton. If the skiz operation is applied to
the complement of the region, the region and the resulting skeleton are separated.
If very large values or ’maximal’ are passed for Iterations1 or Iterations2, the processing stops if no
more changes occur.
Parameter
Result
morph_skiz returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Open a region.
/* simulation of opening */
my_opening(Hobject In, Hobject StructElement, Hobject *Out)
{
Hobject H;
erosion1(In,StructElement,&H,1);
minkowski_add1(H,StructElement,Out,1);
clear_obj(H);
}
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(2 · √F1 · √F2)
Result
opening returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
/* simulation of opening_circle */
my_opening_circle(Hobject In, double Radius, Hobject *Out)
{
  Hobject Circle, tmp;
  gen_circle(&Circle,100.0,100.0,Radius);
  erosion1(In,Circle,&tmp,1);
  minkowski_add1(tmp,Circle,Out,1);
  clear_obj(Circle); clear_obj(tmp);
}
closing_circle(Light,&H,2.5);
/* selecting the large regions */
opening_circle(H,&Large,20.5);
Complexity
Let F1 be the area of the input region. Then the runtime complexity for one region is:

O(4 · √F1 · Radius)
Result
opening_circle returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
Result
opening_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Parameter
Result
opening_rectangle1 returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or
no input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
The opening_seg operation is defined as a sequence of the following operators: erosion1, connection
and dilation1 (see example). Only one iteration is done in erosion1 and dilation1.
opening_seg serves to separate overlapping regions whose area of overlap is smaller than StructElement.
It should be noted that the resulting regions can overlap without actually merging (see expand_region).
opening_seg uses the center of gravity as the reference point of the structuring element.
Structuring elements (StructElement) can be generated with operators such as gen_circle,
gen_rectangle1, gen_rectangle2, gen_ellipse, draw_region, gen_region_polygon,
gen_region_points, etc.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be opened.
. StructElement (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Structuring element (position-invariant).
. RegionOpening (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Opened regions.
Example
/* Simulation of opening_seg */
my_opening_seg(Hobject Region, Hobject StructElement, Hobject *Opening)
{
Hobject H1,H2;
erosion1(Region,StructElement,&H1,1);
connection(H1,&H2);
dilation1(H2,StructElement,Opening,1);
clear_obj(H1); clear_obj(H2);
}
Complexity
Let F1 be the area of the input region, and F2 be the area of the structuring element. Then the runtime complexity
for one region is:

O(√F1 · √F2 · √F1)
Result
opening_seg returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Alternatives
erosion1, connection, dilation1
Module
Foundation
Result
pruning returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
thickening returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Add the result of a hit-or-miss operation to a region (using a Golay structuring element).
thickening_golay performs a thickening of the input regions using morphological operations and structuring
elements from the Golay alphabet. The operator first applies a hit-or-miss transformation to Region (cf.
hit_or_miss_golay), and then adds the detected points to the input region. The following structuring ele-
ments are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
Parameter
Result
thickening_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
O(Iterations · 6 · √F)
Result
thickening_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Result
thinning returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input region
can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Remove the result of a hit-or-miss operation from a region (using a Golay structuring element).
thinning_golay performs a thinning of the input regions using morphological operations and structuring
elements from the Golay alphabet. The operator first applies a hit-or-miss transformation to Region (cf.
hit_or_miss_golay), and then removes the detected points from the input region. The following structuring
elements are available:
’l’, ’m’, ’d’, ’c’, ’e’, ’i’, ’f’, ’f2’, ’h’, ’k’.
The rotation number Rotation determines which rotation of the element should be used. The Golay elements,
together with all possible rotations, are described with the operator golay_elements.
Attention
Not all values of Rotation are valid for any Golay element.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the thinning operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "h"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Rotation of the Golay element. Depending on the element, not all rotations are valid.
Default Value : 0
List of values : Rotation ∈ {0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15}
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(6 · √F)
Result
thinning_golay returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
’l’ Skeleton, similar to skeleton. This structuring element is also used in morph_skiz.
’m’ A skeleton with many “hairs” and multiple (parallel) branches.
’d’ A skeleton without multiple branches, but with many gaps, similar to morph_skeleton.
’c’ Uniform erosion of the region.
’e’ One pixel wide lines are shortened. This structuring element is also used in morph_skiz.
’i’ Isolated points are removed. (Only Iterations = 1 is useful.)
’f’ Y-junctions are eliminated. (Only Iterations = 1 is useful.)
’f2’ One pixel long branches and corners are removed. (Only Iterations = 1 is useful.)
’h’ A kind of inner boundary, which, however, is thicker than the result of boundary, is generated. (Only
Iterations = 1 is useful.)
’k’ Junction points are eliminated, but also new ones are generated.
The Golay elements, together with all possible rotations, are described with the operator golay_elements.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. RegionThin (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Result of the thinning operator.
. GolayElement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Structuring element from the Golay alphabet.
Default Value : "l"
List of values : GolayElement ∈ {"l", "m", "d", "c", "e", "i", "f", "f2", "h", "k"}
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations. For ’f’, ’f2’, ’h’ and ’i’ the only useful value is 1.
Default Value : 20
Suggested values : Iterations ∈ {"maximal", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 70, 100, 150,
200}
Typical range of values : 1 ≤ Iterations (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of an input region. Then the runtime complexity for one region is:
O(Iterations · 6 · √F)
Result
thinning_seq returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no input
region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
See also
hit_or_miss_seq, erosion_golay, difference, thinning_golay, thinning,
thickening_seq
Module
Foundation
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
OCR
10.1 Hyperboxes
close_all_ocrs ( )
T_close_all_ocrs ( )
744 CHAPTER 10. OCR
Example
HTuple OcrHandle,Class,Confidence;
long ocr_handle;
read_ocr("testnet",&ocr_handle);
/* image processing */
create_tuple(&OcrHandle,1);
set_i(OcrHandle,ocr_handle,0);
T_do_ocr_multi(Character,Image,OcrHandle,&Class,&Confidence);
close_ocr(ocr_handle);
Result
If the parameter OcrHandle is valid, the operator close_ocr returns the value H_MSG_TRUE. Otherwise an
exception will be raised.
Parallelization Information
close_ocr is reentrant and processed without parallelization.
Possible Predecessors
write_ocr_trainf
Possible Successors
read_ocr
Module
OCR/OCV
Parameter
. WidthPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the input layer of the network.
Default Value : 8
Suggested values : WidthPattern ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 1 ≤ WidthPattern ≤ 100
. HeightPattern (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the input layer of the network.
Default Value : 10
Suggested values : HeightPattern ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 1 ≤ HeightPattern ≤ 100
HTuple WidthPattern,HeightPattern,Interpolation,
Features,OcrHandle;
create_tuple(&WidthPattern,1);
set_i(WidthPattern,8,0);
create_tuple(&HeightPattern,1);
set_i(HeightPattern,10,0);
create_tuple(&Interpolation,1);
set_i(Interpolation,1,0);
create_tuple(&Features,1);
set_s(Features,"default",0);
create_tuple(&Character,26+26+10);
set_s(Character,"a",0);
set_s(Character,"b",1);
/* ... */
set_s(Character,"A",26);
set_s(Character,"B",27);
/* ... */
set_s(Character,"1",53);
set_s(Character,"2",54);
/* ... */
T_create_ocr_class_box(WidthPattern,HeightPattern,Interpolation,
Features,Character,&OcrHandle);
Result
If the parameters are correct, the operator create_ocr_class_box returns the value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
create_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
Possible Successors
traind_ocr_class_box, trainf_ocr_class_box, info_ocr_class_box, write_ocr,
ocr_change_char
Alternatives
create_ocr_class_mlp, create_ocr_class_svm
See also
affine_trans_image, ocr_change_char, moments_region_2nd_invar,
moments_region_2nd_rel_invar, moments_region_3rd_invar,
moments_region_central
Module
OCR/OCV
Classify characters.
The operator do_ocr_multi assigns a class to each character in Character. For gray value features, the
gray values within the smallest enclosing rectangles of the regions are used; they are taken from the parameter
Image. For each character the corresponding class will be returned in Class and a confidence value will be
returned in Confidence. The confidence value indicates the similarity between the input pattern and the
assigned character.
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Characters to be recognized.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Gray values for the characters.
. OcrHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_box ; (Htuple .) Hlong
ID of the OCR classifier.
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Class (name) of the characters.
Number of elements : Class = Character
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Confidence values of the characters.
Number of elements : Confidence = Character
Example
char Class[128];
double Confidence;
long ocr_handle,row1,col1,row2,col2;
read_ocr("testnet",&ocr_handle);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
for (i=0; i<num; i++) {
select_obj(Character,&SingleCharacter,i);
do_ocr_multi(SingleCharacter,Image,ocr_handle,Class,&Confidence);
smallest_rectangle1(SingleCharacter,&row1,&col1,&row2,&col2);
set_tposition(WindowHandle,row1,col1);
write_string(WindowHandle,Class);
}
Result
If the input parameters are set correctly, the operator do_ocr_multi returns the value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
do_ocr_multi is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocr, connection, sort_region
Alternatives
do_ocr_single
See also
write_ocr
Module
OCR/OCV
HTuple Classes,Confidences;
long ocr_handle;
HTuple OcrHandle;
read_ocr("testnet",&ocr_handle);
create_tuple(&OcrHandle,1);
set_i(OcrHandle,ocr_handle,0);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
for (i=0; i<num; i++) {
select_obj(Character,&SingleCharacter,i);
T_do_ocr_single(SingleCharacter,Image,
OcrHandle,&Classes,&Confidences);
printf("best = %s (%g)\n",
get_s(Classes,0),get_d(Confidences,0));
printf("second = %s (%g)\n\n",
get_s(Classes,1),get_d(Confidences,1));
}
Result
If the input parameters are correct, the operator do_ocr_single returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
do_ocr_single is reentrant and processed without parallelization.
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocr, connection, sort_region
Alternatives
do_ocr_multi
See also
write_ocr
Module
OCR/OCV
HTuple OcrHandle,WidthPattern,HeightPattern,Interpolation,
WidthMaxChar,HeightMaxChar,Features,Characters;
T_info_ocr_class_box(OcrHandle,&WidthPattern,&HeightPattern,&Interpolation,
&WidthMaxChar,&HeightMaxChar,&Features,&Characters);
printf("NetSize: %d x %d\n",get_i(WidthPattern,0),get_i(HeightPattern,0));
printf("MaxChar: %d x %d\n",get_i(WidthMaxChar,0),get_i(HeightMaxChar,0));
printf("Interpolation: %d\n",get_i(Interpolation,0));
printf("Features: ");
for (i=0; i<length_tuple(Features); i++)
printf("%s ",get_s(Features,i));
printf("\n");
printf("Characters: ");
for (i=0; i<length_tuple(Characters); i++)
printf(" %d %s\n",i,get_s(Characters,i));
Result
The operator info_ocr_class_box always returns H_MSG_TRUE.
Parallelization Information
info_ocr_class_box is reentrant and processed without parallelization.
Possible Predecessors
read_ocr, create_ocr_class_box
Possible Successors
write_ocr
Module
OCR/OCV
Result
If the number of characters in Character is identical to the number of characters of the network, the
operator ocr_change_char returns the value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
ocr_change_char is processed completely exclusively without parallelization.
Possible Predecessors
read_ocr
Possible Successors
do_ocr_multi, do_ocr_single, write_ocr
Module
OCR/OCV
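A minimal call sketch (the three-class character set is a made-up example, and the tuple-based T_ocr_change_char interface is an assumption; the number of names passed must equal the number of characters of the network):

```c
/* Sketch: rename the classes of an existing OCR classifier.
   "0"/"O"/"Q" is a hypothetical three-class character set;
   ocr_handle comes from read_ocr. */
Htuple OcrHandle, Characters;
create_tuple(&OcrHandle, 1);
set_i(OcrHandle, ocr_handle, 0);
create_tuple(&Characters, 3);
set_s(Characters, "0", 0);
set_s(Characters, "O", 1);
set_s(Characters, "Q", 2);
T_ocr_change_char(OcrHandle, Characters);
destroy_tuple(OcrHandle);
destroy_tuple(Characters);
```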
See also
write_ocr, do_ocr_multi, traind_ocr_class_box, trainf_ocr_class_box
Module
OCR/OCV
Parameter
char name[128];
double AvgConfidence;
long ocr_handle;
read_ocr("testnet",&ocr_handle);
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
for (i=0; i<num; i++) {
select_obj(Character,&SingleCharacter,i);
clear_window(WindowHandle);
disp_region(SingleCharacter,WindowHandle);
printf("class of character %d ?\n",i);
scanf("%s",name);
traind_ocr_class_box(SingleCharacter,Image,ocr_handle,name,&AvgConfidence);
}
Result
If the parameters are correct, the operator traind_ocr_class_box returns the value H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
traind_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_box, read_ocr
Possible Successors
traind_ocr_class_box, write_ocr, do_ocr_multi, do_ocr_single
Alternatives
trainf_ocr_class_box
Module
OCR/OCV
The operator trainf_ocr_class_box trains the classifier OcrHandle with the indicated training files. Any
number of files can be specified. The parameter AvgConfidence provides information about the success of
the training: it contains the average confidence of the trained characters, measured by a re-classification. The
confidence of mismatched characters is set to 0 (thus, the average confidence decreases significantly).
Attention
The names of the characters in the file must fit the network.
Parameter
Result
If the file name is correct and the data fit the network, the operator trainf_ocr_class_box returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
trainf_ocr_class_box is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_box, read_ocr
Possible Successors
traind_ocr_class_box, write_ocr, do_ocr_multi, do_ocr_single
Alternatives
traind_ocr_class_box
Module
OCR/OCV
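A short sketch of file-based training ("characters.trf" is a hypothetical training file written with write_ocr_trainf; the simple-mode signature with the handle, file name, and confidence output is assumed):

```c
/* Sketch: train a box classifier from a training file and inspect
   the average confidence of the re-classified training characters. */
double AvgConfidence;
long ocr_handle;
read_ocr("testnet",&ocr_handle);
trainf_ocr_class_box(ocr_handle,"characters.trf",&AvgConfidence);
printf("average confidence after training: %g\n",AvgConfidence);
write_ocr(ocr_handle,"testnet");
close_ocr(ocr_handle);
```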
Parameter
10.2 Lexica
clear_all_lexica ( )
T_clear_all_lexica ( )
Clear a lexicon.
clear_lexicon clears a lexicon and releases its resources.
Parameter
Parallelization Information
clear_lexicon is processed completely exclusively without parallelization.
See also
create_lexicon
Module
OCR/OCV
Parameter
. Name (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Unique name for the new lexicon.
Default Value : "lex1"
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; const char *
Name of a text file containing words for the new lexicon.
Default Value : "words.txt"
. LexiconHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . lexicon ; Hlong *
Handle of the lexicon.
Parallelization Information
import_lexicon is processed completely exclusively without parallelization.
Possible Successors
do_ocr_word_mlp, do_ocr_word_svm
Alternatives
create_lexicon
See also
lookup_lexicon, suggest_lexicon
Module
OCR/OCV
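A minimal usage sketch based on the default values above ("words.txt" is a hypothetical word list, one word per line):

```c
/* Sketch: create a lexicon from a text file of words and release
   it again once it is no longer needed. */
long lexicon_handle;
import_lexicon("lex1","words.txt",&lexicon_handle);
/* ... use the lexicon, e.g. with do_ocr_word_mlp ... */
clear_lexicon(lexicon_handle);
```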
Parameter
10.3 Neural-Nets
clear_all_ocr_class_mlp ( )
T_clear_all_ocr_class_mlp ( )
’height’ Height of the character before scaling the character to the standard size (not scale-invariant, see
smallest_rectangle1, 1 feature).
’zoom_factor’ Difference in size between the character and the values of WidthCharacter and
HeightCharacter (not scale-invariant, 1 feature).
’foreground’ Fraction of pixels in the foreground (1 feature).
’foreground_grid_9’ Fraction of pixels in the foreground in a 3 × 3 grid within the smallest enclosing rectangle of
the character (9 features).
’foreground_grid_16’ Fraction of pixels in the foreground in a 4 × 4 grid within the smallest enclosing rectangle
of the character (16 features).
’compactness’ Compactness of the character (see compactness, 1 feature).
’convexity’ Convexity of the character (see convexity, 1 feature).
’moments_region_2nd_invar’ Normalized 2nd moments of the character (see
moments_region_2nd_invar, 3 features).
’moments_region_2nd_rel_invar’ Normalized 2nd relative moments of the character (see
moments_region_2nd_rel_invar, 2 features).
’moments_region_3rd_invar’ Normalized 3rd moments of the character (see moments_region_3rd_invar,
4 features).
’moments_central’ Normalized central moments of the character (see moments_region_central, 4 features).
’moments_gray_plane’ Normalized gray value moments and the angle of the gray value plane (see
moments_gray_plane, 4 features).
’phi’ Sine and cosine of the orientation (angle) of the character (see elliptic_axis, 2 features).
’num_connect’ Number of connected components (see connect_and_holes, 1 feature).
’num_holes’ Number of holes (see connect_and_holes, 1 feature).
’cooc’ Values of the binary cooccurrence matrix (see gen_cooc_matrix, 8 features).
’num_runs’ Number of runs in the region normalized by the area (1 feature).
’chord_histo’ Frequency of the runs per row (HeightCharacter features).
After the classifier has been created, it is trained using trainf_ocr_class_mlp. After this, the classifier can
be saved using write_ocr_class_mlp. Alternatively, the classifier can be used immediately after training to
classify characters using do_ocr_single_class_mlp or do_ocr_multi_class_mlp.
HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section
’Pretrained OCR Fonts’). These pretrained OCR classifiers can be read directly with read_ocr_class_mlp
and make it possible to read a wide variety of different fonts without the need to train an OCR classifier. It is
therefore recommended to check first whether one of the pretrained OCR classifiers can be used successfully; if
so, it is not necessary to create and train an OCR classifier.
A comparison of the MLP and the support vector machine (SVM) (see create_ocr_class_svm) typically
shows that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better
recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical
applications. Please note that this guideline assumes optimal tuning of the parameters.
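The create/train workflow described above can be sketched as follows. This is a hedged sketch: the trailing parameters (NumHidden, Preprocessing, NumComponents, RandSeed) and their values are assumptions based on create_class_mlp and are not shown in this excerpt.

```c
/* Sketch: create an MLP-based OCR classifier for the ten digits.
   Parameter order and values are illustrative assumptions. */
Htuple Width,Height,Interpolation,Features,Characters;
Htuple NumHidden,Preprocessing,NumComponents,RandSeed,OCRHandle;
char digit[2];
int i;
create_tuple(&Width,1);          set_i(Width,8,0);
create_tuple(&Height,1);         set_i(Height,10,0);
create_tuple(&Interpolation,1);  set_s(Interpolation,"constant",0);
create_tuple(&Features,1);       set_s(Features,"default",0);
create_tuple(&Characters,10);
for (i=0; i<10; i++) {
  digit[0] = (char)('0'+i); digit[1] = '\0';
  set_s(Characters,digit,i);
}
create_tuple(&NumHidden,1);      set_i(NumHidden,20,0);
create_tuple(&Preprocessing,1);  set_s(Preprocessing,"normalization",0);
create_tuple(&NumComponents,1);  set_i(NumComponents,10,0);
create_tuple(&RandSeed,1);       set_i(RandSeed,42,0);
T_create_ocr_class_mlp(Width,Height,Interpolation,Features,Characters,
                       NumHidden,Preprocessing,NumComponents,RandSeed,
                       &OCRHandle);
```

The classifier is then trained with trainf_ocr_class_mlp and saved with write_ocr_class_mlp.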
Parameter
Result
If the parameters are valid, the operator create_ocr_class_mlp returns the value H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
create_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_mlp
Alternatives
create_ocr_class_svm, create_ocr_class_box
See also
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, clear_ocr_class_mlp,
create_class_mlp, train_class_mlp, classify_class_mlp
Module
OCR/OCV
Result
If the parameters are valid, the operator do_ocr_multi_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
do_ocr_multi_class_mlp is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
trainf_ocr_class_mlp, read_ocr_class_mlp
Alternatives
do_ocr_word_mlp, do_ocr_single_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV
Alternatives
do_ocr_multi_class_mlp
See also
create_ocr_class_mlp, classify_class_mlp
Module
OCR/OCV
Parameter
. Character (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input character.
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Htuple . Hlong
Handle of the OCR classifier.
. Transform (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should the feature vector be transformed with the preprocessing?
Default Value : "true"
List of values : Transform ∈ {"true", "false"}
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Feature vector of the character.
Result
If the parameters are valid, the operator get_features_ocr_class_mlp returns the value H_MSG_TRUE.
If necessary an exception handling is raised.
Parallelization Information
get_features_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp
See also
create_ocr_class_mlp
Module
OCR/OCV
Result
If the parameters are valid, the operator get_params_ocr_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
get_params_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_mlp, read_ocr_class_mlp
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp
See also
trainf_ocr_class_mlp, get_params_class_mlp
Module
OCR/OCV
Compute the information content of the preprocessed feature vectors of an OCR classifier.
get_prep_info_ocr_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’prin-
cipal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created with
create_ocr_class_mlp. The preprocessing methods are described with create_class_mlp. The in-
formation content is derived from the variations of the transformed components of the feature vector, i.e., it is
computed solely based on the training data, independent of any error rate on the training data. The information
content is computed for all relevant components of the transformed feature vectors (NumInput for
’principal_components’ and min(NumOutput − 1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n
components is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains the
sums of the first n elements of InformationCont. To use get_prep_info_ocr_class_mlp, a sufficient
number of samples must be stored in the training files given by TrainingFile (see write_ocr_trainf).
InformationCont and CumInformationCont can be used to decide how many components of
the transformed feature vectors contain relevant information. An often used criterion is to require that
the transformed data must represent x% (e.g., 90%) of the total data. This can be decided easily
from the first value of CumInformationCont that lies above x%. The number thus obtained
can be used as the value for NumComponents in a new call to create_ocr_class_mlp. The
call to get_prep_info_ocr_class_mlp already requires the creation of a classifier, and hence
the setting of NumComponents in create_ocr_class_mlp to an initial value. However, if
get_prep_info_ocr_class_mlp is called it is typically not known how many components are relevant,
and hence how to set NumComponents in this call. Therefore, the following two-step approach should
typically be used to select NumComponents: In a first step, a classifier with the maximum number for
NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are saved in a training file using write_ocr_trainf.
Subsequently, get_prep_info_ocr_class_mlp is used to determine the information content of the
components, and with this NumComponents. After this, a new classifier with the desired number of components is
created, and the classifier is trained with trainf_ocr_class_mlp.
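The x% criterion above can be captured by a small pure-C helper (independent of HALCON; the array would be filled from InformationCont as returned by this operator, and the function name is a hypothetical one chosen for illustration):

```c
#include <stddef.h>

/* Return the smallest number of leading components whose cumulative
   information content reaches the requested fraction (e.g. 0.9 for
   90%); falls back to all components if the fraction is never met. */
int select_num_components(const double *information_cont, size_t n,
                          double fraction)
{
    double cum = 0.0;
    size_t i;
    for (i = 0; i < n; i++) {
        cum += information_cont[i];   /* running CumInformationCont */
        if (cum >= fraction)
            return (int)(i + 1);      /* a count, not a 0-based index */
    }
    return (int)n;
}
```

The result can then be used as NumComponents in a new call to create_ocr_class_mlp.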
Parameter
Result
If the parameters are valid, the operator get_prep_info_ocr_class_mlp returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
get_prep_info_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_mlp, create_ocr_class_mlp
Module
OCR/OCV
HALCON provides a number of pretrained OCR classifiers (see Solution Guide I, chapter ’OCR’, section
’Pretrained OCR Fonts’). These pretrained OCR classifiers make it possible to read a wide variety of different fonts
without the need to train an OCR classifier. Note that the pretrained OCR classifiers were trained with symbols
that are printed dark on light.
Parameter
Result
If the parameters are valid, the operator trainf_ocr_class_mlp returns the value H_MSG_TRUE. If necessary an exception handling is raised.
trainf_ocr_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_mlp, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_mlp, do_ocr_multi_class_mlp, write_ocr_class_mlp
Alternatives
read_ocr_class_mlp
See also
train_class_mlp
Module
OCR/OCV
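A training sketch in the tuple interface (OCRHandle comes from create_ocr_class_mlp; the control parameters MaxIterations, WeightTolerance, ErrorTolerance and the file names are illustrative assumptions based on train_class_mlp):

```c
/* Sketch: train an MLP-based OCR classifier from a training file
   and save it to disk. Values are illustrative. */
Htuple TrainFile,MaxIterations,WeightTolerance,ErrorTolerance;
Htuple Error,ErrorLog,FileName;
create_tuple(&TrainFile,1);       set_s(TrainFile,"digits.trf",0);
create_tuple(&MaxIterations,1);   set_i(MaxIterations,200,0);
create_tuple(&WeightTolerance,1); set_d(WeightTolerance,1.0,0);
create_tuple(&ErrorTolerance,1);  set_d(ErrorTolerance,0.01,0);
T_trainf_ocr_class_mlp(OCRHandle,TrainFile,MaxIterations,
                       WeightTolerance,ErrorTolerance,&Error,&ErrorLog);
create_tuple(&FileName,1);        set_s(FileName,"digits.omc",0);
T_write_ocr_class_mlp(OCRHandle,FileName);
```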
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_mlp ; Hlong
Handle of the OCR classifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name.
Result
If the parameters are valid, the operator write_ocr_class_mlp returns the value H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
write_ocr_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
trainf_ocr_class_mlp
Possible Successors
clear_ocr_class_mlp
See also
create_ocr_class_mlp, read_ocr_class_mlp, write_class_mlp, read_class_mlp
Module
OCR/OCV
10.4 Support-Vector-Machines
clear_all_ocr_class_svm ( )
T_clear_all_ocr_class_svm ( )
clear_ocr_class_svm clears the OCR classifier given by OCRHandle and frees all memory required for the
classifier. After calling clear_ocr_class_svm, the classifier can no longer be used. The handle OCRHandle
becomes invalid.
Parameter
. OCRHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ocr_svm ; Hlong
Handle of the OCR classifier.
Result
If OCRHandle is valid the operator clear_ocr_class_svm returns the value H_MSG_TRUE. If necessary,
an exception handling is raised.
Parallelization Information
clear_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
do_ocr_single_class_svm, do_ocr_multi_class_svm
See also
create_ocr_class_svm, read_ocr_class_svm, write_ocr_class_svm,
trainf_ocr_class_svm
Module
OCR/OCV
After the classifier has been created, it is trained using trainf_ocr_class_svm. After this, the classifier can
be saved using write_ocr_class_svm. Alternatively, the classifier can be used immediately after training to
classify characters using do_ocr_single_class_svm or do_ocr_multi_class_svm.
A comparison of SVM and the multi-layer perceptron (MLP) (see create_ocr_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
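Using the default values from the parameter listing below, a creation call might look as follows (a sketch; the parameter order and the trailing NumComponents parameter, which is cut off in this excerpt, are assumptions):

```c
/* Sketch: create an SVM-based OCR classifier for the ten digits
   using default-like parameter values. */
Htuple W,H,Interpolation,Features,Characters;
Htuple KernelType,KernelParam,Nu,Mode,Preprocessing,NumComponents;
Htuple OCRHandle;
char digit[2];
int i;
create_tuple(&W,1);             set_i(W,8,0);
create_tuple(&H,1);             set_i(H,10,0);
create_tuple(&Interpolation,1); set_s(Interpolation,"constant",0);
create_tuple(&Features,1);      set_s(Features,"default",0);
create_tuple(&Characters,10);
for (i=0; i<10; i++) {
  digit[0] = (char)('0'+i); digit[1] = '\0';
  set_s(Characters,digit,i);
}
create_tuple(&KernelType,1);    set_s(KernelType,"rbf",0);
create_tuple(&KernelParam,1);   set_d(KernelParam,0.02,0);
create_tuple(&Nu,1);            set_d(Nu,0.05,0);
create_tuple(&Mode,1);          set_s(Mode,"one-versus-one",0);
create_tuple(&Preprocessing,1); set_s(Preprocessing,"normalization",0);
create_tuple(&NumComponents,1); set_i(NumComponents,10,0);
T_create_ocr_class_svm(W,H,Interpolation,Features,Characters,
                       KernelType,KernelParam,Nu,Mode,Preprocessing,
                       NumComponents,&OCRHandle);
```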
Parameter
. WidthCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 8
Suggested values : WidthCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ WidthCharacter ≤ 20
. HeightCharacter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default Value : 10
Suggested values : HeightCharacter ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20}
Typical range of values : 4 ≤ HeightCharacter ≤ 20
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Interpolation mode for the zooming of the characters.
Default Value : "constant"
List of values : Interpolation ∈ {"none", "constant", "weighted"}
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char *
Features to be used for classification.
Default Value : "default"
List of values : Features ∈ {"default", "pixel", "pixel_invar", "pixel_binary", "gradient_8dir",
"projection_horizontal", "projection_horizontal_invar", "projection_vertical", "projection_vertical_invar",
"ratio", "anisometry", "width", "height", "zoom_factor", "foreground", "foreground_grid_9",
"foreground_grid_16", "compactness", "convexity", "moments_region_2nd_invar",
"moments_region_2nd_rel_invar", "moments_region_3rd_invar", "moments_central",
"moments_gray_plane", "phi", "num_connect", "num_holes", "cooc", "num_runs", "chord_histo"}
. Characters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
All characters of the character set to be read.
Default Value : ["0","1","2","3","4","5","6","7","8","9"]
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
The kernel type.
Default Value : "rbf"
List of values : KernelType ∈ {"linear", "rbf", "polynomial_inhomogeneous",
"polynomial_homogeneous"}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Additional parameter for the kernel function.
Default Value : 0.02
Suggested values : KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Regularization constant of the SVM.
Default Value : 0.05
Suggested values : Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction : (Nu > 0.0) ∧ (Nu < 1.0)
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
The mode of the SVM.
Default Value : "one-versus-one"
List of values : Mode ∈ {"one-versus-all", "one-versus-one"}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of preprocessing used to transform the feature vectors.
Default Value : "normalization"
List of values : Preprocessing ∈ {"none", "normalization", "principal_components",
"canonical_variates"}
Result
If the parameters are valid the operator create_ocr_class_svm returns the value H_MSG_TRUE. If necessary, an exception handling is raised.
Parallelization Information
create_ocr_class_svm is processed completely exclusively without parallelization.
Possible Successors
trainf_ocr_class_svm
Alternatives
create_ocr_class_mlp, create_ocr_class_box
See also
do_ocr_single_class_svm, do_ocr_multi_class_svm, clear_ocr_class_svm,
create_class_svm, train_class_svm, classify_class_svm
Module
OCR/OCV
do_ocr_multi_class_svm computes the best class for each of the characters given by the regions
Character and the gray values Image with the SVM-based OCR classifier OCRHandle and returns the classes
in Class. In contrast to do_ocr_single_class_svm, do_ocr_multi_class_svm can classify multiple
characters in one call, and therefore typically is faster than a loop that uses do_ocr_single_class_svm
to classify single characters. However, do_ocr_multi_class_svm can only return the best class
of each character. Before calling do_ocr_multi_class_svm, the classifier must be trained with
trainf_ocr_class_svm.
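A classification sketch in the tuple interface (Character, Image, and OCRHandle are prepared as in the earlier examples of this chapter):

```c
/* Sketch: classify all segmented character regions in one call
   with an SVM-based OCR classifier. */
Htuple Classes;
int i;
T_do_ocr_multi_class_svm(Character,Image,OCRHandle,&Classes);
for (i=0; i<length_tuple(Classes); i++)
  printf("character %d: %s\n",i,get_s(Classes,i));
```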
Parameter
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
get_prep_info_ocr_class_svm computes the information content of the training vectors that have
been transformed with the preprocessing given by Preprocessing. Preprocessing can be set to
’principal_components’ or ’canonical_variates’. The OCR classifier OCRHandle must have been created
with create_ocr_class_svm. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vector,
i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vectors
(NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for
’canonical_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_ocr_class_svm, a sufficient number of samples must be stored in the training files given
by TrainingFile (see write_ocr_trainf).
InformationCont and CumInformationCont can be used to decide how many components of
the transformed feature vectors contain relevant information. An often used criterion is to require that
the transformed data must represent x% (e.g., 90%) of the total data. This can be decided eas-
ily from the first value of CumInformationCont that lies above x%. The number thus obtained
can be used as the value for NumComponents in a new call to create_ocr_class_svm. The
call to get_prep_info_ocr_class_svm already requires the creation of a classifier, and hence
the setting of NumComponents in create_ocr_class_svm to an initial value. However, if
get_prep_info_ocr_class_svm is called it is typically not known how many components are relevant, and
hence how to set NumComponents in this call. Therefore, the following two-step approach should typically be
used to select NumComponents: In a first step, a classifier with the maximum number for NumComponents is
created (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for
’canonical_variates’). Then, the training samples are saved in a training file using write_ocr_trainf. Subsequently,
get_prep_info_ocr_class_svm is used to determine the information content of the components, and with
this NumComponents. After this, a new classifier with the desired number of components is created, and the
classifier is trained with trainf_ocr_class_svm.
Parameter
Result
If the parameters are valid the operator get_prep_info_ocr_class_svm returns the value H_MSG_TRUE.
If necessary, an exception handling is raised.
get_prep_info_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if
Preprocessing = ’canonical_variates’ is used. This typically indicates that not enough training samples
have been stored for each class.
Parallelization Information
get_prep_info_ocr_class_svm is reentrant and processed without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
clear_ocr_class_svm, create_ocr_class_svm
Module
OCR/OCV
Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
The operator get_support_vector_ocr_class_svm maps support vectors of a trained SVM-based
OCR classifier (given in OCRHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a number
between 0 and the number of support vectors minus 1; this number can be determined with
get_support_vector_num_ocr_class_svm. The index of this SV in the training data
is returned in Index. get_support_vector_ocr_class_svm can, for example, be used to visualize
the support vectors. To do so, the train file that has been used to train the SVM must be read with
read_ocr_trainf. The value returned in Index must be incremented by 1 and can then be used to select
the support vectors with select_obj from the training characters. If more than one train file has been used
in trainf_ocr_class_svm, Index behaves as if all train files had been merged into one train file with
concat_ocr_trainf.
Parameter
MaxError have the same meaning as in reduce_class_svm and are described there. Please note that
classification time can also be significantly reduced with a preprocessing step in create_ocr_class_svm, which
possibly introduces fewer errors.
Parameter
Parameter
Result
If the parameters are valid the operator trainf_ocr_class_svm returns the value H_MSG_TRUE. If
necessary, an exception is raised.
trainf_ocr_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Parallelization Information
trainf_ocr_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_ocr_class_svm, write_ocr_trainf, append_ocr_trainf,
write_ocr_trainf_image
Possible Successors
do_ocr_single_class_svm, do_ocr_multi_class_svm, write_ocr_class_svm
Alternatives
read_ocr_class_svm
See also
train_class_svm
Module
OCR/OCV
FileName. write_ocr_class_svm is typically called after the classifier has been trained with
trainf_ocr_class_svm. The classifier can be read with read_ocr_class_svm.
Parameter
10.5 Tools
T_segment_characters ( const Hobject Region, const Hobject Image,
Hobject *ImageForeground, Hobject *RegionForeground,
const Htuple Method, const Htuple EliminateLines,
const Htuple DotPrint, const Htuple StrokeWidth,
const Htuple CharWidth, const Htuple CharHeight,
const Htuple ThresholdOffset, const Htuple Contrast,
Htuple *UsedThreshold )
’local_contrast_best’ This method extracts text that differs locally from the background. Therefore, it is suited
for images with inhomogeneous illumination. The enhancement of the text borders leads to a more accurate
determination of the outline of the text, which is especially useful if the background is highly textured.
The parameter Contrast defines the minimum contrast, i.e., the minimum gray value difference between
symbols and background.
’local_auto_shape’ The minimum contrast is estimated automatically such that the number of very small regions
is reduced. This method is especially suitable for noisy images. The parameter ThresholdOffset can
be used to adjust the threshold. Let g(x, y) be the gray value at position (x, y) in the input Image. The
threshold condition is determined by:
g(x, y) ≤ UsedThreshold + ThresholdOffset.
Set EliminateLines to ’true’ if the extraction of characters is disturbed by lines that are horizontal or vertical
with respect to the lines of text. The elimination is influenced by the maximum of CharWidth and the maximum
of CharHeight. For further information see the description of these parameters.
DotPrint: Should be set to ’true’ if dot-printed characters should be read, otherwise to ’false’.
StrokeWidth: Specifies the stroke width of the text. It is used to calculate the mask sizes that are used internally
to determine the characters. These mask sizes are also influenced by the parameters DotPrint, the average
CharWidth, and the average CharHeight.
CharWidth: This can be a tuple with up to three values. The first value is the average width of a character. The
second is the minimum width of a character and the third is the maximum width of a character. If the minimum is
not set or equal to -1, the operator automatically sets this value depending on the average CharWidth. The same
applies if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character. The
second is the minimum height of a character and the third is the maximum height of a character. If the minimum is
not set or equal to -1, the operator automatically sets this value depending on the average CharHeight. The same
applies if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
ThresholdOffset: This parameter can be used to adjust the threshold, which is used when the segmentation
method ’local_auto_shape’ is chosen.
Contrast: Defines the minimum contrast between the text and the background. This parameter is used if the
segmentation method ’local_contrast_best’ is selected.
UsedThreshold: After the execution, this parameter returns the threshold used to segment the characters.
ImageForeground returns the image that was internally used for the segmentation.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Area in the image where the text lines are located.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ImageForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject *
Image used for the segmentation.
. RegionForeground (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region of characters.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method to segment the characters.
Default Value : "local_auto_shape"
List of values : Method ∈ {"local_contrast_best", "local_auto_shape"}
. EliminateLines (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Eliminate horizontal and vertical lines?
Default Value : "false"
List of values : EliminateLines ∈ {"true", "false"}
. DotPrint (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Should dot print characters be detected?
Default Value : "false"
List of values : DotPrint ∈ {"true", "false"}
. StrokeWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Stroke width of a character.
Default Value : "medium"
List of values : StrokeWidth ∈ {"ultra_light", "light", "medium", "bold"}
. CharWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Width of a character.
Default Value : 25
Typical range of values : 1 ≤ CharWidth
Restriction : CharWidth ≥ 1
Result
If the input parameters are set correctly, the operator segment_characters returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
segment_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
text_line_orientation
Possible Successors
select_characters, connection
Alternatives
threshold
Module
Foundation
CharWidth and the minimum CharHeight. But some parameters affect the result of a certain step in particular.
A closer description follows below. With the parameter StopAfter you can terminate after a specified step.
In the first step, ’step1_select_candidates’, CharWidth and the CharHeight are used to select the candidates.
The result of this step is also affected by ClutterSizeMax.
In the next step, ’step2_partition_characters’, the parameter PartitionMethod and the parameter
PartitionLines influence the result.
Step three, ’step3_connect_fragments’, uses the parameters ConnectFragments and DotPrint. If dot-printed
characters have to be detected and some dots are not connected to the character, there are two ways to
overcome this problem: You can increase the FragmentDistance and/or decrease the StrokeWidth.
In the last step, ’step4_select_characters’, the result is affected by the parameters DiacriticMarks and
Punctuation.
DotPrint: Should be set to ’true’ if dot-printed characters should be read, otherwise to ’false’.
StrokeWidth: Specifies the stroke width of the text. It is used to calculate the mask sizes that are used internally
to determine the characters. These mask sizes are also influenced by the parameters DotPrint, the average
CharWidth, and the average CharHeight.
CharWidth: This can be a tuple with up to three values. The first value is the average width of a character. The
second is the minimum width of a character and the third is the maximum width of a character. If the minimum is
not set or equal to -1, the operator automatically sets this value depending on the average CharWidth. The same
applies if the maximum is not set. Some examples:
[10] sets the average character width to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character width to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character width to 10, the minimum to 5, and the maximum to 20.
CharHeight: This can be a tuple with up to three values. The first value is the average height of a character. The
second is the minimum height of a character and the third is the maximum height of a character. If the minimum
is not set or equal to -1, the operator automatically sets this value depending on the average CharHeight. The same
applies if the maximum is not set. Some examples:
[10] sets the average character height to 10, the minimum and maximum are calculated by the operator.
[10,-1,20] sets the average character height to 10, the minimum value is calculated by the system, and the maximum
is set to 20.
[10,5,20] sets the average character height to 10, the minimum to 5, and the maximum to 20.
Punctuation: Set this parameter to ’true’ if the operator also has to detect punctuation marks (e.g. .,:’‘"),
otherwise they will be suppressed.
DiacriticMarks: Set this parameter to ’true’ if the text in your application contains diacritic marks (e.g. â,é,ö),
or to ’false’ to suppress them.
PartitionMethod: If neighboring characters are printed close to each other, they may be partly merged. With
this parameter you can specify the method to partition such characters. The possible values are ’none’, which
means no partitioning is performed. ’fixed_width’ means that the partitioning assumes a constant character width.
If the width of the extracted region is well above the average CharWidth, the region is split into parts that have
the given average CharWidth. The partitioning starts at the left border of the region. ’variable_width’ means
that the characters are partitioned at the position where they have the thinnest connection. This method can be
selected for characters that are printed with a variable-width font or if many consecutive characters are extracted as
one symbol. It could be helpful to call text_line_slant and/or use text_line_orientation before
calling select_characters.
PartitionLines: If some text lines or some characters of different text lines are connected, set this parameter
to ’true’.
FragmentDistance: This parameter influences the connection of character fragments. If too much is
connected, set the parameter to ’narrow’ or ’medium’. If more fragments should be connected, set
the parameter to ’medium’ or ’wide’. The connection is also influenced by the maximum of CharWidth and
CharHeight. See also ConnectFragments.
ConnectFragments: Set this parameter to ’true’ if the extracted symbols are fragmented, i.e., if a symbol is
not extracted as one region but broken up into several parts. See also FragmentDistance and StopAfter in
the step ’step3_connect_fragments’.
ClutterSizeMax: If the extracted characters contain clutter, i.e., small regions near the actual symbols, increase
this value. If parts of the symbols are missing, decrease this value.
StopAfter: Use this parameter if the operator does not produce the desired results. By modifying this
value the operator stops after the execution of the selected step and provides the corresponding results. To run
all steps, set StopAfter to ’completion’.
Parameter
Result
If the input parameters are set correctly, the operator select_characters returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
select_characters is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
segment_characters, text_line_slant
Possible Successors
do_ocr_single, do_ocr_multi
Alternatives
connection
Module
Foundation
With the calculated angle OrientationAngle and operators like affine_trans_image, the region
Region of the image Image can be rotated such that the text lines lie horizontally in the image. This may
simplify the character segmentation for OCR applications.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Area of text lines.
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. CharHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) Hlong
Height of the text lines.
Default Value : 25
Typical range of values : 1 ≤ CharHeight
Restriction : CharHeight ≥ 1
. OrientationFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Minimum rotation of the text lines.
Default Value : -0.523599
Typical range of values : -1.570796 ≤ OrientationFrom ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationFrom) ∧ (OrientationFrom ≤ OrientationTo)
. OrientationTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; (Htuple .) double
Maximum rotation of the text lines.
Default Value : 0.523599
Typical range of values : -1.570796 ≤ OrientationTo ≤ 1.570796
Restriction : ((−pi/2) ≤ OrientationTo) ∧ (OrientationTo ≤ (pi/2))
. OrientationAngle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; (Htuple .) double *
Calculated rotation angle of the text lines.
Example (Syntax: HDevelop)
read_image(Image,’letters’)
text_line_orientation(Image,Image,50,rad(-80),rad(80),OrientationAngle)
rotate_image(Image,ImageRotate,-OrientationAngle/rad(180)*180,’constant’)
Result
If the input parameters are set correctly, the operator text_line_orientation returns the value
H_MSG_TRUE. Otherwise an exception will be raised.
Parallelization Information
text_line_orientation is reentrant and automatically parallelized (on tuple level).
Possible Successors
rotate_image, affine_trans_image, affine_trans_image_size
Module
Foundation
of the orientation angle are stored in a tuple, the position of a value in the tuple corresponding to the position of
the region in the input tuple.
CharHeight specifies the approximate height of the text lines in the region Region. It is assumed that
the text lines are darker than the background.
The search area can be restricted by the parameters SlantFrom and SlantTo, which also influences the runtime
of the operator.
With the calculated slant angle SlantAngle and operators for affine transformations, the slant can be removed
from the characters. This may simplify the character separation for OCR applications. To work correctly, all
characters of a region should have nearly the same slant.
Parameter
hom_mat2d_identity(HomMat2DIdentity)
read_image(Image,’dot_print_slanted’)
/* correct slant */
text_line_slant(Image,Image,50,rad(-45),rad(45),SlantAngle)
hom_mat2d_slant(HomMat2DIdentity,-SlantAngle,’x’,0,0,HomMat2DSlant)
affine_trans_image(Image,Image,HomMat2DSlant,’constant’,’true’)
Result
If the input parameters are set correctly, the operator text_line_slant returns the value H_MSG_TRUE.
Otherwise an exception will be raised.
Parallelization Information
text_line_slant is reentrant and automatically parallelized (on tuple level).
Possible Successors
hom_mat2d_slant, affine_trans_image, affine_trans_image_size
Module
Foundation
10.6 Training-Files
char name[128];
char class[128];
strcpy(name,"trainfile");
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
for (i=1; i<=num; i++) {
select_obj(Character,&SingleCharacter,i);
clear_window(WindowHandle);
disp_region(SingleCharacter,WindowHandle);
printf("class of character %d ?\n",i);
scanf("%s",class);
append_ocr_trainf(SingleCharacter,Image,class,name);
}
Result
If the parameters are correct, the operator append_ocr_trainf returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
append_ocr_trainf is processed completely exclusively without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Alternatives
write_ocr_trainf, write_ocr_trainf_image
Module
OCR/OCV
Parameter
char name[128];
Htuple Class,Name;
read_image(&Image,"character.tiff");
bin_threshold(Image,&Dark);
connection(Dark,&Character);
count_obj(Character,&num);
create_tuple(&Class,num);
open_window(0,0,-1,-1,0,"","",&WindowHandle);
set_color(WindowHandle,"red");
for (i=1; i<=num; i++) {
select_obj(Character,&SingleCharacter,i);
clear_window(WindowHandle);
disp_region(SingleCharacter,WindowHandle);
printf("class of character %d ?\n",i);
scanf("%s",name);
set_s(Class,name,i-1);
}
create_tuple(&Name,1);
set_s(Name,"trainfile",0);
T_write_ocr_trainf(Character,Image,Class,Name);
Result
If the parameters are correct, the operator write_ocr_trainf returns the value H_MSG_TRUE. Otherwise
an exception will be raised.
Parallelization Information
write_ocr_trainf is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, create_ocr_class_box, read_ocr
Possible Successors
trainf_ocr_class_box, info_ocr_class_box, write_ocr, do_ocr_multi,
do_ocr_single
Module
OCR/OCV
The operator write_ocr_trainf_image is used to prepare the training with the operator
trainf_ocr_class_box. Here, regions representing characters, including their gray values (region and
pixel) and the corresponding class names, are written to a file. An arbitrary number of regions within one
image is supported. For each character (region) in Character the corresponding class name must be specified
in Class. If no file extension is specified in FileName, the extension ’.trf’ is appended to the file name. In
contrast to write_ocr_trainf, one image per character is passed. The domain of this image defines the pixels
which belong to the character. The file format can be defined by the parameter ’ocr_trainf_version’ of the operator
set_system.
Parameter
Object
11.1 Information
count_obj ( const Hobject Objects, Hlong *Number )
T_count_obj ( const Hobject Objects, Htuple *Number )
The operator get_channel_info gives information about the components of an image object. The following
requests (Request) are currently possible:
’creator’ Output of the names of the procedures which initially created the image components (not the object).
’type’ Output of the type of image component (’byte’, ’int1’, ’int2’, ’uint2’, ’int4’, ’real’, ’direction’, ’cyclic’,
’complex’, ’vector_field’). The component 0 is of type ’region’ or ’xld’.
In the tuple Channel, the numbers of the components about which information is required are stated. After
carrying out get_channel_info, Information contains a tuple of strings (one string per entry in Channel)
with the required information.
Parameter
’image’ Object with region (definition domain) and at least one channel.
’region’ Object with a region without gray values.
’xld_cont’ XLD object as contour
’xld_poly’ XLD object as polygon
’xld_parallel’ XLD object with parallel polygons
Parameter
Module
Foundation
gen_circle(&Circle,100.0,100.0,100.0);
test_obj_def(Circle,&IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE): %d\n",IsDefined);
clear_obj(Circle);
test_obj_def(Circle,&IsDefined);
printf("Result for test_obj_def (H_MSG_FALSE): %d\n",IsDefined);
gen_rectangle1(&Rectangle,200.0,200.0,300.0,300.0);
test_obj_def(Circle,&IsDefined);
printf("Result for test_obj_def (H_MSG_TRUE!!!): %d\n",IsDefined);
Complexity
The runtime complexity is O(1).
Result
The operator test_obj_def returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>).
Parallelization Information
test_obj_def is reentrant and processed without parallelization.
Possible Predecessors
clear_obj, gen_circle, gen_rectangle1
See also
set_check, clear_obj, reset_obj_db
Module
Foundation
11.2 Manipulation
concat_obj can be used to concatenate objects of different image object types (e.g., images and XLD contours)
into a single object. This is only recommended if it is necessary to accumulate in a single object variable, for
example, the results of an image processing sequence. It should be noted that the only operators that can handle
such object tuples of mixed type are concat_obj, copy_obj, select_obj, and disp_obj. For technical
reasons, object tuples of mixed type must not be created in HDevelop.
Parameter
. Objects1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Object tuple 1.
. Objects2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject
Object tuple 2.
. ObjectsConcat (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object-array ; Hobject *
Concatenated objects.
Example
gen_circle(&Circle,200.0,400.0,23.0);
gen_rectangle1(&Rectangle,23.0,44.0,203.0,201.0);
concat_obj(Circle,Rectangle,&CircleAndRectangle);
clear_obj(Circle); clear_obj(Rectangle);
disp_region(CircleAndRectangle,WindowHandle);
Complexity
Runtime complexity: O(|Objects1| + |Objects2|);
Memory complexity of the result objects: O(|Objects1| + |Objects2|)
Result
concat_obj returns H_MSG_TRUE if all objects are contained in the HALCON database. If the input is empty
the behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception
is raised.
Parallelization Information
concat_obj is reentrant and processed without parallelization.
See also
count_obj, copy_obj, select_obj, disp_obj
Module
Foundation
count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
copy_obj(Regions,&Single,i,1);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}
Complexity
Runtime complexity: O(|Objects| + NumObj);
Memory complexity of the result object: O(NumObj)
Result
copy_obj returns H_MSG_TRUE if all objects are contained in the HALCON database and all
parameters are correct. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
copy_obj is reentrant and processed without parallelization.
Possible Predecessors
count_obj
Alternatives
select_obj
See also
count_obj, concat_obj, obj_to_integer, copy_image
Module
Foundation
Parameter
. EmptyObject (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object ; Hobject *
No objects.
Parallelization Information
gen_empty_obj is reentrant and processed without parallelization.
Module
Foundation
Parameter
Complexity
Runtime complexity: O(|Objects| + Number)
Result
obj_to_integer returns H_MSG_TRUE if all parameters are correct. If the input is empty the behavior can
be set via set_system(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
obj_to_integer is reentrant and processed without parallelization.
Possible Predecessors
test_obj_def
Alternatives
copy_obj, select_obj, copy_image, gen_image_proto
See also
integer_to_obj, count_obj
Module
Foundation
count_obj(Regions,&Num);
for (i=1; i<=Num; i++)
{
select_obj(Regions,&Single,i);
T_get_region_polygon(Single,5.0,&Row,&Column);
T_disp_polygon(WindowHandleTuple,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
clear_obj(Single);
}
Complexity
Runtime complexity: O(|Objects|)
Result
select_obj returns H_MSG_TRUE if all objects are contained in the HALCON database and
all parameters are correct. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Parallelization Information
select_obj is reentrant and processed without parallelization.
Possible Predecessors
count_obj
Alternatives
copy_obj
See also
count_obj, concat_obj, obj_to_integer
Module
Foundation
Regions
12.1 Access
T_get_region_chain ( const Hobject Region, Htuple *Row,
Htuple *Column, Htuple *Chain )
3 2 1
4 ∗ 0
5 6 7
The operator get_region_chain returns the code in the form of a tuple. In case of an empty region the
parameters Row and Column are zero and Chain is the empty tuple.
Attention
Holes of the region are ignored. Only one region may be passed, and it must have exactly one connected
component.
Parameter
Possible Predecessors
sobel_amp, threshold, skeleton, edges_image, gen_rectangle1, gen_circle
Possible Successors
approx_chain, approx_chain_simple
See also
copy_obj, get_region_contour, get_region_polygon
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Output region.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . Hlong *
Line numbers of contour pixels.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . Hlong *
Column numbers of the contour pixels.
Number of elements : Columns = Rows
Result
The operator get_region_convex returns the value H_MSG_TRUE.
Parallelization Information
get_region_convex is reentrant and processed without parallelization.
Possible Predecessors
threshold, skeleton, dyn_threshold
Possible Successors
disp_polygon
Alternatives
shape_trans
See also
select_obj, get_region_contour
Module
Foundation
get_region_points returns the coordinates in the form of tuples. In case of an empty region, empty tuples are returned.
Attention
Only one region may be passed.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
This region is accessed.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; Htuple . Hlong *
Line numbers of the pixels in the region
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; Htuple . Hlong *
Column numbers of the pixels in the region.
Number of elements : Columns = Rows
Result
The operator get_region_points normally returns the value H_MSG_TRUE. If more than one connected
component is passed, an exception is raised. The behavior in case of empty input (no input regions
available) is set via the operator set_system(’no_object_result’,<Result>).
Parallelization Information
get_region_points is reentrant and processed without parallelization.
Possible Predecessors
sobel_amp, threshold, connection
Alternatives
get_region_runs
See also
copy_obj, gen_region_points
Module
Foundation
The operator get_region_runs returns the region data in the form of chord tuples. The chord representation
is obtained by examining a region line by line with ascending line number (= from “top” to “bottom”). Every line is
traversed from left to right (ascending column number), storing the starting and ending points of all region segments (=
chords). Thus a region can be described by a sequence of chords, a chord being defined by its line number and its starting
and ending points (column numbers). The operator get_region_runs returns the three components of the
chords in the form of tuples. In case of an empty region, three empty tuples are returned.
Attention
Only one region may be passed.
Parameter
12.2 Creation
gen_checker_region ( Hobject *RegionChecker, Hlong WidthRegion,
Hlong HeightRegion, Hlong WidthPattern, Hlong HeightPattern )
Parameter
gen_checker_region(&Checker,512,512,32,64);
set_draw(WindowHandle,"fill");
set_part(WindowHandle,0,0,511,511);
disp_region(Checker,WindowHandle);
Complexity
The required storage (in bytes) for the region is:
O((WidthRegion ∗ HeightRegion)/WidthPattern)
Result
The operator gen_checker_region returns the value H_MSG_TRUE if the parameter values are correct.
Otherwise an exception is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_checker_region is reentrant and processed without parallelization.
Possible Successors
paint_region
Alternatives
gen_grid_region, gen_region_polygon_filled, gen_region_points,
Create a circle.
The operator gen_circle generates one or more circles described by the center (Row, Column) and Radius. If several circles
are to be generated, the coordinates must be passed in the form of tuples.
gen_circle only creates symmetric circles. To achieve this, the radius is rounded internally to a multiple of 0.5.
If an integer number is specified for the radius (i.e., 1, 2, 3, ...) an even diameter is obtained, and hence the circle
can only be symmetric with respect to a center with coordinates that have a fractional part of 0.5. Consequently,
internally the coordinates of the center are adapted to the closest coordinates that have a fractional part of 0.5. Here,
integer coordinates are rounded down to the next smaller values with a fractional part of 0.5. For odd diameters
(i.e., radius = 1.5, 2.5, 3.5, ...), the circle can only be symmetric with respect to a center with integer coordinates.
Hence, internally the coordinates of the center are rounded to the nearest integer coordinates. It should be noted
that the above algorithm may lead to the fact that circles with an even diameter are not contained in circles with
the next larger odd diameter, even if the coordinates specified in Row and Column are identical.
If the circle extends beyond the image edge it is clipped to the current image format if the value of the system flag
’clip_region’ is set to ’true’ ( set_system).
Parameter
. Circle (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Generated circle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; (Htuple .) double / Hlong
Line index of center.
Default Value : 200.0
Suggested values : Row ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Row ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; (Htuple .) double / Hlong
Column index of center.
Default Value : 200.0
Suggested values : Column ∈ {0.0, 10.0, 50.0, 100.0, 200.0, 300.0}
Typical range of values : 1.0 ≤ Column ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; (Htuple .) double / Hlong
Radius of circle.
Default Value : 100.5
Suggested values : Radius ∈ {1.0, 1.5, 2.0, 2.5, 3, 3.5, 4, 4.5, 5.5, 6.5, 7.5, 9.5, 11.5, 15.5, 20.5, 25.5, 31.5,
50.5}
Typical range of values : 1.0 ≤ Radius ≤ 1024.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Radius > 0.0
Example
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
read_image(&Image,"meer");
gen_circle(&Circle,300.0,200.0,150.5);
reduce_domain(Image,Circle,&Mask);
disp_color(Mask,WindowHandle);
Complexity
Runtime complexity: O(Radius ∗ 2)
Storage complexity (byte): O(Radius ∗ 8)
Result
If the parameter values are correct, the operator gen_circle returns the value H_MSG_TRUE. Otherwise an exception handling is raised. The clipping according to the current image format is set via the operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created by clipping (the circle is completely outside of the image format), the operator set_system(’store_empty_region’,<’true’/’false’>) determines whether the empty region is returned.
Parallelization Information
gen_circle is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_ellipse, gen_region_polygon_filled, gen_region_points, gen_region_runs,
draw_circle
See also
disp_circle, set_shape, smallest_circle, reduce_domain
Module
Foundation
Create an ellipse.
The operator gen_ellipse generates one or more ellipses with the center (Row, Column), the orientation Phi, and the half-radii Radius1 and Radius2. The angle is given in radians, measured from the x axis in the mathematically positive (counterclockwise) direction. More than one region can be created by passing tuples of parameter values.
The center must be located within the image coordinates. The coordinate system runs from (0,0) (upper left corner)
to (Width-1,Height-1). See get_system and reset_obj_db in this context. If the ellipse reaches beyond the
edge of the image it is clipped to the current image format according to the value of the system flag ’clip_region’ (
set_system).
Parameter
Example
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
set_insert(WindowHandle,"xor");
do {
get_mbutton(WindowHandle,&Row,&Column,&Button);
gen_ellipse(&Ellipse,(double)Row,(double)Column,Column / 300.0,
(Row % 100)+1.0,(Column % 50) + 1.0);
disp_region(Ellipse,WindowHandle);
clear_obj(Ellipse);
} while(Button != 1);
Complexity
Runtime complexity: O(Radius1 ∗ 2)
Storage complexity (byte): O(Radius1 ∗ 8)
Result
If the parameter values are correct, the operator gen_ellipse returns the value H_MSG_TRUE. Otherwise
an exception handling is raised. The clipping according to the current image format is set via the operator
set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_ellipse is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_circle, gen_region_polygon_filled, draw_ellipse
See also
disp_ellipse, set_shape, smallest_circle, reduce_domain
Module
Foundation
read_image(&Image,"fabrik");
gen_grid_region(&Raster,10,10,"lines",512,512);
reduce_domain(Image,Raster,&Mask);
sobel_amp(Mask,&GridSobel,"sum_abs",3);
disp_image(GridSobel,WindowHandle);
Complexity
The necessary storage (in bytes) for the region is:
O((ImageWidth/ColumnSteps) ∗ (ImageHeight/RowSteps))
Result
If the parameter values are correct the operator gen_grid_region returns the value H_MSG_TRUE. Otherwise an exception handling is raised. The clipping according to the current image format is set via the operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_grid_region is reentrant and processed without parallelization.
Possible Successors
reduce_domain, paint_region
Alternatives
gen_region_line, gen_region_polygon, gen_region_points, gen_region_runs
See also
gen_checker_region, reduce_domain
Module
Foundation
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,"root","visible","",&WindowHandle);
disp_image(Image,WindowHandle);
draw_rectangle1(WindowHandle,&Row1,&Column1,&Row2,&Column2);
gen_rectangle1(&Rect,(double)Row1,(double)Column1,
(double)Row2,(double)Column2);
reduce_domain(Image,Rect,&Mask);
emphasize(Mask,&Emphasize,9,9,1.0);
disp_image(Emphasize,WindowHandle);
Result
If the parameter values are correct, the operator gen_rectangle1 returns the value H_MSG_TRUE. Otherwise an exception handling is raised. The clipping according to the current image format is set via the operator set_system(’clip_region’,<’true’/’false’>).
Parallelization Information
gen_rectangle1 is reentrant and processed without parallelization.
Possible Successors
paint_region, reduce_domain
Alternatives
gen_rectangle2, gen_region_polygon, fill_up, gen_region_runs,
gen_region_points, gen_region_line
See also
draw_rectangle1, reduce_domain, smallest_rectangle1
Module
Foundation
(0.5,0.5), (0.5,1.5), (1.5,1.5), (1.5,0.5), and (0.5,0.5). Consequently, when passing this contour again to
gen_region_contour_xld, the resulting region consists of the points (1,1), (1,2), (2,2), and (2,1).
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Created region.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Fill mode of the region.
Default Value : "filled"
Suggested values : Mode ∈ {"filled", "margin"}
Parallelization Information
gen_region_contour_xld is reentrant and processed without parallelization.
Possible Predecessors
gen_contour_polygon_xld, gen_contour_polygon_rounded_xld
Alternatives
gen_region_polygon, gen_region_polygon_xld
See also
set_system
Module
Foundation
Parallelization Information
gen_region_histo is reentrant and processed without parallelization.
Possible Predecessors
gray_histo
See also
disp_channel, set_paint
Module
Foundation
The indicated coordinates stand for two consecutive pixels in the tuple.
Parameter
/* Polygon-approximation */
T_get_region_polygon(Region,7,&Row,&Column);
/* store it as a region */
T_gen_region_polygon(&Pol,Row,Column);
destroy_tuple(Row);
destroy_tuple(Column);
/* fill up the hole */
fill_up(Pol,&Filled);
Result
If the base points are correct the operator gen_region_polygon returns the value H_MSG_TRUE. Otherwise an exception handling is raised. The clipping according to the current image format is set via the operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the clipping or by an empty input) the operator set_system(’store_empty_region’,<’true’/’false’>) determines whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_polygon is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon_filled, gen_region_points, gen_region_runs
See also
fill_up, reduce_domain, get_region_polygon, draw_polygon
Module
Foundation
/* Polygon approximation */
T_get_region_polygon(Region,7,&Row,&Column);
T_gen_region_polygon_filled(&Pol,Row,Column);
/* fill up with original gray value */
reduce_domain(Image,Pol,&New);
Result
If the base points are correct the operator gen_region_polygon_filled returns the value H_MSG_TRUE.
Otherwise an exception handling is raised. The clipping according to the current image format is set via the
operator set_system(’clip_region’,<’true’/’false’>). If an empty region is created (by the
clipping or by an empty input) the operator set_system(’store_empty_region’,<true/false>)
determines whether the region is returned or an empty object tuple.
Parallelization Information
gen_region_polygon_filled is reentrant and processed without parallelization.
Possible Predecessors
get_region_polygon, draw_polygon
Alternatives
gen_region_polygon, gen_region_points, draw_polygon
See also
gen_region_polygon, reduce_domain, get_region_polygon, gen_region_runs
Module
Foundation
Parameter
The number of output regions is limited by the system parameter ’max_outp_obj_par’, which can be read via get_system(::’max_outp_obj_par’:<Number>).
Attention
label_to_region is not implemented for images of type ’real’. The input images must not contain negative
gray values.
Parameter
12.3 Features
area_center ( const Hobject Regions, Hlong *Area, double *Row,
double *Column )
T_area_center ( const Hobject Regions, Htuple *Area, Htuple *Row,
Htuple *Column )
Example
threshold(Image,&Seg,120.0,255.0);
connection(Seg,&Connected);
T_area_center(Connected,&Area,&Row,&Column);
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator area_center returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
area_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape
Module
Foundation
Calculation: If F is the area of the region and max is the maximum distance from the center to all contour pixels,
the shape factor C is defined as:
C = F / (max² ∗ π)
The shape factor C of a circle is 1. If the region is long or has holes C is smaller than 1. The operator
circularity especially responds to large bulges, holes and unconnected regions.
In case of an empty region the operator circularity returns the value 0 (if no other behavior was set (see
set_system)). If more than one region is passed the numerical values of the shape factor are stored in a tuple,
the position of a value in the tuple corresponding to the position of the region in the input tuple.
Parameter
Example
Result
The operator circularity returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
circularity is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
roundness, compactness, convexity, eccentricity
See also
area_center, select_shape
Module
Foundation
Calculation: If L is the length of the contour (see contlength) and F the area of the region the shape factor
C is defined as:
C = L² / (4 ∗ F ∗ π)
The shape factor C of a circle is 1. If the region is long or has holes C is larger than 1. The operator
compactness responds to the course of the contour (roughness) and to holes. In case of an empty region
the operator compactness returns the value 0 if no other behavior was set (see set_system). If more than
one region is passed the numerical values of the shape factor are stored in a tuple, the position of a value in the
tuple corresponding to the position of the region in the input tuple.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Compactness (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Compactness of the input region(s).
Assertion : (Compactness ≥ 1.0) ∨ (Compactness = 0)
Result
The operator compactness returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
compactness is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
circularity, convexity, eccentricity
See also
contlength, area_center, select_shape
Module
Foundation
#include "HIOStream.h"
#if !defined(USE_IOSTREAM_H)
using namespace std;
#endif
#include "HalconCpp.h"

int main ()
{
  HWindow w;
  HRegionArray reg;
  /* ... (interactive drawing of the regions omitted in this excerpt) ... */
  cout << "Draw " << NumOfElements << " regions " << endl;
  w.Click ();
  return(0);
}
Result
The operator contlength returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
contlength is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
get_region_contour
Alternatives
compactness
See also
area_center, get_region_contour
Module
Foundation
Calculation: If Fc is the area of the convex hull and Fo the original area of the region the shape factor C is defined
as:
C = Fo / Fc
The shape factor C is 1 if the region is convex (e.g., rectangle or circle). If there are indentations or holes, C is smaller than 1.
In case of an empty region the operator convexity returns the value 0 (if no other behavior was set (see set_system)). If more than one region is passed the numerical values of the shape factor are stored in a tuple, the position of a value in the tuple corresponding to the position of the region in the input tuple.
Parameter
Possible Predecessors
threshold, regiongrowing, connection
See also
select_shape, area_center, shape_trans
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as math-
ematical, infinitely small points that are represented by the center of the pixels (see the documentation of
elliptic_axis). This can lead to non-empty regions that have Rb = 0. In these cases, the output features
that require a division by Rb are set to 0. In particular, regions that contain a single point or regions whose points
lie exactly on a straight line (e.g., one pixel high horizontal regions or one pixel wide vertical regions) have an
anisometry of 0.
Parameter
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system
(’no_object_result’,<Result>)).
Attention
It should be noted that, like for all region-moments-based operators, the region’s pixels are regarded as mathemat-
ical, infinitely small points that are represented by the center of the pixels. This means that Ra and Rb can assume
the value 0. In particular, for an empty region and a region containing a single point Ra = Rb = 0 is returned.
Furthermore, for regions whose points lie exactly on a straight line (e.g., one pixel high horizontal regions or one
pixel wide vertical regions), Rb = 0 is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Ra (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Main radius (normalized to the area).
Assertion : Ra ≥ 0.0
. Rb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Secondary radius (normalized to the area).
Assertion : (Rb ≥ 0.0) ∧ (Rb ≤ Ra)
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Angle between main radius and x axis (arc measure).
Assertion : ((−pi/2) < Phi) ∧ (Phi ≤ (pi/2))
Example
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
T_elliptic_axis(Seg,&Ra,&Rb,&Phi);
T_area_center(Seg,&Area,&Row,&Column);
T_gen_ellipse(&Ellipses,Row,Column,Phi,Ra,Rb);
set_draw(WindowHandle,"margin");
disp_region(Ellipses,WindowHandle);
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
The operator elliptic_axis returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
elliptic_axis is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
gen_ellipse
Alternatives
smallest_rectangle2, orientation_region
See also
moments_region_2nd, select_shape, set_shape
References
R. Haralick, L. Shapiro: “Computer and Robot Vision”, Addison-Wesley, 1992, pp. 73-75.
Module
Foundation
• Regions1 is empty:
In this case all regions in Regions2 are permutatively checked for neighborhood.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
Here all regions at the n-th position in Regions1 and Regions2 are checked for the neighboring relation.
The operator find_neighbors uses the chessboard distance between neighboring regions. It can be specified
by the parameter MaxDistance. Neighboring regions are located at the n-th position in RegionIndex1 and
RegionIndex2, i.e., the region with index RegionIndex1[n] from Regions1 is the neighbor of the region
with index RegionIndex2[n] from Regions2.
Attention
Covered regions are not found!
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Maximal distance of regions.
Default Value : 1
Suggested values : MaxDistance ∈ {1, 2, 3, 4, 5, 6, 7, 8, 10, 15, 20, 50}
Typical range of values : 1 ≤ MaxDistance ≤ 255
Minimum Increment : 1
Recommended Increment : 1
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the found regions from Regions1.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the found regions from Regions2.
Result
The operator find_neighbors returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
find_neighbors is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
See also
spatial_relation, select_region_spatial, expand_region, distance_transform,
interjacent, boundary
Module
Foundation
The returned indices can be used, e.g., in select_obj to select the regions containing the test pixel.
Attention
If the regions overlap more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel the empty tuple (= no region) is returned.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; (Htuple .) Hlong
Line index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Row ≤ ∞ (lin)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; (Htuple .) Hlong
Column index of the test pixel.
Default Value : 100
Typical range of values : −∞ ≤ Column ≤ ∞ (lin)
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Index of the regions containing the test pixel.
Complexity
If F is the area of the region and N is the number of regions the mean runtime complexity is O(ln(√F) ∗ N).
Result
The operator get_region_index returns the value H_MSG_TRUE if the parameters are correct. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
get_region_index is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
select_region_point
See also
get_mbutton, get_mposition, test_region_point
Module
Foundation
the furthest apart. Additionally the operator get_region_thickness returns the histogram of the thicknesses of the region. The length of the histogram corresponds to the largest occurring thickness in the observed region.
Attention
Only one region may be passed. If the region has several connection components, only the first one is investigated.
All other components are ignored.
Parameter
The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:
Similarity = 1 − Distance / (|Regions1| + |Regions2|)
If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Hamming distance of two regions.
Assertion : Distance ≥ 0
. Similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
hamming_distance returns the value H_MSG_TRUE if the number of objects in both parameters is the same and is not 0. The behavior in case of empty input (no input objects available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
hamming_distance is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
intersection, complement, area_center
See also
hamming_change_region
Module
Foundation
The parameter Similarity describes the similarity between the two regions based on the hamming distance
Distance:
Similarity = 1 − Distance / (|Norm(Regions1)| + |Regions2|)
If both regions are empty Similarity is set to 0. The regions with the same index from both input parameters
are always compared.
Attention
In both input parameters the same number of regions must be passed.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Norm (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Type of normalization.
Default Value : "center"
List of values : Norm ∈ {"center"}
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Hamming distance of two regions.
Assertion : Distance ≥ 0
. Similarity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Similarity of two regions.
Assertion : (0 ≤ Similarity) ∧ (Similarity ≤ 1)
Complexity
If F is the area of a region the mean runtime complexity is O(√F).
Result
hamming_distance_norm returns the value H_MSG_TRUE if the number of objects in both parameters is the same
and is not 0. The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
hamming_distance_norm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
intersection, complement, area_center
See also
hamming_change_region
Module
Foundation
Attention
If a region has several inner circles, only the upper leftmost solution is returned.
Parameter
Example
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_inner_circle(H,&Row,&Column,&Radius);
T_gen_circle(&Circles,Row,Column,Radius);
set_draw(WindowHandle,"margin");
disp_region(Circles,WindowHandle);
Complexity
If F is the area of the region and R is the radius of the inner circle the runtime complexity is O(√F ∗ R).
Result
The operator inner_circle returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>), the behavior in case of empty region is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
inner_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
erosion_circle, inner_rectangle1
See also
set_shape, select_shape, smallest_circle
Module
Foundation
If more than one region is passed in Regions the results are stored in tuples, the index of a value in the tuple
corresponding to the index of the input region. For empty regions all parameters have the value 0 if no other
behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be examined.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong *
Row coordinate of the upper left corner point.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong *
Column coordinate of the upper left corner point.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong *
Row coordinate of the lower right corner point.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong *
Column coordinate of the lower right corner point.
Result
The operator inner_rectangle1 returns the value H_MSG_TRUE if the input is not empty. The behavior in case of empty input (no input regions available) is set via the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
inner_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
disp_rectangle1, gen_rectangle1
Alternatives
inner_circle
See also
smallest_rectangle1, select_shape
Module
Foundation
Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F. Then the moments Mij
are defined by:

Mij = sum of (Z0 − Z)^i * (S0 − S)^j over all (Z,S) in R

With h = (M20 + M02) / 2, the two main axes of inertia are

Ia = h + sqrt(h^2 − M20 * M02 + M11^2)
Ib = h − sqrt(h^2 − M20 * M02 + M11^2)
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M11 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Product of inertia of the axes through the center parallel to the coordinate axes.
. M20 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (line-dependent).
. M02 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order (column-dependent).
. Ia (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
The one main axis of inertia.
. Ib (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
The other main axis of inertia.
Complexity
If F is the area of the region the mean runtime complexity is O(sqrt(F)).
Result
The operator moments_region_2nd returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (region is the empty set) is set
via set_system(’empty_region_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
moments_region_2nd is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd_invar
See also
elliptic_axis
Module
Foundation
Calculation: Z0 and S0 are the coordinates of the center of a region R with the area F. Then the moments Mij
are defined by:

Mij = (1 / F^2) * sum of (Z0 − Z)^i * (S0 − S)^j over all (Z,S) in R
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. PHI1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. PHI2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
Result
The operator moments_region_2nd_rel_invar returns the value H_MSG_TRUE if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_2nd_rel_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
Calculation: x and y are the coordinates of the center of a region R with the area Z. Then the moments Mpq are
defined by:

Mpq = sum for i = 1 to Z of (xi − x)^p * (yi − y)^q

wherein x = m10 / m00 and y = m01 / m00.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. M21 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (line-dependent).
. M12 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).
. M03 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order (column-dependent).
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
Complexity
If Z is the area of the region the mean runtime complexity is O(sqrt(Z)).
Result
The operator moments_region_3rd_invar returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_3rd_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. I1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 2nd order.
. I4 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Moment of 3rd order.
Complexity
If Z is the area of the region the mean runtime complexity is O(sqrt(Z)).
Result
The operator moments_region_central returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_central is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
Result
The operator moments_region_central_invar returns the value H_MSG_TRUE if the input is not
empty. The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
moments_region_central_invar is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
moments_region_2nd
See also
elliptic_axis
Module
Foundation
Orientation of a region.
The operator orientation_region calculates the orientation of the region. The operator is based on
elliptic_axis. In addition the point on the contour with maximal distance to the center of gravity is cal-
culated. If the column coordinate of this point is less than the column coordinate of the center of gravity the value
of π is added to the angle.
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region(s) to be examined.
. Phi (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Orientation of region (arc measure).
Assertion : (−pi ≤ Phi) ∧ (Phi < pi)
Complexity
If F is the area of a region the mean runtime complexity is O(sqrt(F)).
Result
The operator orientation_region returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
orientation_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Possible Successors
disp_arrow
Alternatives
elliptic_axis, smallest_rectangle2
See also
moments_region_2nd, line_orientation
Module
Foundation
If more than one region is passed the results are stored in tuples, the index of a value in the tuple corresponding to
the index of a region in the input.
In case of empty region all parameters have the value 0.0 if no other behavior was set (see set_system).
Parameter
Alternatives
compactness
See also
contlength
References
R. Haralick, L. Shapiro: “Computer and Robot Vision”; Addison-Wesley, 1992; p. 61
Module
Foundation
The operator runlength_features calculates for every input region from Regions the number of runs
necessary for storing this region with the aid of runlength coding. Furthermore the so-called “K-factor” is deter-
mined, which indicates by how much the number of runs deviates from the ideal of the square, for which this value
is 1.0.
The K-factor (KFactor) is calculated according to the formula:

KFactor = NumRuns / sqrt(Area)
wherein Area indicates the area of the region. It should be noted that the K-factor can be smaller than 1.0 (in case
of long horizontal regions).
The L-factor (LFactor) indicates the mean number of runs for each line index occurring in the region.
MeanLength indicates the mean length of the runs. The parameter Bytes indicates how many bytes are neces-
sary for coding the region with runlengths.
Attention
None of the features calculated by the operator runlength_features is rotation invariant, because the
runlength coding depends on the direction. The operator runlength_features does not serve for calculating
shape features but for controlling and analysing the efficiency of the runlength coding.
Parameter
Attention
If the regions overlap, more than one region might contain the pixel. In this case all these regions are returned. If
no region contains the indicated pixel, the empty tuple (= no region) is returned.
Parameter
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image,WindowHandle);
regiongrowing(Image,&Seg,3,3,5.0,0);
set_color(WindowHandle,"red");
set_draw(WindowHandle,"margin");
do {
  printf("Select the region with the mouse (end: right button)\n");
  get_mbutton(WindowHandle,&Row,&Column,&Button);
  select_region_point(Seg,&Single,Row,Column);
  disp_region(Single,WindowHandle);
  clear_obj(Single);
} while (Button != 4);
Complexity
If F is the area of the region and N is the number of regions, the mean runtime complexity is O(ln(sqrt(F)) * N).
Result
The operator select_region_point returns the value H_MSG_TRUE if the parameters are correct.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). If necessary an exception handling is raised.
Parallelization Information
select_region_point is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
test_region_point
See also
get_mbutton, get_mposition
Module
Foundation
• Regions1 is empty:
In this case all regions in Regions2 are permutatively checked for neighborhood.
• Regions1 consists of one region:
The region of Regions1 is compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
The regions at the n-th position in Regions1 and Regions2 are each checked for a neighboring relation.
The operator select_region_spatial calculates the centers of the regions to be compared and decides
according to the angle between the line connecting the centers and the x axis whether the direction relation is
fulfilled. The relation is fulfilled within a range of -45 degrees to +45 degrees around the coordinate axes. Thus,
the direction relation can be understood in such a way that the center of the second region must be located left (or
right, above, below) of the center of the first region. The indices of the regions fulfilling the direction relation are located at the
n-th position in RegionIndex1 and RegionIndex2, i.e., the region with the index RegionIndex2[n] has
the indicated relation with the region with the index RegionIndex1[n]. Access to regions via the index can be
obtained via the operator copy_obj.
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions
. Direction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Desired neighboring relation.
Default Value : "left"
List of values : Direction ∈ {"left", "right", "above", "below"}
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices in the input tuples (Regions1 or Regions2), respectively.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices in the input tuples (Regions1 or Regions2), respectively.
Result
The operator select_region_spatial returns the value H_MSG_TRUE if Regions2 is not empty. The
behavior in case of empty parameter Regions2 (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
select_region_spatial is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
area_center, intersection
See also
spatial_relation, find_neighbors, copy_obj, obj_to_integer
Module
Foundation
’dist_mean’: Mean distance from the region border to the center (see operator roundness)
’dist_deviation’: Deviation of the distance from the region border to the center (see operator roundness)
’roundness’: Roundness (see operator roundness)
’num_sides’: Number of polygon sides (see operator roundness)
’connect_num’: Number of connection components (see operator connect_and_holes)
’holes_num’: Number of holes (see operator connect_and_holes)
’max_diameter’: Maximum diameter of the region (see operator diameter_region)
’orientation’: Orientation of the region (see operator orientation_region)
’euler_number’: Euler number (see operator euler_number)
’rect2_phi’: Orientation of the smallest surrounding rectangle (see operator smallest_rectangle2)
’rect2_len1’: Half the length of the smallest surrounding rectangle (see operator smallest_rectangle2)
’rect2_len2’: Half the width of the smallest surrounding rectangle (see operator smallest_rectangle2)
’moments_m11’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m20’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m02’: Geometric moments of the region (see operator moments_region_2nd)
’moments_ia’: Geometric moments of the region (see operator moments_region_2nd)
’moments_ib’: Geometric moments of the region (see operator moments_region_2nd)
’moments_m11_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_m20_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_m02_invar’: Geometric moments of the region (see operator moments_region_2nd_invar)
’moments_phi1’: Geometric moments of the region (see operator moments_region_2nd_rel_invar)
’moments_phi2’: Geometric moments of the region (see operator moments_region_2nd_rel_invar)
’moments_m21’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m12’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m03’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m30’: Geometric moments of the region (see operator moments_region_3rd)
’moments_m21_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m12_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m03_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_m30_invar’: Geometric moments of the region (see operator moments_region_3rd_invar)
’moments_i1’: Geometric moments of the region (see operator moments_region_central)
’moments_i2’: Geometric moments of the region (see operator moments_region_central)
’moments_i3’: Geometric moments of the region (see operator moments_region_central)
’moments_i4’: Geometric moments of the region (see operator moments_region_central)
’moments_psi1’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi2’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi3’: Geometric moments of the region (see operator moments_region_central_invar)
’moments_psi4’: Geometric moments of the region (see operator moments_region_central_invar)
If only one feature (Features) is used the value of Operation is meaningless. Several features are processed
in the sequence in which they are entered.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions to be examined.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Regions fulfilling the condition.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Shape features to be checked.
Default Value : "area"
List of values : Features ∈ {"area", "row", "column", "width", "height", "row1", "column1", "row2",
"column2", "circularity", "compactness", "contlength", "convexity", "rectangularity", "ra", "rb", "phi",
"anisometry", "bulkiness", "struct_factor", "outer_radius", "inner_radius", "inner_width", "inner_height",
"max_diameter", "dist_mean", "dist_deviation", "roundness", "num_sides", "orientation", "connect_num",
"holes_num", "euler_number", "rect2_phi", "rect2_len1", "rect2_len2", "moments_m11", "moments_m20",
"moments_m02", "moments_ia", "moments_ib", "moments_m11_invar", "moments_m20_invar",
"moments_m02_invar", "moments_phi1", "moments_phi2", "moments_m21", "moments_m12",
"moments_m03", "moments_m30", "moments_m21_invar", "moments_m12_invar", "moments_m03_invar",
"moments_m30_invar", "moments_i1", "moments_i2", "moments_i3", "moments_i4", "moments_psi1",
"moments_psi2", "moments_psi3", "moments_psi4"}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Linkage type of the individual features.
Default Value : "and"
List of values : Operation ∈ {"and", "or"}
. Min (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double / Hlong / const char *
Lower limits of the features or ’min’.
Default Value : 150.0
Typical range of values : 0.0 ≤ Min ≤ 99999.0
Minimum Increment : 0.001
Recommended Increment : 1.0
. Max (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double / Hlong / const char *
Upper limits of the features or ’max’.
Default Value : 99999.0
Typical range of values : 0.0 ≤ Max ≤ 99999.0
Minimum Increment : 0.001
Recommended Increment : 1.0
Restriction : Max ≥ Min
Example
Result
The operator select_shape returns the value H_MSG_TRUE if the input is not empty. The be-
havior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
select_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
select_shape, select_gray, shape_trans, reduce_domain, count_obj
Alternatives
select_shape_std
See also
area_center, circularity, compactness, contlength, convexity, rectangularity,
elliptic_axis, eccentricity, inner_circle, smallest_circle,
smallest_rectangle1, smallest_rectangle2, inner_rectangle1, roundness,
connect_and_holes, diameter_region, orientation_region, moments_region_2nd,
moments_region_2nd_invar, moments_region_2nd_rel_invar, moments_region_3rd,
moments_region_3rd_invar, moments_region_central,
moments_region_central_invar, select_obj
Module
Foundation
’distance_dilate’ The minimum distance in the maximum norm from the edge of Pattern to the edge of every
region from Regions is determined (see distance_rr_min_dil).
’distance_contour’ The minimum Euclidean distance from the edge of Pattern to the edge of every region
from Regions is determined. (see distance_rr_min).
’distance_center’ The Euclidean distance from the center of Pattern to the center of every region from
Regions is determined.
’covers’ It is examined how well the region Pattern fits into the regions from Regions. If there is no shift
so that Pattern is a subset of Regions the overlap is 0. If Pattern corresponds to the region after a
corresponding shift the overlap is 100. Otherwise the area of the opening of Regions with Pattern is put
into relation with the area of Regions (in percent).
’fits’ It is examined whether Pattern can be shifted in such a way that it fits in Regions. If this is possible the
corresponding region is copied from Regions. The parameters Min and Max are ignored.
’overlaps_abs’ The area of the intersection of Pattern and every region in Regions is computed.
’overlaps_rel’ The area of the intersection of Pattern and every region in Regions is computed. The relative
overlap is the ratio of the area of the intersection and the area of the respective region in Regions (in percent).
Parameter
regiongrowing(Image,&Seg,3,3,5.0,0);
gen_circle(&C,100.0,100.0,MinRadius);
select_shape_proto(Seg,C,"fits",0.0,0.0);
Result
The operator select_shape_proto returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
select_shape_proto is reentrant and processed without parallelization.
Possible Predecessors
connection, draw_region, gen_circle, gen_rectangle1, gen_rectangle2,
gen_ellipse
Possible Successors
select_gray, shape_trans, reduce_domain, count_obj
Alternatives
select_shape
See also
opening, erosion1, distance_rr_min_dil, distance_rr_min
Module
Foundation
’rectangle2’ The smallest surrounding rectangle with any orientation is determined via the operator
smallest_rectangle2. If the area difference in percent is larger than Percent the region is adopted.
Note that as a more robust alternative the operator select_shape with Feature set to ’rectangularity’
can be used instead.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions to be selected.
. SelectedRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions with desired shape.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Shape features to be checked.
Default Value : "max_area"
List of values : Shape ∈ {"max_area", "rectangle1", "rectangle2"}
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Similarity measure.
Default Value : 70.0
Suggested values : Percent ∈ {10.0, 30.0, 50.0, 60.0, 70.0, 80.0, 90.0, 95.0, 100.0}
Typical range of values : 0.0 ≤ Percent ≤ 100.0 (lin)
Minimum Increment : 0.1
Recommended Increment : 10.0
Parallelization Information
select_shape_std is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection, smallest_rectangle1, smallest_rectangle2
Alternatives
intersection, complement, area_center, select_shape
See also
smallest_rectangle1, smallest_rectangle2, rectangularity
Module
Foundation
Parameter
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
select_shape(Seg,&H,"area","and",100.0,2000.0);
T_smallest_circle(H,&Row,&Column,&Radius);
T_gen_circle(&Circles,Row,Column,Radius);
set_draw(WindowHandle,"margin");
disp_region(Circles,WindowHandle);
Complexity
If F is the area of the region, then the mean runtime complexity is O(sqrt(F)).
Result
The operator smallest_circle returns the value H_MSG_TRUE if the input is not empty. The
behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_circle is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
gen_circle, disp_circle
Alternatives
elliptic_axis, smallest_rectangle1, smallest_rectangle2
See also
set_shape, select_shape, inner_circle
Module
Foundation
If more than one region is passed in Regions, the results are stored in tuples, the index of a value in the tuple
corresponding to the index of a region in the input. In case of empty region all parameters have the value 0 if no
other behavior was set (see set_system).
Attention
In case of an empty region the results of Row1, Column1, Row2, and Column2 (all are 0) can lead to confusion.
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be examined.
. Row1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y(-array) ; (Htuple .) Hlong *
Line index of upper left corner point.
. Column1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x(-array) ; (Htuple .) Hlong *
Column index of upper left corner point.
. Row2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y(-array) ; (Htuple .) Hlong *
Line index of lower right corner point.
. Column2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.x(-array) ; (Htuple .) Hlong *
Column index of lower right corner point.
Complexity
If F is the area of the region the mean runtime complexity is O(sqrt(F)).
Result
The operator smallest_rectangle1 returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_rectangle1 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle1, gen_rectangle1
Alternatives
smallest_rectangle2, area_center
See also
select_shape
Module
Foundation
Parameter
read_image(&Image,"fabrik");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
regiongrowing(Image,&Seg,5,5,6.0,100);
smallest_rectangle2(Seg,&Row,&Column,&Phi,&Length1,&Length2);
gen_rectangle2(&Rectangle,Row,Column,Phi,Length1,Length2);
set_draw(WindowHandle,"margin");
disp_region(Rectangle,WindowHandle);
Complexity
If F is the area of the region and N is the number of supporting points of the convex hull, the runtime complexity
is O(sqrt(F) + N^2).
Result
The operator smallest_rectangle2 returns the value H_MSG_TRUE if the input is not empty.
The behavior in case of empty input (no input regions available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception handling is
raised.
Parallelization Information
smallest_rectangle2 is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, regiongrowing, connection, runlength_features
Possible Successors
disp_rectangle2, gen_rectangle2
Alternatives
elliptic_axis, smallest_rectangle1
See also
smallest_circle, set_shape
Module
Foundation
The operator spatial_relation selects regions located by Percent percent “left”, “right”, “above” or
“below” other regions. Regions1 and Regions2 contain the regions to be compared. Regions1 can have
three states:
• Regions1 is empty:
In this case all regions in Regions2 are checked pairwise for neighborhood.
• Regions1 consists of one region:
The regions of Regions1 are compared to all regions in Regions2.
• Regions1 consists of the same number of regions as Regions2:
Regions1 and Regions2 are checked for a neighboring relation.
The percentage Percent is interpreted in such a way that at least Percent percent of the area of the second
region has to be located left/right of or above/below the region margins of the first region. The indices of
the regions that fulfill at least one of these conditions are then located at the n-th position in the output parame-
ters RegionIndex1 and RegionIndex2. Additionally, the output parameters Relation1 and Relation2
contain at the n-th position the type of relation of the region pair (RegionIndex1[n], RegionIndex2[n]),
i.e., the region with index RegionIndex2[n] has the relations Relation1[n] and Relation2[n] with the region
with index RegionIndex1[n].
Possible values for Relation1 and Relation2 are ’left’, ’right’, ’above’, and ’below’.
In RegionIndex1 and RegionIndex2 the indices of the regions in the tuples of the input regions (Regions1
or Regions2), respectively, are entered as image identifiers. Access to chosen regions via the index can be
obtained by the operator copy_obj.
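The percentage criterion for the ’left’ relation can be sketched in plain C: at least Percent percent of the pixels of the second region must lie left of the first region's left margin. The plain column arrays are an assumption of this sketch, not HALCON's region representation:

```c
/* Check used by the 'left' relation: at least 'percent' percent of the
   pixels of region2 (columns in cols2) must lie strictly left of the
   left margin of region1 (columns in cols1). Returns 1 if fulfilled. */
int left_of_sketch(const long *cols1, long n1,
                   const long *cols2, long n2, long percent)
{
    long i, margin = cols1[0], count = 0;
    for (i = 1; i < n1; i++)            /* left margin of region1 */
        if (cols1[i] < margin) margin = cols1[i];
    for (i = 0; i < n2; i++)            /* area of region2 left of it */
        if (cols2[i] < margin) count++;
    return count * 100 >= percent * n2; /* at least 'percent' percent */
}
```

The ’right’, ’above’, and ’below’ relations follow the same pattern with the respective margins and coordinates.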
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Starting regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
. Percent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Percentage of the area of the comparative region which must be located left/right or above/below the region
margins of the starting region.
Default Value : 50
Suggested values : Percent ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Typical range of values : 0 ≤ Percent ≤ 100 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (0 ≤ Percent) ∧ (Percent ≤ 100)
. RegionIndex1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. RegionIndex2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
. Relation1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Horizontal pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
. Relation2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Vertical pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
Result
The operator spatial_relation returns the value H_MSG_TRUE if Regions2 is not empty and Percent
is correctly chosen. The behavior in case of empty parameter Regions2 (no input regions available) is set via
the operator set_system(’no_object_result’,<Result>). The behavior in case of empty region (the
region is the empty set) is set via set_system(’empty_region_result’,<Result>). If necessary an
exception handling is raised.
Parallelization Information
spatial_relation is reentrant and processed without parallelization.
Possible Predecessors
threshold, regiongrowing, connection
Alternatives
area_center, intersection
See also
select_region_spatial, find_neighbors, copy_obj, obj_to_integer
Module
Foundation
12.4 Geometric-Transformations
T_affine_trans_region ( const Hobject Region,
Hobject *RegionAffineTrans, const Htuple HomMat2D,
const Htuple Interpolate )
As a consequence, you might get unexpected results when creating affine transformations based on coordinates that
are derived from the region, e.g., by operators like area_center. For example, if you use this operator to
calculate the center of gravity of a rotationally symmetric region and then rotate the region around this point using
hom_mat2d_rotate, the resulting region will not lie on the original one. In such a case, you can compensate
this effect by applying the following translations to HomMat2D before using it in affine_trans_region:
hom_mat2d_translate(HomMat2D, 0.5, 0.5, HomMat2DTmp)
hom_mat2d_translate_local(HomMat2DTmp, -0.5, -0.5, HomMat2DAdapted)
affine_trans_region(Region, RegionAffineTrans, HomMat2DAdapted, ’false’)
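The effect of the two compensating translations can be reproduced with plain 2×3 affine matrices. The following pure-C sketch composes them in the same order as the calls above; the row-major [row; column] layout is an assumption of this sketch, not a statement about the internal storage of HomMat2D:

```c
/* 2x3 affine matrix:
   [ m0 m1 m2 ]   row'    = m0*row + m1*col + m2
   [ m3 m4 m5 ]   column' = m3*row + m4*col + m5 */
typedef struct { double m[6]; } Mat2D;

/* Composition a*b: apply b first, then a. */
static Mat2D mat_mul(Mat2D a, Mat2D b)
{
    Mat2D r;
    r.m[0] = a.m[0]*b.m[0] + a.m[1]*b.m[3];
    r.m[1] = a.m[0]*b.m[1] + a.m[1]*b.m[4];
    r.m[2] = a.m[0]*b.m[2] + a.m[1]*b.m[5] + a.m[2];
    r.m[3] = a.m[3]*b.m[0] + a.m[4]*b.m[3];
    r.m[4] = a.m[3]*b.m[1] + a.m[4]*b.m[4];
    r.m[5] = a.m[3]*b.m[2] + a.m[4]*b.m[5] + a.m[5];
    return r;
}

static Mat2D mat_translate(double dr, double dc)
{
    Mat2D t = {{1, 0, dr, 0, 1, dc}};
    return t;
}

/* hom_mat2d_translate(H,0.5,0.5,.) prepends a half-pixel shift,
   hom_mat2d_translate_local(.,-0.5,-0.5,.) appends the inverse shift:
   Adapted = T(0.5,0.5) * H * T(-0.5,-0.5). */
Mat2D adapt_for_pixel_centers(Mat2D h)
{
    return mat_mul(mat_translate(0.5, 0.5),
                   mat_mul(h, mat_translate(-0.5, -0.5)));
}
```

For the identity matrix the two shifts cancel; for a rotation they move the effective rotation center by half a pixel, which is exactly the compensation described above.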
Parameter
read_image(&Image,"monkey");
threshold(Image,&Seg,128.0,255.0);
mirror_region(Seg,&Mirror,"row",512);
disp_region(Mirror,WindowHandle);
Parallelization Information
mirror_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
affine_trans_region
See also
zoom_region
Module
Foundation
Translate a region.
move_region translates the input regions by the vector given by (Row, Column). If necessary, the resulting
regions are clipped with the current image format.
Parameter
The radii and angles are inclusive, which means that the first row of the virtual target image contains the circle
with radius RadiusStart and the last row contains the circle with radius RadiusEnd. For complete circles,
where the difference between AngleStart and AngleEnd equals 2π (360 degrees), this also means that the
first column of the target image will be the same as the last.
To avoid this, do not make this difference 2π, but 2π(1 − 1/Width) instead.
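The reason becomes clear when the angle of each target column is written out. A small pure-C sketch of the column-to-angle mapping (the linear interpolation over Width − 1 steps is an assumption for illustration):

```c
/* Angle sampled by column x of the virtual target image. With an angular
   extent of exactly 2*pi the last column repeats the angle of the first;
   shrinking the extent to 2*pi*(1 - 1/width) makes all columns sample
   distinct angles. */
double column_angle(double angle_start, double angle_extent,
                    long x, long width)
{
    return angle_start + angle_extent * (double)x / (double)(width - 1);
}
```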
The parameter Interpolation is used to select the interpolation method ’bilinear’ or ’nearest_neighbor’.
Setting Interpolation to ’bilinear’ leads to smoother region boundaries, especially if regions are enlarged.
However, the runtime increases significantly.
If more than one region is passed in Region, their polar transformations are computed individually and stored
as a tuple in PolarTransRegion. Please note that the indices of an input region and its transformation only
correspond if the system variable ’store_empty_regions’ is set to ’true’ (see set_system). Otherwise empty
output regions are discarded and the length of the input tuple Region is most likely not equal to the length of the
output tuple PolarTransRegion.
Attention
If Width or Height are chosen greater than the dimensions of the current image, the system variable
’clip_region’ should be set to ’false’ (see set_system). Otherwise, an output region that does not lie within the
dimensions of the current image can produce an error message.
Parameter
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Module
Foundation
Column = (x + x0) / 2
Row = (y + y0) / 2 .
If Row and Column are set to the origin, the result is the transposition commonly used in morphology. Hence
transpose_region is often used to reflect (transpose) a structuring element.
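Solving the midpoint equations above for the transposed coordinates gives the point reflection directly. A minimal pure-C sketch:

```c
/* Point reflection about the reference point (row_ref, col_ref), solved
   from Column = (x + x0)/2 and Row = (y + y0)/2 for (x0, y0). */
void transpose_point(long row_ref, long col_ref,
                     long y, long x, long *y_out, long *x_out)
{
    *y_out = 2 * row_ref - y;
    *x_out = 2 * col_ref - x;
}
```

With the reference point at the origin this reduces to (x0, y0) = (−x, −y), i.e., the usual transposition of a structuring element.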
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be reflected.
. Transposed (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .region(-array) ; Hobject *
Transposed region.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate of the reference point.
Default Value : 0
Suggested values : Row ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Row ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate of the reference point.
Default Value : 0
Suggested values : Column ∈ {0, 64, 128, 256, 512}
Typical range of values : 0 ≤ Column ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let F be the area of the input region. Then the runtime complexity for one region is O(√F).
Result
transpose_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty or no
input region can be set via:
• no region: set_system(’no_object_result’,<RegionResult>)
• empty region: set_system(’empty_region_result’,<RegionResult>)
Zoom a region.
zoom_region enlarges or reduces the regions given in Region in the x- and y-direction by the given scale
factors ScaleWidth and ScaleHeight.
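Per region point the scaling is a simple multiplication of the coordinates by the two factors. A pure-C sketch; the round-to-nearest behavior is an assumption of this sketch, not a statement about HALCON's exact discretization:

```c
/* Anisotropic scaling of one pixel coordinate: ScaleHeight acts on the
   row, ScaleWidth on the column (rounding to the nearest pixel). */
void zoom_point(double scale_width, double scale_height,
                long row, long col, long *row_out, long *col_out)
{
    *row_out = (long)(scale_height * (double)row + 0.5);
    *col_out = (long)(scale_width  * (double)col + 0.5);
}
```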
Parameter
12.5 Sets
complement ( const Hobject Region, Hobject *RegionComplement )
T_complement ( const Hobject Region, Hobject *RegionComplement )
The resulting region is defined as the input region (Region) with all points from Sub removed.
Attention
Empty regions are valid for both parameters. On output, empty regions may result. The value of the system flag
’store_empty_region’ determines the behavior in this case.
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions to be processed.
. Sub (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
The union of these regions is subtracted from Region.
. RegionDifference (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Resulting region.
Example
Complexity
Let N be the number of regions, F1 be their average area, and F2 be the total area of all regions in Sub. Then
the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
difference always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty in-
put region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling
is raised.
Parallelization Information
difference is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, symm_difference
Module
Foundation
Let N be the number of regions in Region1, F1 be their average area, and F2 be the total area of all regions in
Region2. Then the runtime complexity is O(F1 ∗ log(F1) + N ∗ (√F1 + √F2)).
Result
intersection always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can
be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling is
raised.
Parallelization Information
intersection is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
union1, union2, complement
Module
Foundation
Result
symm_difference always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
symm_difference is reentrant and processed without parallelization.
Possible Successors
select_shape, disp_region
See also
intersection, union1, union2, complement, difference
Module
Foundation
Complexity
Let F be the sum of all areas of the input regions. Then the runtime complexity is O(log(√F) ∗ √F).
Result
union1 always returns H_MSG_TRUE. The behavior in case of empty input (no regions given) can be set via
set_system(’no_object_result’,<Result>) and the behavior in case of an empty input region via
set_system(’empty_region_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
union1 is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
union2
See also
intersection, complement
Module
Foundation
12.6 Tests
test_equal_region ( const Hobject Regions1, const Hobject Regions2,
Hlong *IsEqual )
Parameter
. Regions1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Test regions.
. Regions2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Comparative regions.
Number of elements : Regions1 = Regions2
. IsEqual (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Boolean result value.
Complexity
If F is the area of a region, the runtime complexity is O(1) or O(√F) if the result is TRUE, and O(√F) if the
result is FALSE.
Result
The operator test_equal_region returns the value H_MSG_TRUE if the parameters are correct.
The behavior in case of empty input (no input objects available) is set via the operator set_system
(’no_object_result’,<Result>). If the number of objects differs, an exception is raised.
Parallelization Information
test_equal_region is reentrant and processed without parallelization.
Alternatives
intersection, complement, area_center
See also
test_equal_obj
Module
Foundation
12.7 Transformation
background_seg ( const Hobject Foreground, Hobject *BackgroundRegions )
T_background_seg ( const Hobject Foreground,
Hobject *BackgroundRegions )
Complexity
Let F be the area of the background, H and W be the height and width of the image, and N be the number of
resulting regions. Then the runtime complexity is O(H + √F ∗ √N).
Result
background_seg always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
background_seg is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
Alternatives
complement, connection
See also
threshold, hysteresis_threshold, skeleton, expand_region, set_system, sobel_amp,
edges_image, roberts, bandpass_image
Module
Foundation
clip_region clips the input regions to the rectangle given by the four control parameters. clip_region is
more efficient than calling intersection with a rectangle generated by gen_rectangle1.
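The efficiency advantage is easy to see on a run-length representation: each chord only needs to be tested against the row range and clamped to the column range. A pure-C sketch (the chord representation is an assumption for illustration):

```c
/* Clip one chord (row, cb..ce) of a run-length encoded region to the
   rectangle (row1,col1)-(row2,col2). Returns 1 and the clipped chord if
   something remains, 0 otherwise. */
int clip_chord(long row, long cb, long ce,
               long row1, long col1, long row2, long col2,
               long *cb_out, long *ce_out)
{
    if (row < row1 || row > row2) return 0;  /* chord outside row range  */
    if (cb < col1) cb = col1;                /* clamp both endpoints     */
    if (ce > col2) ce = col2;
    if (cb > ce) return 0;                   /* empty after clamping     */
    *cb_out = cb;
    *ce_out = ce;
    return 1;
}
```

A general intersection would have to merge two chord lists instead of performing this constant-time test per chord.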
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Region to be clipped.
. RegionClipped (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Clipped regions.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.y ; Hlong
Row coordinate of the upper left corner of the rectangle.
Default Value : 0
Suggested values : Row1 ∈ {0, 128, 200, 256}
Typical range of values : −∞ ≤ Row1 ≤ ∞ (lin)
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.origin.x ; Hlong
Column coordinate of the upper left corner of the rectangle.
Default Value : 0
Suggested values : Column1 ∈ {0, 128, 200, 256}
Typical range of values : −∞ ≤ Column1 ≤ ∞ (lin)
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle.corner.y ; Hlong
Row coordinate of the lower right corner of the rectangle.
Default Value : 256
Suggested values : Row2 ∈ {128, 200, 256, 512}
Typical range of values : 0 ≤ Row2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .rectangle.corner.x ; Hlong
Column coordinate of the lower right corner of the rectangle.
Default Value : 256
Suggested values : Column2 ∈ {128, 200, 256, 512}
Typical range of values : 0 ≤ Column2 ≤ 511 (lin)
Minimum Increment : 1
Recommended Increment : 10
Result
clip_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case
of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
clip_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
intersection, gen_rectangle1, clip_region_rel
Module
Foundation
clip_region_rel clips a region to a rectangle lying within the region. The size of the rectangle is determined
by the enclosing rectangle of the region, which is reduced by the values given in the four control parameters. All
four parameters must contain numbers larger or equal to zero, and determine by which amount the rectangle is
reduced at the top (Top), at the bottom (Bottom), at the left (Left), and at the right (Right). If all parameters
are set to zero, the region remains unchanged.
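The clipping rectangle itself is obtained by shrinking the enclosing rectangle by the four margins. A pure-C sketch of that computation:

```c
/* Reduce the enclosing rectangle (row1,col1)-(row2,col2) by the four
   margins Top/Bottom/Left/Right, all of which must be >= 0. Returns 1
   and the reduced rectangle if it is non-degenerate, 0 otherwise. */
int reduce_rectangle(long row1, long col1, long row2, long col2,
                     long top, long bottom, long left, long right,
                     long *r1, long *c1, long *r2, long *c2)
{
    if (top < 0 || bottom < 0 || left < 0 || right < 0) return 0;
    *r1 = row1 + top;
    *c1 = col1 + left;
    *r2 = row2 - bottom;
    *c2 = col2 - right;
    return (*r1 <= *r2) && (*c1 <= *c2);
}
```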
Parameter
read_image(&Image,"affe");
set_colored(WindowHandle,12);
threshold(Image,&Light,150.0,255.0);
count_obj(Light,&Number1);
printf("Number of regions after threshold = %ld\n",Number1);
disp_region(Light,WindowHandle);
connection(Light,&Many);
count_obj(Many,&Number2);
printf("Number of regions after connection = %ld\n",Number2);
disp_region(Many,WindowHandle);
Complexity
Let F be the area of the input region and N be the number of generated connected components. Then the runtime
complexity is O(√F ∗ √N).
Result
connection always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an empty in-
put region via set_system(’empty_region_result’,<Result>). If necessary, an exception handling
is raised.
Parallelization Information
connection is reentrant and processed without parallelization.
Possible Predecessors
auto_threshold, threshold, dyn_threshold, erosion1
Possible Successors
select_shape, select_gray, shape_trans, set_colored, dilation1, count_obj,
reduce_domain, add_channels
Alternatives
background_seg
See also
set_system, union1
Module
Foundation
Complexity
The runtime complexity is O(Width ∗ Height).
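The O(Width ∗ Height) bound corresponds to the two image sweeps of a chamfer distance transform as described in the Borgefors reference below. The following pure-C sketch of the classic 3-4 chamfer scheme is for illustration only and is not HALCON's implementation:

```c
#include <limits.h>

/* Two-pass 3-4 chamfer distance transform: chamfer distance of every
   pixel to the region (pixels marked 1 in 'region'). 'dist' must hold
   width*height ints. Orthogonal steps cost 3, diagonal steps cost 4. */
void chamfer34(const int *region, int *dist, int width, int height)
{
    int x, y, d;
    const int big = INT_MAX / 2;
    for (y = 0; y < height; y++)            /* initialization */
        for (x = 0; x < width; x++)
            dist[y*width + x] = region[y*width + x] ? 0 : big;
    for (y = 0; y < height; y++)            /* forward sweep */
        for (x = 0; x < width; x++) {
            d = dist[y*width + x];
            if (x > 0 && dist[y*width + x-1] + 3 < d)
                d = dist[y*width + x-1] + 3;
            if (y > 0 && dist[(y-1)*width + x] + 3 < d)
                d = dist[(y-1)*width + x] + 3;
            if (x > 0 && y > 0 && dist[(y-1)*width + x-1] + 4 < d)
                d = dist[(y-1)*width + x-1] + 4;
            if (x < width-1 && y > 0 && dist[(y-1)*width + x+1] + 4 < d)
                d = dist[(y-1)*width + x+1] + 4;
            dist[y*width + x] = d;
        }
    for (y = height - 1; y >= 0; y--)       /* backward sweep */
        for (x = width - 1; x >= 0; x--) {
            d = dist[y*width + x];
            if (x < width-1 && dist[y*width + x+1] + 3 < d)
                d = dist[y*width + x+1] + 3;
            if (y < height-1 && dist[(y+1)*width + x] + 3 < d)
                d = dist[(y+1)*width + x] + 3;
            if (x < width-1 && y < height-1 && dist[(y+1)*width + x+1] + 4 < d)
                d = dist[(y+1)*width + x+1] + 4;
            if (x > 0 && y < height-1 && dist[(y+1)*width + x-1] + 4 < d)
                d = dist[(y+1)*width + x-1] + 4;
            dist[y*width + x] = d;
        }
}
```

Each pixel is visited exactly twice, which yields the stated linear complexity in the number of pixels.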
Result
distance_transform returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
distance_transform is reentrant and processed without parallelization.
Possible Predecessors
threshold, dyn_threshold, regiongrowing
Possible Successors
threshold
See also
skeleton
References
P. Soille: “Morphological Image Analysis, Principles and Applications”; Springer Verlag Berlin Heidelberg New
York, 1999.
G. Borgefors: “Distance Transformations in Arbitrary Dimensions”; Computer Vision, Graphics, and Image Pro-
cessing, Vol. 27, pages 321–345, 1984.
P.E. Danielsson: “Euclidean Distance Mapping”; Computer Graphics and Image Processing, Vol. 14, pages 227–
248, 1980.
Module
Foundation
’image’ The input regions are expanded iteratively until they touch another region or the image border. In this
case, the image border is defined to be the rectangle ranging from (0,0) to (row_max,col_max). Here,
(row_max,col_max) corresponds to the lower right corner of the smallest surrounding rectangle of all input re-
gions (i.e., of all regions that are passed in Regions and ForbiddenArea). Because expand_region
processes all regions simultaneously, gaps between regions are distributed evenly to all regions. Overlapping
regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to the respective regions. Because the intersection with the original region is
computed after the shrinking operation gaps in the output regions may result, i.e., the segmentation is not
complete. This can be prevented by calling expand_region a second time with the complement of the
original regions as “forbidden area.”
Parameter
. Regions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Regions for which the gaps are to be closed, or which are to be separated.
. ForbiddenArea (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Regions in which no expansion takes place.
. RegionExpanded (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Expanded or separated regions.
. Iterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong / const char *
Number of iterations.
Default Value : "maximal"
Suggested values : Iterations ∈ {"maximal", 0, 1, 2, 3, 5, 7, 10, 15, 20, 30, 50, 70, 100, 200}
Typical range of values : 0 ≤ Iterations ≤ 1000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Expansion mode.
Default Value : "image"
List of values : Mode ∈ {"image", "region"}
Example
read_image(&Image,"fabrik");
threshold(Image,&Light,100.0,255.0);
disp_region(Light,WindowHandle);
connection(Light,&Seg);
expand_region(Seg,EMPTY_REGION,&Exp1,"maximal","image");
set_colored(WindowHandle,12);
set_draw(WindowHandle,"margin");
disp_region(Exp1,WindowHandle);
Result
expand_region always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an ex-
ception handling is raised.
Parallelization Information
expand_region is reentrant and processed without parallelization.
Possible Predecessors
pouring, threshold, dyn_threshold, regiongrowing
Alternatives
dilation1
See also
expand_gray, interjacent, skeleton
Module
Foundation
Parameter
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject
Input regions containing holes.
. RegionFillUp (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Regions without holes.
Result
fill_up returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
fill_up is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
fill_up_shape
See also
boundary
Module
Foundation
Example
read_image(&Image,"affe");
threshold(Image,&Seg,120.0,255.0);
fill_up_shape(Seg,&Filled,"area",0.0,200.0);
Result
fill_up_shape returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input
(no regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in
case of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
fill_up_shape is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
Alternatives
fill_up
See also
select_shape, connection, area_center
Module
Foundation
’medial_axis’ This mode is used for regions that do not touch or overlap. The operator will find separating lines
between the regions which partition the background evenly between the input regions. This corresponds to
the following calls:
complement(’full’,Region,Tmp)
skeleton(Tmp,Result)
’border’ If the input regions do not touch or overlap this mode is equivalent to boundary(Region,Result),
i.e., it replaces each region by its boundary. If regions are touching they are aggregated into one region. The
corresponding output region then contains the boundary of the aggregated region, as well as the one pixel
wide separating line between the original regions. This corresponds to the following calls:
boundary(Region,Tmp1,’inner’)
union1(Tmp1,Tmp2)
skeleton(Tmp2,Result)
’mixed’ In this mode the operator behaves like the mode ’medial_axis’ for non-overlapping regions. If regions
touch or overlap, again separating lines between the input regions are generated on output, but this time
including the “touching line” between regions, i.e., touching regions are separated by a line in the output
region. This corresponds to the following calls:
erosion1(Region,Mask,Tmp1,1)
union1(Tmp1,Tmp2)
complement(full,Tmp2,Tmp3)
skeleton(Tmp3,Result)
where Mask denotes the following “cross mask”:
×
× × ×
×
Parameter
read_image(&Image,"wald1_rot") ;
mean(Image,&Mean,31,31) ;
dyn_threshold(Mean,&Seg,20) ;
interjacent(Seg,&Graph,"medial_axis") ;
disp_region(Graph,WindowHandle) ;
Result
interjacent always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception
handling is raised.
Parallelization Information
interjacent is reentrant and processed without parallelization.
Possible Predecessors
threshold, connection, regiongrowing, pouring
Possible Successors
select_shape, disp_region
See also
expand_region, junctions_skeleton, boundary
Module
Foundation
junctions_skeleton detects junctions and end points in a skeleton (see skeleton). The junctions in
the input region Region are output as a region in JuncPoints, while the end points are output as a region in
EndPoints.
To obtain reasonable results with junctions_skeleton the input region Region must not contain lines
which are more than one pixel wide. Regions obtained by skeleton meet this condition, while regions obtained
by morph_skeleton do not meet this condition in general.
Parameter
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ).
Result
junctions_skeleton always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of
an empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
junctions_skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
skeleton
Possible Successors
area_center, connection, get_region_points, difference
See also
pruning, split_skeleton_region
Module
Foundation
The operator merge_regions_line_scan connects adjacent regions, which were segmented from adja-
cent images of height ImageHeight. This operator was especially designed to process regions that were
extracted from images grabbed by a line scan camera. CurrRegions contains the regions from the current image
and PrevRegions the regions from the previous one.
With the help of the parameter MergeBorder two cases can be distinguished: If the top (first) line of the current
image touches the bottom (last) line of the previous image, MergeBorder must be set to ’top’, otherwise set
MergeBorder to ’bottom’.
If the operator merge_regions_line_scan is used recursively, the parameter MaxImagesRegion deter-
mines the maximum number of images which are covered by a merged region. All older region parts are removed.
The operator merge_regions_line_scan returns two region arrays. PrevMergedRegions contains
all those regions from the previous input regions PrevRegions, which could not be merged with a current
region. CurrMergedRegions collects all current regions together with the merged parts from the previous
images. Merged regions will exceed the original image, because the previous regions are moved upward
(MergeBorder=’top’) or downward (MergeBorder=’bottom’) according to the image height. For this, the
system parameter ’clip_region’ (see also set_system) is internally set to ’false’.
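The shift applied to a previous region before merging is a pure row offset by the image height. A pure-C sketch; the sign convention is an assumption of this sketch:

```c
/* Row offset applied to a point of a previous region before merging:
   for MergeBorder 'top' the previous image sits above the current one,
   for 'bottom' below it. */
long shifted_row(long row, long image_height, int merge_border_is_top)
{
    return merge_border_is_top ? row - image_height : row + image_height;
}
```

This is why merged regions can extend beyond the current image and why ’clip_region’ must be disabled internally.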
Parameter
The positions where the input region is split are determined by the following approach: First, initial split positions
are determined such that they are equally distributed over the horizontal extent of the input region, i.e., such that all
the resulting parts would have the same width. For this, the number n of resulting parts is determined by dividing
the width of the input region by Distance and rounding the result to the closest integer value. The distance
between the initial split positions is now calculated by dividing the width of the input region by n. Note that the
distance between these initial split positions is typically not identical to Distance. Then, the final split positions
are determined in the neighborhood of the initial split positions such that the input region is split at positions where
it has the least vertical extent within this neighborhood. The maximum deviation of the final split position from
the initial split position is Distance*Percent*0.01.
The resulting regions are returned in Partitioned. Note that the input region is only partitioned if its width is
larger than 1.5 times Distance.
Parameter
HALCON 8.0.2
Parameter
Number = (Height ∗ Width) / 2
read_image(&Image,"affe");
mean_image(Image,&Mean,5,5);
dyn_threshold(Image,Mean,&Points,25.0,"light");
rank_region(Points,&Textur,15,15,30);
gen_circle(&Mask,10,10,3.0);
opening(Textur,Mask,&Seg);
Complexity
Let F be the area of the input region. Then the runtime complexity is O(F ∗ 8).
Result
rank_region returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case
of an empty input region via set_system(’empty_region_result’,<Result>). If necessary, an
exception handling is raised.
Parallelization Information
rank_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
threshold, connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape, disp_region
Alternatives
closing_rectangle1, expand_region
See also
rank_image, mean_image
Module
Foundation
’inner_center’ The point on the skeleton of the input region having the smallest distance to the center of gravity
of the input region.
Attention
If Type = ’outer_circle’ is selected it might happen that the resulting circular region does not completely cover
the input region. This is because internally the operators smallest_circle and gen_circle are used to
compute the outer circle. As described in the documentation of smallest_circle, the calculated radius can
be too small by up to 1/√2 − 0.5 pixels. Additionally, the circle that is generated by gen_circle is translated
by up to 0.5 pixels in both directions, i.e., by up to 1/√2 pixels. Consequently, when adding up both effects, the
original region might protrude beyond the returned circular region by at most 1 pixel.
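The worst-case bound can be verified numerically; the following plain-C check (illustrative, not part of HALCON) adds the two error terms from the paragraph above:

```c
#include <math.h>

/* Worst-case protrusion: the radius from smallest_circle can be too
 * small by 1/sqrt(2) - 0.5 pixels, and the circle from gen_circle can
 * be shifted by up to 1/sqrt(2) pixels; the sum is sqrt(2) - 0.5,
 * about 0.914, i.e., less than 1 pixel. */
double outer_circle_protrusion_bound(void)
{
    double radius_error = 1.0 / sqrt(2.0) - 0.5;  /* smallest_circle */
    double shift_error  = 1.0 / sqrt(2.0);        /* gen_circle */
    return radius_error + shift_error;
}
```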
Parameter
Complexity
Let F be the area of the enclosing rectangle of the input region. Then the runtime complexity is O(F ) (per region).
Result
skeleton returns H_MSG_TRUE if all parameters are correct. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>) and the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
skeleton is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
sobel_amp, edges_image, bandpass_image, threshold, hysteresis_threshold
Possible Successors
junctions_skeleton, pruning
Alternatives
morph_skeleton, thinning
See also
gray_skeleton, sobel_amp, edges_image, roberts, bandpass_image, threshold
References
Eckardt, U. “Verdünnung mit Perfekten Punkten”, Proceedings 10. DAGM-Symposium, IFB 180, Zurich, 1988
Module
Foundation
’character’ The regions will be treated like characters in a row and will be sorted according to their order in the
line: If two regions overlap horizontally, they will be sorted with respect to their column values, otherwise
they will be sorted with regard to their row values. To be able to sort a line correctly, all regions in the line
must overlap each other vertically. Furthermore, the regions in adjacent rows must not overlap.
’first_point’ The point with the lowest column value in the first row of the region.
’last_point’ The point with the highest column value in the last row of the region.
’upper_left’ Upper left corner of the surrounding rectangle.
’upper_right’ Upper right corner of the surrounding rectangle.
’lower_left’ Lower left corner of the surrounding rectangle.
’lower_right’ Lower right corner of the surrounding rectangle.
The parameter Order determines whether the sorting order is increasing or decreasing: using ’true’ the order will
be increasing, using ’false’ the order will be decreasing.
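The ’character’ rule can be sketched as a qsort-style comparator on bounding boxes. This is a hypothetical plain-C sketch (Box, same_line, character_cmp are illustrative names, not HALCON API); "overlap horizontally" is read here as: the row extents of the two regions intersect, i.e., they lie in the same text line.

```c
/* Illustrative sketch of the 'character' sorting rule. Regions are
 * reduced to their bounding boxes. */
typedef struct { int row1, col1, row2, col2; } Box;

/* The two regions lie in the same line when their row ranges
 * intersect (assumed reading of "overlap horizontally"). */
int same_line(Box a, Box b)
{
    return a.row1 <= b.row2 && b.row1 <= a.row2;
}

/* Comparator for increasing order (Order = 'true'): column-wise
 * within a line, row-wise across lines. */
int character_cmp(const void *pa, const void *pb)
{
    const Box *a = (const Box *)pa, *b = (const Box *)pb;
    if (same_line(*a, *b))
        return a->col1 - b->col1;
    return a->row1 - b->row1;
}
```

Passing this comparator to qsort over an array of boxes yields reading order, provided the regions satisfy the overlap conditions stated above.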
Parameter
Example
read_image(&Image,"fabrik");
edges_image (Image, &ImaAmp, &ImaDir, "lanser2", 0.5, "nms", 8, 16);
threshold (ImaAmp, &RawEdges, 8, 255);
skeleton (RawEdges, &Skeleton);
junctions_skeleton (Skeleton, &EndPoints, &JuncPoints);
difference (Skeleton, JuncPoints, &SkelWithoutJunc);
connection (SkelWithoutJunc, &SingleBranches);
select_shape (SingleBranches, &SelectedBranches, "area", "and", 16, 99999);
split_skeleton_lines (SelectedBranches, 3, &BeginRow, &BeginCol, &EndRow,
&EndCol);
Result
split_skeleton_lines always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
split_skeleton_lines is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
select_lines, partition_lines, disp_line
See also
split_skeleton_region, detect_edge_segments
Module
Foundation
Example
read_image(&Image,"fabrik");
edges_image (Image, &ImaAmp, &ImaDir, "lanser2", 0.5, "nms", 8, 16);
threshold (ImaAmp, &RawEdges, 8, 255);
skeleton (RawEdges, &Skeleton);
junctions_skeleton (Skeleton, &EndPoints, &JuncPoints);
difference (Skeleton, JuncPoints, &SkelWithoutJunc);
connection (SkelWithoutJunc, &SingleBranches);
select_shape (SingleBranches, &SelectedBranches, "area", "and", 16, 99999);
split_skeleton_region (SelectedBranches, Lines, 3);
Result
split_skeleton_region always returns the value H_MSG_TRUE. The behavior in case of empty input (no
regions given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an
empty input region via set_system(’empty_region_result’,<Result>), and the behavior in case
of an empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an
exception handling is raised.
Parallelization Information
split_skeleton_region is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
connection, select_shape, skeleton, junctions_skeleton, difference
Possible Successors
count_obj, select_shape, select_obj, area_center, elliptic_axis,
smallest_rectangle2, get_region_polygon, get_region_contour
See also
split_skeleton_lines, get_region_polygon, gen_polygons_xld
Module
Foundation
Segmentation
13.1 Classification
add_samples_image_class_gmm ( const Hobject Image,
const Hobject ClassRegions, Hlong GMMHandle, double Randomize )
Add training samples from an image to the training data of a Gaussian Mixture Model.
add_samples_image_class_gmm adds training samples from the Image to the Gaussian Mixture
Model (GMM) given by GMMHandle. add_samples_image_class_gmm is used to store the
training samples before a classifier to be used for the pixel classification of multichannel images with
classify_image_class_gmm is trained. add_samples_image_class_gmm works analogously
to add_sample_class_gmm. The Image must have a number of channels equal to NumDim, as spec-
ified with create_class_gmm. The training regions for the NumClasses pixel classes are passed in
ClassRegions. Hence, ClassRegions must be a tuple containing NumClasses regions. The order of
the regions in ClassRegions determines the class of the pixels. If there are no samples for a particular class
in Image an empty region must be passed at the position of the class in ClassRegions. With this mecha-
nism it is possible to use multiple images to add training samples for all relevant classes to the GMM by calling
add_samples_image_class_gmm multiple times with the different images and suitably chosen regions. The
regions in ClassRegions should contain representative training samples for the respective classes. Hence, they
need not cover the entire image. The regions in ClassRegions should not overlap each other, because then
the samples from the overlapping areas would be assigned to multiple classes in the training data, which may
reduce the classification performance. Image data of integer type can be particularly poorly suited for
modeling with a GMM. Randomize can be used to overcome this problem, as explained in
add_sample_class_gmm.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; Hlong
GMM handle.
. Randomize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Standard deviation of the Gaussian noise added to the training data.
Default Value : 0.0
Suggested values : Randomize ∈ {0.0, 1.5, 2.0}
Restriction : Randomize ≥ 0.0
Result
If the parameters are valid, the operator add_samples_image_class_gmm returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
add_samples_image_class_gmm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm
See also
classify_image_class_gmm, add_sample_class_gmm, clear_samples_class_gmm,
get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation
Add training samples from an image to the training data of a multilayer perceptron.
add_samples_image_class_mlp adds training samples from the image Image to the multilayer per-
ceptron (MLP) given by MLPHandle. add_samples_image_class_mlp is used to store the
training samples before a classifier to be used for the pixel classification of multichannel images with
classify_image_class_mlp is trained. add_samples_image_class_mlp works analogously to
add_sample_class_mlp. Because here the MLP is always used for classification, OutputFunction =
’softmax’ must be specified when the MLP is created with create_class_mlp. The image Image must have
a number of channels equal to NumInput, as specified with create_class_mlp. The training regions for
the NumOutput pixel classes are passed in ClassRegions. Hence, ClassRegions must be a tuple con-
taining NumOutput regions. The order of the regions in ClassRegions determines the class of the pixels. If
there are no samples for a particular class in Image an empty region must be passed at the position of the class
in ClassRegions. With this mechanism it is possible to use multiple images to add training samples for all
relevant classes to the MLP by calling add_samples_image_class_mlp multiple times with the different
images and suitably chosen regions. The regions in ClassRegions should contain representative training
samples for the respective classes. Hence, they need not cover the entire image. The regions in ClassRegions
should not overlap each other, because then the samples from the overlapping areas would be assigned to
multiple classes in the training data, which may slow the convergence of the training and reduce the
classification performance.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
Result
If the parameters are valid, the operator add_samples_image_class_mlp returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
add_samples_image_class_mlp is processed completely exclusively without parallelization.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
classify_image_class_mlp, add_sample_class_mlp, clear_samples_class_mlp,
get_sample_num_class_mlp, get_sample_class_mlp, add_samples_image_class_svm
Module
Foundation
Add training samples from an image to the training data of a support vector machine.
add_samples_image_class_svm adds training samples from the image Image to the support vec-
tor machine (SVM) given by SVMHandle. add_samples_image_class_svm is used to store
the training samples before training a classifier for the pixel classification of multichannel images
with classify_image_class_svm. add_samples_image_class_svm works analogously to
add_sample_class_svm.
The image Image must have a number of channels equal to NumFeatures, as specified with
create_class_svm. The training regions for the NumClasses pixel classes are passed in ClassRegions.
Hence, ClassRegions must be a tuple containing NumClasses regions. The order of the regions in
ClassRegions determines the class of the pixels. If there are no samples for a particular class in Image,
an empty region must be passed at the position of the class in ClassRegions. With this mechanism it
is possible to use multiple images to add training samples for all relevant classes to the SVM by calling
add_samples_image_class_svm multiple times with the different images and suitably chosen regions.
The regions in ClassRegions should contain representative training samples for the respective classes. Hence,
they need not cover the entire image. The regions in ClassRegions should not overlap each other, because
then the samples from the overlapping areas would be assigned to multiple classes in the training data, which
may slow the convergence of the training and reduce the classification performance.
A further application of this operator is the automatic novelty detection, where, e.g., anomalies in color or texture
can be detected. For this mode a training set that defines a sample region (e.g., skin regions for skin detection or
samples of the correct texture) is passed to the SVMHandle, which is created in the Mode ’novelty-detection’.
After training, regions that differ from the trained sample regions are detected (e.g., the rejection class for skin or
errors in texture).
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Training image.
. ClassRegions (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject
Regions of the classes to be trained.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Result
If the parameters are valid add_samples_image_class_svm returns the value H_MSG_TRUE. If neces-
sary, an exception handling is raised.
Parallelization Information
add_samples_image_class_svm is processed completely exclusively without parallelization.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm, write_samples_class_svm
Alternatives
read_samples_class_svm
See also
classify_image_class_svm, add_sample_class_svm, clear_samples_class_svm,
get_sample_num_class_svm, get_sample_class_svm, add_samples_image_class_mlp
Module
Foundation
(g_r, g_c) ∈ FeatureSpace
read_image(&Image,"combine");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image,WindowHandle);
fwrite_string("draw region of interest with the mouse");
fnew_line();
set_color(WindowHandle,"green");
draw_region(&Testreg,WindowHandle);
/* Texture transformation for 2-dimensional characteristic */
texture_laws(Image,&T1,"el",2,5);
mean_image(T1,&M1,21,21);
clear_obj(T1);
texture_laws(Image,&T2,"es",2,5);
mean_image(T2,&M2,21,21);
clear_obj(T2);
/* 2-dimensional histogram of the test region */
histo_2dim(Testreg,M1,M2,&Histo);
/* All points occurring at least once */
threshold(Histo,&FeatureSpace,1.0,100000.0);
set_draw(WindowHandle,"fill");
set_color(WindowHandle,"red");
disp_region(FeatureSpace,WindowHandle);
fwrite_string("Characteristics area in red");
fnew_line();
/* Segmentation */
class_2dim_sup(M1,M2,FeatureSpace,&RegionClass2Dim);
set_color(WindowHandle,"blue");
disp_region(RegionClass2Dim,WindowHandle);
fwrite_string("Result of classification in blue");
fnew_line();
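The decision rule applied in the example above can be sketched in plain C (illustrative helpers, not HALCON code): histo_2dim followed by threshold builds a 256 × 256 feature-space region, and a pixel is accepted iff its gray-value pair lies in that region.

```c
#include <string.h>

/* Build the feature space from training pixels: every gray-value pair
 * that occurs at least once is accepted (mirrors histo_2dim followed
 * by threshold(...,1.0,...) in the example). */
void build_feature_space(const unsigned char *img1,
                         const unsigned char *img2, int n,
                         unsigned char feature_space[256][256])
{
    memset(feature_space, 0, 256 * 256);
    for (int i = 0; i < n; ++i)
        feature_space[img1[i]][img2[i]] = 1;
}

/* Membership test: (g1, g2) in FeatureSpace. */
int classify_2dim_sup(unsigned char feature_space[256][256],
                      unsigned char g1, unsigned char g2)
{
    return feature_space[g1][g2] != 0;
}
```

The table lookup per pixel also explains the O(256² + A) complexity stated below: one pass over the 256 × 256 feature space plus one test per pixel.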
Complexity
Let A be the area of the input region. Then the runtime complexity is O(256² + A).
Result
class_2dim_sup returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_sup is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
histo_2dim, threshold, draw_region, dilation1, opening, shape_trans
Possible Successors
connection, select_shape, select_gray
Alternatives
class_ndim_norm, class_ndim_box, threshold
See also
histo_2dim
Module
Foundation
(see reduce_domain). After this, all pixels in the images that are at most Threshold pixels from the cluster
center in the maximum norm are determined. These pixels form one output region. Next, the pixels thus classified
are deleted from the histogram so that they are not taken into account for the next class. In this modified histogram,
again the maximum is extracted; it again serves as a cluster center. The above steps are repeated NumClasses
times; thus, NumClasses output regions result. Only pixels defined in both images are returned.
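One iteration of this clustering loop can be sketched in plain C (extract_one_class is an illustrative helper, not HALCON code): the 2-D histogram maximum serves as cluster center, all bins within Threshold in the maximum norm form the class, and those bins are then erased so the next class cannot claim the same pixels.

```c
#include <stdlib.h>

/* One clustering step of the class_2dim_unsup scheme described above
 * (illustrative sketch). */
void extract_one_class(int histo[256][256], int threshold,
                       int *center_r, int *center_c)
{
    int best = -1;
    for (int r = 0; r < 256; ++r)
        for (int c = 0; c < 256; ++c)
            if (histo[r][c] > best) {
                best = histo[r][c];
                *center_r = r;
                *center_c = c;
            }
    /* Erase the maximum-norm ball around the center so these bins are
     * not taken into account for the next class. */
    for (int r = 0; r < 256; ++r)
        for (int c = 0; c < 256; ++c)
            if (abs(r - *center_r) <= threshold &&
                abs(c - *center_c) <= threshold)
                histo[r][c] = 0;
}
```

Calling this NumClasses times on the histogram of the two input images yields the NumClasses cluster centers in order of decreasing peak height.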
Attention
Both input images must have the same size.
Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
First input image.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte
Second input image.
. Classes (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Classification result.
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Threshold (maximum distance to the cluster’s center).
Default Value : 15
Suggested values : Threshold ∈ {0, 2, 5, 8, 12, 17, 20, 30, 50, 70}
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Number of classes (cluster centers).
Default Value : 5
Suggested values : NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 40, 50}
Example
read_image(&ColorImage,"patras");
decompose3(ColorImage,&Red,&Green,&Blue);
class_2dim_unsup(Red,Green,&Seg,15,5);
set_colored(WindowHandle,12);
disp_region(Seg,WindowHandle);
Result
class_2dim_unsup returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_2dim_unsup is reentrant and processed without parallelization.
Possible Predecessors
decompose2, decompose3, median_image, anisotropic_diffusion, reduce_domain
Possible Successors
select_shape, select_gray, connection
Alternatives
threshold, histo_2dim, class_2dim_sup, class_ndim_norm, class_ndim_box
Module
Foundation
read_image(&Image,"meer");
disp_image(Image,WindowHandle);
set_color(WindowHandle,"green");
fwrite_string("Draw the foreground");
fnew_line();
draw_region(&Reg1,WindowHandle);
reduce_domain(Image,Reg1,&Foreground);
set_color(WindowHandle,"red");
fwrite_string("Draw background");
fnew_line();
draw_region(&Reg2,WindowHandle);
reduce_domain(Image,Reg2,&Background);
fwrite_string("Start to learn");
fnew_line();
create_class_box(&ClassifHandle);
learn_ndim_box(Foreground,Background,Image,ClassifHandle);
fwrite_string("start classification");
fnew_line();
class_ndim_box(Image,&Res,ClassifHandle);
set_draw(WindowHandle,"fill");
disp_region(Res,WindowHandle);
close_class_box(ClassifHandle);
Complexity
Let N be the number of hyper-cuboids and A be the area of the input region. Then the runtime complexity is
O(N · A).
Result
class_ndim_box returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_box is local and processed completely exclusively without parallelization.
Possible Predecessors
create_class_box, learn_class_box, median_image, compose2, compose3, compose4,
compose5, compose6, compose7
Alternatives
class_ndim_norm, class_2dim_sup, class_2dim_unsup
See also
descript_class_box
Module
Foundation
read_image(&Image,"meer");
open_window(0,0,-1,-1,0,"visible","",&WindowHandle);
disp_image(Image,WindowHandle);
fwrite_string("draw region of interest with the mouse");
fnew_line();
set_color(WindowHandle,"green");
draw_region(&Testreg,WindowHandle);
/* Texture transformation for 3-dimensional characteristic */
texture_laws(Image,&T1,"el",2,5);
mean_image(T1,&M1,21,21);
texture_laws(Image,&T2,"es",2,5);
mean_image(T2,&M2,21,21);
texture_laws(Image,&T3,"le",2,5);
mean_image(T3,&M3,21,21);
compose3(M1,M2,M3,&M);
/* Cluster for 3-dimensional characteristic area determine training area */
create_tuple(&Metric,1);
set_s(Metric,"euclid",0);
create_tuple(&Radius,1);
set_d(Radius,20.0,0);
create_tuple(&MinNumber,1);
set_i(MinNumber,5,0);
T_learn_ndim_norm(Testreg,EMPTY_REGION,M,Metric,Radius,MinNumber,
&Radius,&Center,&Quality);
/* Segmentation */
create_tuple(&RegionMode,1);
set_s(RegionMode,"multiple",0);
class_ndim_norm(M,&Regions,Metric,RegionMode,Radius,Center);
set_colored(WindowHandle,12);
disp_region(Regions,WindowHandle);
fwrite_string("Result of classification;");
fwrite_string("Each cluster in another color.");
fnew_line();
Complexity
Let N be the number of clusters and A be the area of the input region. Then the runtime complexity is O(N · A).
Result
class_ndim_norm returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
class_ndim_norm is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
learn_ndim_norm, compose2, compose3, compose4, compose5, compose6, compose7
Possible Successors
connection, select_shape, reduce_domain, select_gray
Alternatives
class_ndim_box, class_2dim_sup, class_2dim_unsup
Module
Foundation
Result
If the parameters are valid, the operator classify_image_class_gmm returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
classify_image_class_gmm is reentrant and processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
See also
add_samples_image_class_gmm, create_class_gmm
Module
Foundation
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Input image.
. ClassRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented classes.
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; Hlong
MLP handle.
. RejectionThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Threshold for the rejection of the classification.
Default Value : 0.5
Suggested values : RejectionThreshold ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Restriction : (RejectionThreshold ≥ 0.0) ∧ (RejectionThreshold ≤ 1.0)
Example (Syntax: HDevelop)
Result
If the parameters are valid, the operator classify_image_class_mlp returns the value H_MSG_TRUE. If
necessary an exception handling is raised.
Parallelization Information
classify_image_class_mlp is reentrant and processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_image_class_svm, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_mlp, create_class_mlp
Module
Foundation
must be trained with train_class_svm. Image must have NumFeatures channels, as specified with
create_class_svm. On output, ClassRegions contains NumClasses regions as the result of the classi-
fication.
To prevent the SVM from assigning pixels that lie outside the convex hull of the training data in the feature space
to one of the classes, it is useful in many cases to explicitly train a rejection class by adding samples for the
rejection class with add_samples_image_class_svm and by re-training the SVM with train_class_svm.
An alternative for explicitly defining a rejection class is to use an SVM in the mode ’novelty-detection’. Please
refer to the description in create_class_svm and add_samples_image_class_svm.
Parameter
. Image (input_object) . . . . . . (multichannel-)image ; Hobject : byte / cyclic / direction / int1 / int2 / uint2 /
int4 / real
Input image.
. ClassRegions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented classes.
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; Hlong
SVM handle.
Example (Syntax: HDevelop)
Result
If the parameters are valid the operator classify_image_class_svm returns the value H_MSG_TRUE. If
necessary, an exception handling is raised.
Parallelization Information
classify_image_class_svm is reentrant and processed without parallelization.
Possible Predecessors
train_class_svm, read_class_svm, reduce_class_svm
Alternatives
classify_image_class_mlp, class_ndim_box, class_ndim_norm, class_2dim_sup
See also
add_samples_image_class_svm, create_class_svm
Module
Foundation
13.2 Edges
Htuple SobelSize,MinAmplitude,MaxDistance,MinLength;
Htuple RowBegin,ColBegin,RowEnd,ColEnd;
create_tuple(&SobelSize,1);
set_i(SobelSize,5,0);
create_tuple(&MinAmplitude,1);
set_i(MinAmplitude,32,0);
create_tuple(&MaxDistance,1);
set_i(MaxDistance,3,0);
create_tuple(&MinLength,1);
set_i(MinLength,10,0);
T_detect_edge_segments(Image,SobelSize,MinAmplitude,MaxDistance,MinLength,
&RowBegin,&ColBegin,&RowEnd,&ColEnd);
Result
detect_edge_segments returns H_MSG_TRUE if all parameters are correct. If the input is empty, the
behavior can be set via set_system(’no_object_result’,<Result>). If necessary, an exception
handling is raised.
Parallelization Information
detect_edge_segments is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
sigma_image, median_image
Possible Successors
select_lines, partition_lines, select_lines_longest, line_position,
line_orientation
Alternatives
sobel_amp, threshold, skeleton
Module
Foundation
Parallelization Information
hysteresis_threshold is reentrant and automatically parallelized (on tuple level).
Alternatives
dyn_threshold, threshold, class_2dim_sup, fast_threshold
See also
edges_image, sobel_dir, background_seg
References
J. Canny, "‘Finding Edges and Lines in Images"’; Report, AI-TR-720, M.I.T. Artificial Intelligence Lab., Cam-
bridge, MA, 1983.
Module
Foundation
’hvnms’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values within
a search space of ± 5 pixels, either horizontally or vertically. Non-maximum points are removed from the
region; gray values remain unchanged.
’loc_max’ A point is labeled as a local maximum if its gray value is larger than or equal to the gray values of its
eight neighbors.
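The ’loc_max’ test can be sketched in plain C (is_loc_max is an illustrative helper, not HALCON code):

```c
/* 'loc_max' sketch: an interior pixel (r, c) of a row-major image of
 * width 'width' is a local maximum if its gray value is larger than
 * or equal to the gray values of all eight neighbors. */
int is_loc_max(const unsigned char *img, int width, int r, int c)
{
    unsigned char v = img[r * width + c];
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0)
                continue;
            if (img[(r + dr) * width + (c + dc)] > v)
                return 0;
        }
    return 1;
}
```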
Parameter
References
S.Lanser: "‘Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"’; Diplomarbeit; Technische Uni-
versität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
J.Canny: "‘Finding Edges and Lines in Images"’; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cam-
bridge, MA; 1983.
Module
Foundation
’nms’ Each point in the image is tested whether its gray value is a local maximum perpendicular to its direction.
In this mode only the two neighbors closest to the given direction are examined. If one of the two gray values
is greater than the gray value of the point to be tested, it is suppressed (i.e., removed from the input region;
the corresponding gray value remains unchanged).
’inms’ Like ’nms’. However, the two gray values for the test are obtained by interpolation from four adjacent
points.
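The ’nms’ test can be sketched in plain C. This is an illustrative sketch (survives_nms is a hypothetical helper, not HALCON code), with the direction quantized to 0/45/90/135 degrees; HALCON direction images are finer-grained, and the ’inms’ variant would interpolate the two comparison values instead of reading them directly.

```c
/* 'nms' sketch: a pixel survives if neither of its two neighbors
 * along the (quantized) given direction has a larger amplitude. */
int survives_nms(const unsigned char *amp, int width, int r, int c,
                 int dir_deg)
{
    static const int dr[4] = { 0, -1, -1, -1 };
    static const int dc[4] = { 1,  1,  0, -1 };
    int i = (dir_deg / 45) % 4;
    unsigned char v = amp[r * width + c];
    if (amp[(r + dr[i]) * width + (c + dc[i])] > v)
        return 0;                 /* suppressed: neighbor is greater */
    if (amp[(r - dr[i]) * width + (c - dc[i])] > v)
        return 0;
    return 1;
}
```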
Parameter
. ImgAmp (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2
Amplitude (gradient magnitude) image.
. ImgDir (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : direction
Direction image.
. ImageResult (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject * : byte / uint2
Image with thinned edge regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Select non-maximum-suppression or interpolating NMS.
Default Value : "nms"
List of values : Mode ∈ {"nms", "inms"}
Result
nonmax_suppression_dir returns H_MSG_TRUE if all parameters are correct. The behavior with respect
to the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
nonmax_suppression_dir is reentrant and automatically parallelized (on tuple level, channel level).
Possible Predecessors
edges_image, sobel_dir, frei_dir
Possible Successors
threshold, hysteresis_threshold
Alternatives
nonmax_suppression_amp
See also
skeleton
References
S. Lanser: "Detektion von Stufenkanten mittels rekursiver Filter nach Deriche"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Radig; 1991.
HALCON 8.0.2
936 CHAPTER 13. SEGMENTATION
J. Canny: "Finding Edges and Lines in Images"; Report, AI-TR-720; M.I.T. Artificial Intelligence Lab., Cambridge; 1983.
Module
Foundation
13.3 Regiongrowing
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray closes gaps between the input regions which resulted, for example, from the suppression of small regions in a
segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses result
from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in which the
gray values or colors differ from the gray values or colors of neighboring pixels on the region’s border by at
most Threshold (in each channel). For images of type ’cyclic’ (e.g., direction images), points with a gray
value difference of at least 255 − Threshold are also added to the output region.
The expansion takes place only in regions that are not designated as “forbidden” (parameter
ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’maximal’,
expand_gray iterates until convergence, i.e., until no more changes occur. By passing 0 for this parameter,
all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ in the
following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray processes all regions
simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value. Over-
lapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
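The per-pixel criterion for growing a region by a one-pixel strip, including the wrap-around case for ’cyclic’ images, can be sketched in plain C (the function name is illustrative, not HALCON API):

```c
#include <stdlib.h>

/* Sketch of the gray-value criterion expand_gray applies when adding a
 * border pixel: the candidate is added if its gray value differs from
 * the neighboring region pixel by at most Threshold. For 'cyclic'
 * images (e.g. direction images) a difference of at least
 * 255 - Threshold also qualifies, because the values wrap around. */
int expandable(int region_gray, int candidate_gray, int threshold, int cyclic)
{
    int diff = abs(region_gray - candidate_gray);
    if (diff <= threshold)
        return 1;
    if (cyclic && diff >= 255 - threshold)
        return 1;
    return 0;
}
```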
Example
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
regiongrowing(Image,&RawSegments,3,3,6.0,100);
set_colored(WindowHandle,12);
disp_region(RawSegments,WindowHandle);
expand_gray(RawSegments,Image,EMPTY_REGION,&Segments,"maximal","image",24);
clear_window(WindowHandle);
disp_region(Segments,WindowHandle)
Result
expand_gray always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions given)
can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty input
region via set_system(’empty_region_result’,<Result>), and the behavior in case of an empty
result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception
is raised.
Parallelization Information
expand_gray is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray_ref, expand_region
Module
Foundation
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
expand_gray_ref closes gaps between the input regions which resulted, for example, from the suppression of small regions
in a segmentation operator (mode ’image’), or separates overlapping regions (mode ’region’). Both uses
result from the expansion of regions. The operator works by adding a one pixel wide “strip” to a region, in
which the gray values or colors differ from a reference gray value or color by at most Threshold (in each
channel). For images of type ’cyclic’ (e.g., direction images), points with a gray value difference of at least
255 − Threshold are also added to the output region.
The expansion takes place only in regions that are not designated as “forbidden” (parameter
ForbiddenArea). The number of iterations is determined by the parameter Iterations. By passing ’maximal’,
expand_gray_ref iterates until convergence, i.e., until no more changes occur. By passing 0 for this
parameter, all non-overlapping regions are returned. The two modes of operation (’image’ and ’region’) differ
in the following ways:
’image’ The input regions are expanded iteratively until they touch another region or the image border, or the
expansion stops because of too high gray value differences. Because expand_gray_ref processes all
regions simultaneously, gaps between regions are distributed evenly to all regions with a similar gray value.
Overlapping regions are split by distributing the area of overlap evenly to both regions.
’region’ No expansion of the input regions is performed. Instead, only overlapping regions are split by distributing
the area of overlap evenly to regions having a matching gray value or color.
Attention
Because regions are only expanded into areas having a matching gray value or color, usually gaps will remain
between the output regions, i.e., the segmentation is not complete.
Example
read_image(&Image,"fabrik");
disp_image(Image,WindowHandle);
regiongrowing(Image,&RawSegments,3,3,6.0,100);
set_colored(WindowHandle,12);
disp_region(RawSegments,WindowHandle);
T_intensity(RawSegments,Image,&Mean,&Deviation);
set_i(Thresh,24,0);
set_s(Iter,"maximal",0);
set_s(Mode,"image",0);
T_expand_gray_ref(RawSegments,Image,EMPTY_REGION,&Segments,Iter,Mode,
Mean,Thresh);
clear_window(WindowHandle);
disp_region(Segments,WindowHandle);
Result
expand_gray_ref always returns the value H_MSG_TRUE. The behavior in case of empty input (no regions
given) can be set via set_system(’no_object_result’,<Result>), the behavior in case of an empty
input region via set_system(’empty_region_result’,<Result>), and the behavior in case of an
empty result region via set_system(’store_empty_region’,<true/false>). If necessary, an exception is raised.
Parallelization Information
expand_gray_ref is reentrant and processed without parallelization.
Possible Predecessors
connection, regiongrowing, pouring, class_ndim_norm
Possible Successors
select_shape
See also
expand_gray, expand_region
Module
Foundation
read_image(&Image,"fabrik");
gauss_image(Image,&Gauss,5);
expand_line(Gauss,&Reg,100,"mean","row",5.0);
set_colored(WindowHandle,12);
disp_region(Reg,WindowHandle);
Parallelization Information
expand_line is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, anisotropic_diffusion,
median_image, affine_trans_image, rotate_image
Possible Successors
intersection, opening, closing
Alternatives
regiongrowing_mean, expand_gray, expand_gray_ref
Module
Foundation
For rectangles larger than one pixel, usually the images should be smoothed with a lowpass filter with a size of at
least Row × Column before calling regiongrowing (so that the gray values at the centers of the rectangles
are “representative” for the whole rectangle). If the image contains little noise and the rectangles are small, the
smoothing can be omitted in many cases.
The resulting regions are collections of rectangles of the chosen size Row × Column. Only regions containing at
least MinSize points are returned.
regiongrowing is a very fast operation and is thus suited for time-critical applications.
Attention
Column and Row are automatically converted to odd values if necessary.
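The merge test between neighboring raster points and the automatic rounding of Row and Column to odd values can be sketched in plain C (names are illustrative, not HALCON API):

```c
#include <math.h>

/* Sketch of the merge test regiongrowing applies between two
 * neighboring raster points (spaced Row x Column apart): they end up
 * in the same region if their gray values differ by at most
 * Tolerance. */
int same_region(double g1, double g2, double tolerance)
{
    return fabs(g1 - g2) <= tolerance;
}

/* Rounding a raster size up to the next odd value, as the operator
 * does automatically for Row and Column. */
int make_odd(int n)
{
    return (n % 2 == 0) ? n + 1 : n;
}
```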
Parameter
. Image (input_object) . . . . . . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / int4 / real
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Vertical distance between tested pixels (height of the raster).
Default Value : 3
Suggested values : Row ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Row ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Row ≥ 1) ∧ odd(Row)
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Horizontal distance between tested pixels (width of the raster).
Default Value : 3
Suggested values : Column ∈ {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21}
Typical range of values : 1 ≤ Column ≤ 99 (lin)
Minimum Increment : 2
Recommended Increment : 2
Restriction : (Column ≥ 1) ∧ odd(Column)
. Tolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Points with a gray value difference less than or equal to Tolerance are accumulated into the same object.
Default Value : 6.0
Suggested values : Tolerance ∈ {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 12.0, 14.0, 18.0, 25.0}
Typical range of values : 1.0 ≤ Tolerance ≤ 127.0 (lin)
Minimum Increment : 0.01
Recommended Increment : 1.0
Restriction : (0 ≤ Tolerance) ∧ (Tolerance < 127)
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
Minimum size of the output regions.
Default Value : 100
Suggested values : MinSize ∈ {1, 5, 10, 20, 50, 100, 200, 500, 1000}
Typical range of values : 1 ≤ MinSize
Minimum Increment : 1
Recommended Increment : 5
Restriction : MinSize ≥ 1
Example
read_image(&Image,"fabrik");
mean_image(Image,&Mean,Row,Column);
regiongrowing(Mean,&Result,Row,Column,6,100);
Complexity
Let N be the number of regions found and M the number of points in one of these regions. Then the runtime
complexity is O(N ∗ M ∗ log(M)).
Result
regiongrowing returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
regiongrowing is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, mean_image, gauss_image, smooth_image, median_image,
anisotropic_diffusion
Possible Successors
select_shape, reduce_domain, select_gray
Alternatives
regiongrowing_n, regiongrowing_mean, label_to_region
Module
Foundation
’gray-max-diff’: Difference of the maximum gray values
a = max{|gA|}, b = max{|gB|}
MinT ≤ |a − b| ≤ MaxT
’gray-max-ratio’: Ratio of the maximum gray values
a = max{|gA|}, b = max{|gB|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’gray-min-diff’: Difference of the minimum gray values
a = min{|gA|}, b = min{|gB|}
MinT ≤ |a − b| ≤ MaxT
’gray-min-ratio’: Ratio of the minimum gray values
a = min{|gA|}, b = min{|gB|}
MinT ≤ min(a/b, b/a) ≤ MaxT
’variance-diff’: Difference of the variances over all gray values (channels)
MinT ≤ Var(gB)/Var(gA) ≤ MaxT
’mean-abs-diff’: Difference of the sum of absolute values over all gray values (channels)
a = Σd,k,k<d |gA(d) − gA(k)|
b = Σd,k,k<d |gB(d) − gB(k)|
MinT ≤ |a − b| / (number of summands) ≤ MaxT
’mean-abs-ratio’: Ratio of the sum of absolute values over all gray values (channels)
a = Σd,k,k<d |gA(d) − gA(k)|
b = Σd,k,k<d |gB(d) − gB(k)|
MinT ≤ min(a/b, b/a) ≤ MaxT
’max-abs-diff’: Difference of the maximum distance of the components
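The diff/ratio pattern shared by these merge criteria can be sketched in plain C, using the ’gray-max-diff’ and ’gray-max-ratio’ forms as examples (function names are illustrative, not HALCON API):

```c
#include <math.h>

/* Sketch of the 'gray-max-diff' test: a and b are the maximum absolute
 * gray values of the two regions; they are merged if the difference
 * lies in [MinT, MaxT]. */
int test_diff(double a, double b, double min_t, double max_t)
{
    double d = fabs(a - b);
    return (min_t <= d) && (d <= max_t);
}

/* Sketch of the 'gray-max-ratio' test: the smaller of the two ratios
 * a/b and b/a must lie in [MinT, MaxT]. */
int test_ratio(double a, double b, double min_t, double max_t)
{
    double r = (a / b < b / a) ? a / b : b / a;  /* min(a/b, b/a) */
    return (min_t <= r) && (r <= max_t);
}
```

The min/ratio form keeps the test symmetric in the two regions, so the result does not depend on which region is called A and which B.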
13.4 Threshold
auto_threshold ( const Hobject Image, Hobject *Regions, double Sigma )
T_auto_threshold ( const Hobject Image, Hobject *Regions,
const Htuple Sigma )
read_image(&Image,"fabrik");
median_image(Image,&Median,"circle",3,"mirrored");
auto_threshold(Median,&Seg,2.0);
connection(Seg,&Connected);
set_colored(WindowHandle,12);
disp_obj(Connected,WindowHandle);
Parallelization Information
auto_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, char_threshold
See also
gray_histo, gray_histo_abs, histo_to_thresh, smooth_funct_1d_gauss, threshold
Module
Foundation
read_image(&Image,"letters");
bin_threshold(Image,&Seg);
connection(Seg,&Connected);
set_shape(WindowHandle,"rectangle1");
set_colored(WindowHandle,6);
disp_region(Connected,WindowHandle);
Parallelization Information
bin_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
auto_threshold, char_threshold
See also
gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation
For example, if you choose Percent = 95 the operator locates the gray value whose frequency is at most 5
percent of the maximum frequency. Because char_threshold assumes that the characters are darker than the
background, the threshold is searched for “to the left” of the maximum.
In comparison to bin_threshold, this operator should be used if there is no clear minimum between the
histogram peaks corresponding to the characters and the background, respectively, or if there is no peak corre-
sponding to the characters at all. This may happen, e.g., if the image contains only few characters or in the case of
a non-uniform illumination.
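The search described above — walking left from the histogram maximum until the frequency drops to at most the given fraction of the peak — can be sketched in plain C (names and the exact tie-breaking are illustrative assumptions, not HALCON API):

```c
/* Illustrative sketch of the char_threshold search: starting at the
 * histogram maximum (the background peak) and walking left (towards
 * darker gray values), return the first gray value whose frequency is
 * at most 'fraction' of the peak frequency (e.g. fraction = 0.05 for
 * Percent = 95). */
int find_char_threshold(const int histo[256], double fraction)
{
    int peak = 0;
    int g;
    for (g = 1; g < 256; g++)           /* locate the background peak */
        if (histo[g] > histo[peak])
            peak = g;
    for (g = peak; g >= 0; g--)         /* walk left from the peak */
        if (histo[g] <= fraction * histo[peak])
            return g;
    return 0;
}

/* Self-test on a synthetic bimodal histogram: broad background plateau
 * of 400 over [150,249] with a peak of 1000 at 200, faint character
 * gray values of 30 around [40,60]. */
int demo(void)
{
    int histo[256] = {0};
    int g;
    for (g = 150; g < 250; g++) histo[g] = 400;
    histo[200] = 1000;
    for (g = 40; g <= 60; g++) histo[g] = 30;
    return find_char_threshold(histo, 0.05);
}
```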
Parameter
read_image(&Image,"letters");
char_threshold(Image,Image,&Seg,0.0,5.0,&Threshold);
connection(Seg,&Connected);
set_colored(WindowHandle,12);
set_shape(WindowHandle,"rectangle1");
disp_region(Connected,WindowHandle);
Parallelization Information
char_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
anisotropic_diffusion, median_image, illuminate
Possible Successors
connection, select_shape, select_gray
Alternatives
bin_threshold, auto_threshold, gray_histo, smooth_funct_1d_gauss, threshold
Module
Foundation
This test is performed for all points of the domain (region) of Image, intersected with the domain of the translated
Pattern. All points fulfilling the above condition are aggregated in the output region. The two images may be
of different size. Typically, Pattern is smaller than Image.
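A per-pixel comparison in the spirit of check_difference can be sketched in plain C. The exact formula below is an assumption for illustration (the manual's own condition is given above the parameter list), and the function name is not HALCON API:

```c
/* Hedged sketch of a check_difference-style pixel test: GrayOffset is
 * subtracted from the input gray value and the difference to the
 * pattern is checked against the tolerated interval
 * [DiffLowerBound, DiffUpperBound]. 'inside' selects pixels whose
 * difference lies inside the interval; otherwise the pixels outside
 * are selected. The exact formula is an assumption, not taken from
 * the manual text. */
int pixel_selected(int g_image, int g_pattern, int gray_offset,
                   int lower, int upper, int inside)
{
    int diff = (g_image - gray_offset) - g_pattern;
    int in = (lower <= diff) && (diff <= upper);
    return inside ? in : !in;
}
```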
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Input image.
. Pattern (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Comparison image.
. Selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Points in which the two images are similar/different.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode: return similar or different pixels.
Default Value : "diff_outside"
Suggested values : Mode ∈ {"diff_inside", "diff_outside"}
. DiffLowerBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Lower bound of the tolerated gray value difference.
Default Value : -5
Suggested values : DiffLowerBound ∈ {0, -1, -2, -3, -5, -7, -10, -12, -15, -17, -20, -25, -30}
Typical range of values : -255 ≤ DiffLowerBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffLowerBound) ∧ (DiffLowerBound ≤ 255)
. DiffUpperBound (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Upper bound of the tolerated gray value difference.
Default Value : 5
Suggested values : DiffUpperBound ∈ {0, 1, 2, 3, 5, 7, 10, 12, 15, 17, 20, 25, 30}
Typical range of values : -255 ≤ DiffUpperBound ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ DiffUpperBound) ∧ (DiffUpperBound ≤ 255)
. GrayOffset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Offset gray value subtracted from the input image.
Default Value : 0
Suggested values : GrayOffset ∈ {-30, -25, -20, -17, -15, -12, -10, -7, -5, -3, -2, -1, 0, 1, 2, 3, 5, 7, 10, 12,
15, 17, 20, 25, 30}
Typical range of values : -255 ≤ GrayOffset ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 2
Restriction : (-255 ≤ GrayOffset) ∧ (GrayOffset ≤ 255)
. AddRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Hlong
Row coordinate by which the comparison image is translated.
Default Value : 0
Suggested values : AddRow ∈ {-200, -100, -20, -10, 0, 10, 20, 100, 200}
Typical range of values : -32000 ≤ AddRow ≤ 32000 (lin)
Minimum Increment : 1
Recommended Increment : 1
. AddCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Hlong
Column coordinate by which the comparison image is translated.
Default Value : 0
Suggested values : AddCol ∈ {-200, -100, -20, -10, 0, 10, 20, 100, 200}
Typical range of values : -32000 ≤ AddCol ≤ 32000 (lin)
Minimum Increment : 1
Recommended Increment : 1
Complexity
Let A be the number of valid pixels. Then the runtime complexity is O(A).
Result
check_difference returns H_MSG_TRUE if all parameters are correct. The behavior with respect to
the input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
check_difference is reentrant and automatically parallelized (on tuple level).
Possible Successors
connection, select_shape, reduce_domain, select_gray, rank_region, dilation1,
opening
Alternatives
sub_image, dyn_threshold
Module
Foundation
Result
dual_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dual_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
min_max_gray, sobel_amp, binomial_filter, gauss_image, reduce_domain,
diff_of_gauss, sub_image, derivate_gauss, laplace_of_gauss, laplace,
expand_region
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
threshold, dyn_threshold, check_difference
See also
connection, select_shape, select_gray
Module
Foundation
g_o ≥ g_t + Offset
g_o ≤ g_t − Offset
Typically, the threshold images are smoothed versions of the original image (e.g., by applying mean_image,
binomial_filter, gauss_image, etc.). Then the effect of dyn_threshold is similar to applying
threshold to a highpass-filtered version of the original image (see highpass_image).
With dyn_threshold, contours of an object can be extracted, where the objects’ size (diameter) is determined
by the mask size of the lowpass filter and the amplitude of the objects’ edges:
The larger the mask size is chosen, the larger the found regions become. As a rule of thumb, the mask size should
be about twice the diameter of the objects to be extracted. It is important not to set the parameter Offset to zero
because in this case too many small regions will be found (noise). Values between 5 and 40 are a useful choice.
The larger Offset is chosen, the smaller the extracted regions become.
All points of the input image fulfilling the above condition are stored jointly in one region. If necessary, the
connected components can be obtained by calling connection.
Attention
If Offset is chosen from −1 to 1 usually a very noisy region is generated, requiring large storage. If Offset
is chosen too large (> 60, say) it may happen that no points fulfill the threshold condition (i.e., an empty region is
returned). If Offset is chosen too small (< -60, say) it may happen that all points fulfill the threshold condition
(i.e., a full region is returned).
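The two selection conditions given at the top of this section can be sketched in plain C, comparing a pixel of the original image against the corresponding pixel of the smoothed threshold image (function names are illustrative, not HALCON API):

```c
/* Sketch of the dyn_threshold conditions: a pixel of the original
 * image is selected when it exceeds the corresponding pixel of the
 * threshold image (e.g. a smoothed version of the image) by at least
 * Offset -- this extracts light objects. */
int dyn_selected_light(double g_orig, double g_thresh, double offset)
{
    return g_orig >= g_thresh + offset;
}

/* The symmetric condition extracts dark objects. */
int dyn_selected_dark(double g_orig, double g_thresh, double offset)
{
    return g_orig <= g_thresh - offset;
}
```

With offset near zero both tests degenerate to a plain comparison against the smoothed image, which is why very small Offset values produce the noisy regions described above.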
Parameter
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
dyn_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
dyn_threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
mean_image, smooth_image, binomial_filter, gauss_image
Possible Successors
connection, select_shape, reduce_domain, select_gray, rank_region, dilation1,
opening, erosion1
Alternatives
check_difference, threshold
See also
highpass_image, sub_image
Module
Foundation
MinGray ≤ g ≤ MaxGray .
To reduce processing time, the selection is done in two steps: First, all pixels along rows and columns with
distance MinSize are processed. In the next step, the neighborhood (of size MinSize × MinSize) of each
previously selected point is processed.
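The two-step strategy can be sketched on a single image row in plain C; the sketch counts how many pixels are actually tested, which is where the speedup over a full scan comes from (names are illustrative, not HALCON API):

```c
/* Sketch of the fast_threshold strategy on a 1-D profile: first only
 * every min_size-th pixel is tested against [min_gray, max_gray];
 * around each hit, the full neighborhood is then examined as well.
 * Returns the number of pixels tested. */
int count_tested(const unsigned char *row, int width, int min_size,
                 int min_gray, int max_gray)
{
    int tested = 0, x, n;
    for (x = 0; x < width; x += min_size) {
        tested++;                               /* coarse raster test */
        if (row[x] >= min_gray && row[x] <= max_gray) {
            int lo = x - min_size + 1, hi = x + min_size - 1;
            if (lo < 0) lo = 0;
            if (hi >= width) hi = width - 1;
            for (n = lo; n <= hi; n++)          /* fine neighborhood test */
                if (n != x) tested++;
        }
    }
    return tested;
}

/* A dark row produces no hits, so only the coarse raster is tested. */
int demo_dark(void)
{
    unsigned char row[20] = {0};
    return count_tested(row, 20, 5, 100, 200);
}

/* A bright row triggers the fine test around every raster point. */
int demo_bright(void)
{
    unsigned char row[20];
    int i;
    for (i = 0; i < 20; i++) row[i] = 150;
    return count_tested(row, 20, 5, 100, 200);
}
```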
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / uint2 / direction / cyclic
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented regions.
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Lower threshold for the gray values.
Default Value : 128
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MinGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Upper threshold for the gray values.
Default Value : 255.0
Suggested values : MaxGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Typical range of values : 0.0 ≤ MaxGray ≤ 255.0 (lin)
Minimum Increment : 1
Recommended Increment : 5.0
. MinSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Hlong
Minimum size of objects to be extracted.
Default Value : 20
Suggested values : MinSize ∈ {5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 100}
Typical range of values : 2 ≤ MinSize ≤ 200 (lin)
Minimum Increment : 1
Recommended Increment : 2
Complexity
Let A be the area of the output region and height the height of Image. Then the runtime complexity is O(A +
height/MinSize).
Result
fast_threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the
input images and output regions can be determined by setting the values of the flags ’no_object_result’,
’empty_region_result’, and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
fast_threshold is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
histo_to_thresh, min_max_gray, sobel_amp, binomial_filter, gauss_image,
reduce_domain, fill_interlace
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
threshold, gen_grid_region, dilation_rectangle1, dyn_threshold
See also
class_2dim_sup, hysteresis_threshold
Module
Foundation
/* Calculate thresholds from a 12 bit uint2 image and threshold the image. */
gray_histo_abs (Image, Image, 4, AbsoluteHisto)
AbsoluteHisto := AbsoluteHisto[0:1023]
histo_to_thresh (AbsoluteHisto, 16, MinThresh, MaxThresh)
MinThresh := MinThresh*4
MaxThresh := MaxThresh*4+3
threshold (Image, Region, MinThresh, MaxThresh)
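The threshold scaling in the example above (MinThresh*4, MaxThresh*4+3) maps thresholds found on the reduced 1024-bin histogram back to the original 12-bit range; this mapping can be sketched in plain C (function names are illustrative):

```c
/* Sketch of mapping thresholds from a reduced histogram back to the
 * original gray-value range: each reduced bin covers 'factor' original
 * gray values, so a lower threshold maps to the first and an upper
 * threshold to the last value of its bin. With factor = 4 this is
 * exactly the *4 and *4+3 arithmetic of the example above. */
int map_lower(int thresh, int factor)
{
    return thresh * factor;
}

int map_upper(int thresh, int factor)
{
    return thresh * factor + factor - 1;
}
```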
Parallelization Information
histo_to_thresh is reentrant and processed without parallelization.
Possible Predecessors
gray_histo
Possible Successors
threshold
See also
auto_threshold, bin_threshold, char_threshold
Module
Foundation
MinGray ≤ g ≤ MaxGray .
All points of an image fulfilling the condition are returned as one region. If more than one gray value interval is
passed (tuples for MinGray and MaxGray), one separate region is returned for each interval.
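The multi-interval behavior can be sketched in plain C by assigning each gray value the index of the interval it falls into; each index then corresponds to one separate output region (names are illustrative, not HALCON API):

```c
/* Sketch of the multi-interval behavior of threshold: return the index
 * of the first interval [min_gray[i], max_gray[i]] that contains g, or
 * -1 if no interval matches. Each interval index corresponds to one
 * separate output region. */
int interval_index(double g, const double *min_gray,
                   const double *max_gray, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (min_gray[i] <= g && g <= max_gray[i])
            return i;
    return -1;
}

/* Self-test with two intervals: dark [0,63] and bright [128,255]. */
int demo_interval(double g)
{
    double min_gray[2] = {0.0, 128.0};
    double max_gray[2] = {63.0, 255.0};
    return interval_index(g, min_gray, max_gray, 2);
}
```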
Parameter
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / vector_field
Input image.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Segmented region.
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Lower threshold for the gray values.
Default Value : 128.0
Suggested values : MinGray ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
read_image(&Image,"fabrik");
sobel_amp(Image,&EdgeAmp,"sum_abs",3);
threshold(EdgeAmp,&Seg,50.0,255.0);
skeleton(Seg,&Rand);
connection(Rand,&Lines);
select_shape(Lines,&Edges,"area","and",10.0,1000000.0);
Complexity
Let A be the area of the input region. Then the runtime complexity is O(A).
Result
threshold returns H_MSG_TRUE if all parameters are correct. The behavior with respect to the input images
and output regions can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’,
and ’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
threshold is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
histo_to_thresh, min_max_gray, sobel_amp, binomial_filter, gauss_image,
reduce_domain, fill_interlace
Possible Successors
connection, dilation1, erosion1, opening, closing, rank_region, shape_trans,
skeleton
Alternatives
class_2dim_sup, hysteresis_threshold, dyn_threshold, bin_threshold,
char_threshold, auto_threshold, dual_threshold
See also
zero_crossing, background_seg, regiongrowing
Module
Foundation
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Border (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; Hobject *
Extracted level crossings.
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Threshold for the level crossings.
Default Value : 128
Suggested values : Threshold ∈ {0.0, 10.0, 30.0, 64.0, 128.0, 200.0, 220.0, 255.0}
Result
threshold_sub_pix usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
threshold_sub_pix is reentrant and processed without parallelization.
Alternatives
threshold
See also
zero_crossing_sub_pix
Module
2D Metrology
read_image(&Image,"mreut");
derivate_gauss(Image,&Laplace,3,"laplace");
zero_crossing_sub_pix(Laplace,&ZeroCrossings);
disp_xld(ZeroCrossings,WindowHandle);
Result
zero_crossing_sub_pix usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
zero_crossing_sub_pix is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
laplace, laplace_of_gauss, diff_of_gauss, derivate_gauss
Alternatives
zero_crossing
See also
threshold_sub_pix
Module
2D Metrology
13.5 Topography
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum absolute value of the eigenvalues of the Hessian matrix.
Default Value : 5.0
Suggested values : Threshold ∈ {2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0}
Restriction : Threshold ≥ 0.0
. RowMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected minima.
. ColMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected minima.
. RowMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected maxima.
. ColMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected maxima.
. RowSaddle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of the detected saddle points.
. ColSaddle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of the detected saddle points.
Result
critical_points_sub_pix returns H_MSG_TRUE if all parameters are correct and no error oc-
curs during the execution. If the input is empty the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception handling is raised.
Parallelization Information
critical_points_sub_pix is reentrant and processed without parallelization.
Possible Successors
gen_cross_contour_xld, disp_cross
Alternatives
local_min_sub_pix, local_max_sub_pix, saddle_points_sub_pix
See also
local_min, local_max, plateaus, plateaus_center, lowlands, lowlands_center
Module
Foundation
Parameter
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. LocalMaxima (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Extracted local maxima as a region.
Number of elements : LocalMaxima = Image
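The local-maximum test on a 3×3 neighborhood can be sketched in plain C; the exact neighborhood handling of local_max (e.g., at plateaus and image borders) is not shown here, and the function name is illustrative:

```c
/* Sketch of a 3x3 local-maximum test: the center pixel belongs to the
 * output region if no neighbor is brighter. */
int is_local_max(unsigned char nbh[3][3])
{
    int r, c;
    for (r = 0; r < 3; r++)
        for (c = 0; c < 3; c++)
            if (nbh[r][c] > nbh[1][1])
                return 0;
    return 1;
}

/* Self-test: an isolated peak passes, a monotone slope does not.
 * Encodes both results in one value: peak*10 + slope. */
int demo_local_max(void)
{
    unsigned char peak[3][3]  = {{1, 2, 1}, {3, 9, 2}, {1, 1, 1}};
    unsigned char slope[3][3] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    return is_local_max(peak) * 10 + is_local_max(slope);
}
```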
Example
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
local_max(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_get_region_points(Maxima,&Row,&Col);
Parallelization Information
local_max is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection
Alternatives
nonmax_suppression_amp, plateaus, plateaus_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Sigma of the Gaussian. If Filter is ’facet’, Sigma may be 0.0 to avoid the smoothing of the input image.
Suggested values : Sigma ∈ {0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Sigma ≥ 0.0
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
local_min(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_get_region_points(Minima,&Row,&Col);
Parallelization Information
local_min is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
get_region_points, connection
Alternatives
gray_skeleton, lowlands, lowlands_center
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
lowlands(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_area_center(Minima,&Area,&Row,&Col);
Parallelization Information
lowlands is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands_center, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
lowlands_center(CornerResp,&Minima);
set_colored(WindowHandle,12);
disp_region(Minima,WindowHandle);
T_area_center(Minima,&Area,&Row,&Col);
Parallelization Information
lowlands_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
lowlands, gray_skeleton, local_min
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Extracted plateaus as regions (one region for each plateau).
Example
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
plateaus(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_area_center(Maxima,&Area,&Row,&Col);
Parallelization Information
plateaus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus_center, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
. Image (input_object) . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 / real
Input image.
. Plateaus (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Centers of gravity of the extracted plateaus as regions (one region for each plateau).
Example
read_image(&Image,"fabrik");
corner_response(Image,&CornerResp,5,0.04);
plateaus_center(CornerResp,&Maxima);
set_colored(WindowHandle,12);
disp_region(Maxima,WindowHandle);
T_area_center(Maxima,&Area,&Row,&Col);
Parallelization Information
plateaus_center is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image
Possible Successors
area_center, get_region_points, select_shape
Alternatives
plateaus, nonmax_suppression_amp, local_max
See also
monotony, topographic_sketch, corner_response, texture_laws
Module
Foundation
’all’ This is the normal mode of operation. All steps of the segmentation are performed. The regions are assigned
to maxima, and overlapping regions are split.
’maxima’ The segmentation only extracts the local maxima of the input image. No corresponding regions are
extracted.
’regions’ The segmentation extracts the local maxima of the input image and the corresponding regions, which
are uniquely determined. Areas that were assigned to more than one maximum are not split.
In order to prevent the algorithm from splitting a uniform background that is different from the rest of the image,
the parameters MinGray and MaxGray determine gray value thresholds for regions in the image that should
be regarded as background. All parts of the image having a gray value smaller than MinGray or larger than
MaxGray are disregarded for the extraction of the maxima as well as for the assignment of regions. For a complete
segmentation of the image, MinGray = 0 and MaxGray = 255 should be selected. MinGray < MaxGray must
be observed.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. Regions (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region-array ; Hobject *
Segmented regions.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Mode of operation.
Default Value : "all"
List of values : Mode ∈ {"all", "maxima", "regions"}
. MinGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
All gray values smaller than this threshold are disregarded.
Default Value : 0
Suggested values : MinGray ∈ {0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110}
Typical range of values : 0 ≤ MinGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : MinGray ≥ 0
. MaxGray (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; Hlong
All gray values larger than this threshold are disregarded.
Default Value : 255
Suggested values : MaxGray ∈ {100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240,
250, 255}
Typical range of values : 0 ≤ MaxGray ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 10
Restriction : (MaxGray ≤ 255) ∧ (MaxGray > MinGray)
Example
/* Segmentation of a 2D-histogram */
read_image(&Image,"monkey");
texture_laws(Image,&Texture,"el",2,5);
disp_image(Image,WindowHandle);
draw_region(&Region,draw_region);
reduce_domain(Texture,Region,&Testreg);
histo_2dim(Testreg,Texture,Region,&Histo);
pouring(Histo,&Seg,"all",0,255);
Complexity
Let N be the number of pixels in the input image and M be the number of found segments, where the enclosing
rectangle of segment i contains m_i pixels. Furthermore, let K_i be the number of chords in segment i. Then the
runtime complexity is
Result
pouring usually returns the value H_MSG_TRUE. If necessary, an exception is raised.
Parallelization Information
pouring is processed under mutual exclusion against itself and without parallelization.
Possible Predecessors
binomial_filter, gauss_image, smooth_image, mean_image
Alternatives
watersheds, local_max
See also
histo_2dim, expand_region, expand_gray, expand_gray_ref
Module
Foundation
. Image (input_object) . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / int1 / int2 / uint2 / int4 / real
Input image.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Method for the calculation of the partial derivatives.
Default Value : "facet"
List of values : Filter ∈ {"facet", "gauss"}
Example
read_image(&Cells,"meningg5");
gauss_image(Cells,&CellsGauss,9);
invert_image(CellsGauss,&CellsInvert);
watersheds(CellsInvert,&Bassins,&Watersheds);
set_colored(WindowHandle,12);
disp_region(Bassins,WindowHandle);
Result
watersheds always returns H_MSG_TRUE. The behavior with respect to the input images and output regions
can be determined by setting the values of the flags ’no_object_result’, ’empty_region_result’, and
’store_empty_region’ with set_system. If necessary, an exception is raised.
Parallelization Information
watersheds is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
binomial_filter, gauss_image, smooth_image, invert_image
Possible Successors
expand_region, select_shape, reduce_domain, opening
Alternatives
watersheds_threshold, pouring
References
L. Vincent, P. Soille: “Watersheds in Digital Space: An Efficient Algorithm Based on Immersion Simulations”;
IEEE Transactions on Pattern Analysis and Machine Intelligence; vol. 13, no. 6; pp. 583-598; 1991.
Module
Foundation
System
14.1 Database
count_relation ( const char *RelationName, Hlong *NumOfTuples )
T_count_relation ( const Htuple RelationName, Htuple *NumOfTuples )
’image’: Image matrices. One matrix may also be the component of more than one image (no redundant storage).
’region’: Regions (the full and the empty region are always available). One region may of course also be the
component of more than one image object (no redundant storage).
’XLD’: eXtended Line Description: contours, polygons, parallels, lines, etc. XLD data types do not have gray
values and are stored with subpixel accuracy.
’object’: Iconic objects. Composed of a region (called region) and optionally image matrices (called image).
’tuple’: In the compact mode, tuples of iconic objects are stored as a surrogate in this relation. Instead of working
with the individual object keys, only this tuple key is used. It depends on the host language whether the
objects are passed individually (Prolog and C++) or as tuples (C, Smalltalk, Lisp, OPS-5).
Certain database objects are already created by the operator reset_obj_db and therefore have to be available
at all times (the undefined gray value component, the objects ’full’ (FULL_REGION in HALCON/C) and
’empty’ (EMPTY_REGION in HALCON/C), as well as the empty and full regions contained therein). When calling
get_channel_info, the operator therefore also appears as the ’creator’ of the full and empty
region. This can be used, for example, to check the completeness of the clear_obj operation.
Parameter
. RelationName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Relation of interest of the HALCON database.
Default Value : "object"
List of values : RelationName ∈ {"image", "region", "XLD", "object", "tuple"}
. NumOfTuples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of tuples in the relation.
Example
reset_obj_db(512,512,3);
count_relation("image",&I1);
count_relation("region",&R1);
count_relation("XLD",&X1);
count_relation("object",&O1);
count_relation("tuple",&T1);
read_image(&X,"monkey");
count_relation("image",&I2);
count_relation("region",&R2);
count_relation("XLD",&X2);
count_relation("object",&O2);
count_relation("tuple",&T2);
/*
Result: I1 = 1 (undefined image)
R1 = 2 (full and empty region)
X1 = 0 (no XLD data)
O1 = 2 (full and empty objects)
T1 = 0 (always 0 in the normal mode) */
Result
If the parameter is correct, the operator count_relation returns the value H_MSG_TRUE. Otherwise an
exception is raised.
Parallelization Information
count_relation is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
clear_obj
Module
Foundation
Parameter
14.2 Error-Handling
Parameter
Herror err;
char message[MAX_STRING];
set_check("~give_error");
err = send_region(region,socket_id);
set_check("give_error");
if (err != H_MSG_TRUE) {
get_error_text((long)err,message);
fprintf(stderr,"my error message: %s\n",message);
exit(1);
}
Result
The operator get_error_text always returns the value H_MSG_TRUE.
Parallelization Information
get_error_text is reentrant and processed without parallelization.
Possible Predecessors
set_check
See also
set_check
Module
Foundation
Parallelization Information
get_spy is reentrant and processed without parallelization.
Possible Predecessors
reset_obj_db
See also
set_spy, query_spy
Module
Foundation
’color’: If this control mode is activated, only colors may be used that are supported by the display for the
currently active window. Otherwise an error message is displayed.
If the control mode is deactivated and a color does not exist, the nearest color is used (see also set_color,
set_gray, set_rgb).
’text’: If this control mode is activated, the coordinates are checked when setting the text cursor as well
as when displaying strings (write_string), to determine whether part of a character would lie outside
the window frame (which in principle is not forbidden by the system).
If the control mode is deactivated, the text is clipped at the window frame.
’data’: (For program development)
Checks the consistency of image objects (regions and gray-value components).
’interface’: If this control mode is activated, the interface between the host language and the HALCON proce-
dures is checked (e.g., typing and counting of the values).
’database’: This is a consistency check of the database (e.g., it checks whether an object that is to be deleted
does indeed exist).
’give_error’: Determines whether errors shall trigger exceptions or not. If this control mode is deactivated,
the application program must provide a suitable error treatment itself. Please note that errors which are
not reported usually lead to undefined output parameters, which may cause an unpredictable reaction of the
program. Details about how to handle exceptions in the different HALCON language interfaces can be found
in the HALCON Programmer’s Guide and the HDevelop User’s Guide.
’father’: If this control mode is activated when calling the operators open_window or open_textwindow,
HALCON allows only the ID of another HALCON window as the father window of the new window;
otherwise it also allows IDs of operating system windows as the father window.
This control mode is only relevant for windows of type ’X-Window’ and ’WIN32-Window’.
’region’: (For program development)
Checks the consistency of chords (this may lead to a notable speed reduction of routines).
’clear’: Normally, if a list of objects is to be deleted using clear_obj, an exception is raised in case
individual objects do not or no longer exist. If the ’clear’ mode is activated, such objects are ignored.
’memory’: (For program development)
Checks the memory blocks freed by the HALCON memory management for consistency and overwriting of
memory borders.
’all’: Activates all control modes.
’none’: Deactivates all control modes.
’default’: Default settings: [’give_error’,’database’]
Parameter
. Check (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Desired control mode.
Default Value : "default"
List of values : Check ∈ {"color", "text", "database", "data", "interface", "give_error", "father", "region",
"clear", "memory", "all", "none", "default"}
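As a usage sketch, ’give_error’ can be switched off around a single call whose failure should be handled locally, and then restored; the file name and the fallback are illustrative assumptions, not part of the operator description:

```c
/* Sketch only: temporarily disable exception raising for one call, */
/* then restore the default checks ('give_error' and 'database').   */
#include "HalconC.h"

void read_optional_image(Hobject *image)
{
    Herror err;

    set_check("~give_error");              /* errors return codes now */
    err = read_image(image, "optional");   /* "optional" is a         */
                                           /* hypothetical file name  */
    set_check("default");                  /* restore default checks  */

    if (err != H_MSG_TRUE)
        gen_empty_obj(image);              /* fall back to empty obj. */
}
```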
Result
The operator set_check returns the value H_MSG_TRUE, if the parameters are correct. Otherwise an exception
will be raised.
Parallelization Information
set_check is reentrant and processed without parallelization.
See also
get_check, set_color, set_rgb, set_hsi, write_string
Module
Foundation
The operator set_spy is the HALCON debugging tool. This tool allows flexible control of the input and
output data of HALCON operators, in graphical as well as in textual form. The data control is activated by using
set_spy(’mode’,’on’),
and deactivated by using
set_spy(’mode’,’off’).
The debugging tool can also be activated with the help of the environment variable HALCONSPY. Defining
this variable corresponds to calling set_spy with ’mode’ and ’on’.
The following control modes can be set (in any desired combination) with the help of Class/Value:
’operator’ When a routine is called, its name and the names of its parameters will be given (in TRIAS notation).
Value: ’on’ or ’off’
default: ’off’
’input_control’ When a routine is called, the names and values of the input control parameters will be given.
Value: ’on’ or ’off’
default: ’off’
’output_control’ When a routine is called, the names and values of the output control parameters are given.
Value: ’on’ or ’off’
default: ’off’
’parameter_values’ Additional information on ’input_control’ and ’output_control’: indicates how many values
per parameter shall be displayed at most (maximum tuple length of the output).
Value: tuple length (integer)
default: 4
’db’ Information concerning the 4 relations in the HALCON-database. This is especially valuable in looking for
forgotten clear_obj.
Value: ’on’ or ’off’
default: ’off’
’input_gray_window’ Any read access to the gray-value component of an (input) image object causes the
gray-value component to be shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_region_window’ Any read access to the region of an (input) iconic object causes this region to be
shown in the indicated window (Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’input_xld_window’ Any read access to an XLD object causes this XLD to be shown in the indicated window
(Window-ID; ’none’ deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’time’ Processing time of the operator
Value: ’on’ or ’off’
default: ’off’
’halt’ Determines whether there is a halt after every individual action (’multiple’) or only at the end of each
operator (’single’). The parameter is only effective if the halt has been activated by ’timeout’ or ’button_window’.
Value: ’single’ or ’multiple’
default: ’multiple’
’timeout’ After every output there will be a halt of the indicated number of seconds.
Value: seconds (real)
default: 0.0
’button_window’ Alternative to ’timeout’: after every output, spy waits until the mouse cursor points into
(’button_click’ = ’false’) or clicks into (’button_click’ = ’true’) the indicated window (Window-ID; ’none’
deactivates this control).
Value: Window-ID (integer) or ’none’
default: ’none’
’button_click’ Additional option for ’button_window’: determines whether or not a mouse click has to be waited
for after an output.
Value: ’on’ or ’off’
default: ’off’
’button_notify’ If ’button_notify’ is activated, spy generates a beep after every output. This is useful in
combination with ’button_window’.
Value: ’on’ or ’off’
default: ’off’
’log_file’ Hereby spy can divert the text output into a file that has been opened with open_file.
Value: a file handle (see open_file)
’error’ If ’error’ is activated and an internal error occurs, spy shows the internal procedures (file/line)
concerned.
Value: ’on’ or ’off’
default: ’off’
’internal’ If ’internal’ is activated, spy displays the internal procedures and their parameters (file/line) while
a HALCON operator is processed.
Value: ’on’ or ’off’
default: ’off’
Parameter
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Control mode
Default Value : "mode"
List of values : Class ∈ {"mode", "operator", "input_control", "output_control", "parameter_values",
"input_gray_window", "input_region_window", "input_xld_window", "db", "time", "halt", "timeout",
"button_window", "button_click", "button_notify", "log_file", "error", "internal"}
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char * / Hlong / double
State of the control mode to be set.
Default Value : "on"
Suggested values : Value ∈ {"on", "off", 1, 2, 3, 4, 5, 10, 50, 0.0, 1.0, 2.0, 5.0, 10.0}
Example
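A minimal usage sketch of the modes described above (the traced operator call and its values are illustrative):

```c
/* Sketch: trace operator names and input control values for one call. */
#include "HalconC.h"

void trace_one_call(Hobject image, Hobject *region)
{
    set_spy("mode", "on");             /* activate the debugging tool */
    set_spy("operator", "on");         /* print each operator name    */
    set_spy("input_control", "on");    /* print input control values  */

    threshold(image, region, 128.0, 255.0);   /* this call is traced  */

    set_spy("mode", "off");            /* deactivate tracing again    */
}
```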
Result
The operator set_spy returns the value H_MSG_TRUE if the parameters are correct. Otherwise an exception
is raised.
Parallelization Information
set_spy is processed completely exclusively without parallelization.
Possible Predecessors
reset_obj_db
See also
get_spy, query_spy
Module
Foundation
14.3 Information
T_get_chapter_info ( const Htuple Chapter, Htuple *Info )
Parallelization Information
get_keywords is processed completely exclusively without parallelization.
Possible Predecessors
get_chapter_info
Alternatives
get_operator_info
See also
get_operator_name, search_operator, get_param_info
Module
Foundation
The texts are taken from the files english.hlp, english.sta, english.key, english.num, and english.idx, which
HALCON searches for in the currently used directory or in the directory ’help_dir’ (respectively
’user_help_dir’) (see also get_system and set_system). By appending ’.latex’ to the slot name, the text
of slots containing textual information can be made available in LaTeX notation.
Parameter
See also
get_operator_info, get_param_names, get_param_num, get_param_types
Module
Foundation
The online texts are taken from the files english.hlp, english.sta, english.key, english.num, and english.idx,
which HALCON searches for in the currently used directory or the directory ’help_dir’ (see also
get_system and set_system).
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; (Htuple .) const char *
Name of the procedure on whose parameter more information is needed.
Default Value : "get_param_info"
. ParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Name of the parameter on which more information is needed.
Default Value : "Slot"
. Slot (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Desired information.
Default Value : "description"
List of values : Slot ∈ {"description", "type_list", "default_type", "sem_type", "default_value", "values",
"value_list", "valuemin", "valuemax", "valuefunction", "valuenumber", "assertion", "steprec", "stepmin",
"mixed_type", "multivalue", "multichannel"}
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Information (empty in case there is no information available).
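A minimal usage sketch (requires the HALCON help files to be installed; the operator and parameter names queried here are illustrative):

```c
/* Sketch: query the textual description of one operator parameter. */
#include "HalconC.h"
#include <stdio.h>

int main(void)
{
    char info[MAX_STRING];   /* buffer for the returned slot text */

    /* Ask for the 'description' slot of the 'Size' parameter of   */
    /* the operator gauss_image (illustrative choice of operator). */
    get_param_info("gauss_image", "Size", "description", info);
    printf("%s\n", info);
    return 0;
}
```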
Result
The operator get_param_info returns the value H_MSG_TRUE if the parameters are correct and the help files
are available. Otherwise an exception is raised.
Parallelization Information
get_param_info is processed completely exclusively without parallelization.
Possible Predecessors
get_keywords, search_operator
Alternatives
get_param_names, get_param_num, get_param_types
See also
query_param_info, get_operator_info, get_operator_name
Module
Foundation
Parallelization Information
get_param_num is reentrant and processed without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, get_operator_info
Possible Successors
get_param_types
Alternatives
get_operator_info, get_param_info
See also
get_param_names, get_param_types, get_operator_name
Module
Foundation
’integer’: an integer.
’integer tuple’: an integer or a tuple of integers.
’real’: a floating point number.
’real tuple’: a floating point number or a tuple of floating point numbers.
’string’: a string.
’string tuple’: a string or a tuple of strings.
’no_default’: individual value of which the type cannot be determined.
’no_default tuple’: individual value or tuple of values of which the type cannot be determined.
’default’: individual value of unknown type, whereby the system assumes it to be an ’integer’.
Parameter
. ProcName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . proc_name ; Htuple . const char *
Name of the procedure.
Default Value : "get_param_types"
. InpCtrlParType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Default type of the input control parameters.
. OutpCtrlParType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . char *
Default type of the output control parameters.
Result
The operator get_param_types returns the value H_MSG_TRUE if the indicated procedure name exists.
Otherwise an exception is raised.
Parallelization Information
get_param_types is reentrant and processed without parallelization.
Possible Predecessors
get_keywords, search_operator, get_operator_name, get_operator_info
Alternatives
get_param_info
See also
get_param_names, get_param_num, get_operator_info, get_operator_name
Module
Foundation
get_keywords(’’, <keywords>). The online texts are taken from the files english.hlp, english.sta, en-
glish.key, english.num and english.idx, which are searched by HALCON in the currently used directory or the
directory ’help_dir’ (see also get_system and set_system).
Parameter
14.4 Operating-System
count_seconds(&Start);
/* program segment to be measured */
count_seconds(&End);
printf("RunTime = %g\n",End-Start);
Result
The operator count_seconds always returns the value H_MSG_TRUE.
Parallelization Information
count_seconds is reentrant and processed without parallelization.
See also
set_system
Module
Foundation
14.5 Parallelization
check_par_hw_potential ( Hlong AllInpPars )
T_check_par_hw_potential ( const Htuple AllInpPars )
Parallelization Information
check_par_hw_potential is local and processed completely exclusively without parallelization.
Possible Successors
store_par_knowledge
See also
store_par_knowledge, load_par_knowledge
Module
Foundation
(Windows). This enables HALCON to use the knowledge again later on. With store_par_knowledge it is
possible to store this knowledge explicitly as an ASCII file. Here, FileName denotes the name of this file (incl.
path and file extension). The stored knowledge can be read again later using load_par_knowledge.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
Name of parallelization knowledge file.
Default Value : ""
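The train-then-store sequence can be sketched as follows; the argument passed to check_par_hw_potential and the file name are assumptions made for illustration:

```c
/* Sketch: measure the parallelization potential of this machine,   */
/* then store the acquired knowledge explicitly as an ASCII file so */
/* it can be reloaded later with load_par_knowledge.                */
#include "HalconC.h"

void train_and_store(void)
{
    check_par_hw_potential(0);             /* 0: assumed default     */
                                           /* value for AllInpPars   */
    store_par_knowledge("halcon_par.ini"); /* hypothetical file name */
}
```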
Result
store_par_knowledge returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
store_par_knowledge is local and processed completely exclusively without parallelization.
Possible Predecessors
check_par_hw_potential
Possible Successors
load_par_knowledge
See also
load_par_knowledge, check_par_hw_potential
Module
Foundation
14.6 Parameters
get_system ( const char *Query, Hlong *Information )
T_get_system ( const Htuple Query, Htuple *Information )
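Queries with a single numeric result can be made through the simple C interface shown above; string-valued queries (e.g. ’version’) require the tuple version T_get_system. A minimal sketch:

```c
/* Sketch: query a numeric system parameter via the simple interface. */
#include "HalconC.h"
#include <stdio.h>

int main(void)
{
    Hlong max_channels;

    get_system("max_channels", &max_channels);
    printf("maximum channels per image: %ld\n", (long)max_channels);
    return 0;
}
```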
Versions
’parallel_halcon’: The currently used variant of HALCON: Parallel HALCON (’true’) or Standard HALCON
(’false’)
’version’: HALCON version number, e.g.: 6.0
’last_update’: Date of creation of the HALCON library
’revision’: Revision number of the HALCON library, e.g.: 1
Upper Limits
’max_contour_length’: Maximum number of contour or polygon control points of a region.
’max_images’: Maximum total number of images.
’max_channels’: Maximum number of channels of an image.
’max_obj_per_par’: Maximum number of image objects that may be used per parameter during one call.
’max_inp_obj_par’: Maximum number of input parameters.
’max_outp_obj_par’: Maximum number of output parameters.
’max_inp_ctrl_par’: Maximum number of input control parameters.
’max_outp_ctrl_par’: Maximum number of output control parameters.
’max_window’: Maximum number of windows.
’max_window_types’: Maximum number of window systems.
’max_proc’: Maximum number of HALCON procedures (system defined + user defined).
Graphic
+’flush_graphic’: Determines whether the flush operation is called after each visualization operation
in HALCON. Unix operating systems flush the display buffer automatically, which makes this parameter
ineffective on those operating systems.
+’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values.
If the value is -1, the gray values are scaled automatically (default).
+’backing_store’: Storage of the window contents in case of overlaps.
+’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graph-
ics window is displayed.
+’window_name’: (no description available)
+’default_font’: Name of the font to set at opening the window.
+’update_lut’: (no description available)
+’x_package’: Number of bytes which are sent to the X server during each transfer of data.
+’num_gray_4’: Number of colors reserved under X Windows for the output of gray levels (disp_channel)
on a machine with 4 bitplanes (16 colors).
+’num_gray_6’: Number of colors reserved under X Windows for the output of gray levels (disp_channel)
on a machine with 6 bitplanes (64 colors).
+’num_gray_8’: Number of colors reserved under X Windows for the output of gray levels (disp_channel)
on a machine with 8 bitplanes (256 colors).
+’num_gray_percentage’: HALCON reserves a certain amount of the available colors under X Windows
for the representation of gray levels (disp_image). This is intended to interfere with other X applications
as little as possible. However, if HALCON does not succeed in reserving a minimum percentage of
’num_gray_percentage’ of the necessary colors on the X server, a certain amount of the lookup table
will be claimed for the HALCON gray levels regardless of the consequences for other applications.
This may result in undesired color shifts when switching between HALCON windows and windows
of other applications, or if (outside HALCON) a window dump is generated. The number of real
gray levels to be reserved depends on the number of available bitplanes on the output machine (see also
’num_gray_*’). Naturally, no colors are reserved on monochrome machines; the gray levels are
instead dithered when displayed. If gray-level displays are used, only different shades of gray are
applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with 8 bit
pseudo-color displays. For machines with displays of 16 bits or more (true-color machines), no colors
are reserved for the display of gray levels.
Note: Before the first window on a machine with x bitplanes is opened, num_gray_x indicates the
number of colors which have to be reserved for the display of gray levels; afterwards, it indicates
the number of colors which actually have been reserved.
+’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines
how many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit pseudo-
color display under X windows.
+’num_graphic_2’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 2 bitplanes (4 colors).
+’num_graphic_4’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 4 bitplanes (16 colors).
+’num_graphic_6’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 6 bitplanes (64 colors).
+’num_graphic_8’: Number of the HALCON graphic colors reserved under X Windows (for
disp_region etc.) on a machine with 8 bitplanes (256 colors).
Image Processing
+’neighborhood’: Using the 4 or 8 neighborhood.
+’init_new_image’: Initialization of images before applying grayvalue transformations.
+’no_object_result’: Behavior in case of an empty object list.
+’empty_region_result’: Reaction of procedures to input objects with empty regions, where such objects
are actually not useful (e.g., certain region features, segmentation, etc.). Possible return
values:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
HALCON 8.0.2
998 CHAPTER 14. SYSTEM
+’filename_encoding’: This parameter returns how file and directory names are interpreted that are passed
as string parameters to and from HALCON. With the value ’locale’ these names are used unaltered,
while with the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case,
HALCON tries to translate input parameters from UTF-8 to the locale encoding according to the current
system settings, and output parameters from locale to UTF-8 encoding.
Directories
+’image_dir’: Path which will be searched for image files after the default directory (see also:
read_image).
+’lut_dir’: Path for the default directory for color tables (see also: set_lut).
+’help_dir’: Path for the default help directory for the online help files:
{german,english}.{hlp,sta,idx,num,key}.
Other
+’do_low_error’: Flag indicating whether low level errors should be printed.
’hostids’: The hostids of the computer that can be used for licensing HALCON.
’num_proc’: Total number of the available HALCON procedures (’num_sys_proc’ + ’num_user_proc’).
’num_sys_proc’: Number of the system procedures (supported procedures).
’num_user_proc’: Number of the user defined procedures (see also ’Extension Packages’ manual).
’byte_order’: Byte order of the processor (’msb_first’ or ’lsb_first’).
’operating_system’: Name of the operating system of the computer on which the HALCON process is being
executed.
’operating_system_version’: Version number of the operating system of the computer on which the HAL-
CON process is being executed.
’halcon_arch’: Name of the HALCON architecture of the running HALCON process.
+’clock_mode’ Method used for measuring the time in count_seconds (’processor_time’,
’elapsed_time’, or ’performance_counter’).
+’max_connection’ Maximum number of regions returned by connection.
+’extern_alloc_funct’: Pointer to external function for memory allocation of result images.
’extern_free_funct’: Pointer to external function for memory deallocation of result images.
+’image_cache_capacity’: Upper limit in bytes of the image memory cache.
This parameter is only available in Standard HALCON but ignored in Parallel HALCON.
+’global_mem_cache’: Cache mode of global memory, i.e., memory that is visible beyond an operator. It
specifies whether unused global memory should be cached (’shared’) or freed (’idle’). Additionally,
Parallel HALCON offers the option to cache global memory for each thread separately (’exclusive’).
This mode can accelerate processing at the cost of memory consumption. However, Standard HALCON
treats the value ’exclusive’ like the value ’shared’.
+’temporary_mem_cache’: Flag for unused temporary memory of an operator. It specifies whether mem-
ory that is only used within an operator should be cached (’true’, default) or freed (’false’).
+’alloctmp_max_blocksize’: Maximum size of memory blocks to be allocated within temporary memory
management. (No effect, if ’alloctmp_max_blocksize’ == -1 or ’temporary_mem_cache’ == ’false’)
’temp_mem’: Amount of temporary memory used by the last operator, in bytes. The return value is only
defined if set_check(’memory’) was called before the operator to be measured. Additionally, in
Parallel HALCON the memory value is not specified when operators are called not sequentially but in
parallel in multiple threads.
’mmx_supported’: Flag, if the processor supports MMX operations (’true’) or not (’false’).
+’mmx_enable’: Flag, if MMX operations are used to accelerate selected image processing operators
(’true’) or not (’false’).
+’language’: Language used for error messages (’english’ or ’german’).
Parameter
. Query (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Desired system parameter.
Default Value : "width"
List of values : Query ∈ {"?", "alloctmp_max_blocksize", "backing_store", "border_shape_models",
"byte_order", "clip_region", "clock_mode", "current_runlength_number", "default_font", "do_low_error",
’neighborhood’: This parameter is used with all procedures which examine neighborhood relations:
connection, get_region_contour, get_region_chain, get_region_polygon,
get_region_thickness, boundary, paint_region, disp_region, fill_up,
contlength, shape_histo_all.
Value: 4 or 8
default: 8
’default_font’: Whenever a window is opened, a font is set for the text output, using ’default_font’.
If the preset font cannot be found, another font name can be set before opening the window.
Value: filename of the font
default: fixed
’update_lut’ Determines whether the HALCON color tables are adapted according to their environment or not.
Value: ’true’ or ’false’
default: ’false’
’image_dir’: Image files (e.g. read_image and read_sequence) will be looked for in the currently used
directory and in ’image_dir’ (if no absolute paths are indicated). More than one directory name can be
indicated (search paths), separated by semicolons (Windows) or colons (Unix). The path can also be determined
using the environment variable HALCONIMAGES.
Value: Name of the filepath
default: ’$HALCONROOT/images’ or ’%HALCONROOT%/images’
’lut_dir’: Color tables ( set_lut) which are realized as an ASCII file will be looked for in the currently used
directory and in ’lut_dir’ (if no absolute paths are indicated). If HALCONROOT is set, HALCON will search for
the color tables in the subdirectory ’lut’.
Value: Name of the filepath
default: ’$HALCONROOT/lut’ or ’%HALCONROOT%/lut’
’help_dir’: The online text files german or english .hlp, .sta, .key, .num, and .idx will be looked for in the
currently used directory or in ’help_dir’. This system parameter is necessary, for instance, when using the operators
get_operator_info and get_param_info. It can also be set by the environment variable
HALCONROOT before initializing HALCON. In this case the variable must indicate the directory above
the help directories (that is, the HALCON home directory), e.g.: ’/usr/local/halcon’
Value: Name of the filepath
default: ’$HALCONROOT/help’ or ’%HALCONROOT%/help’
’init_new_image’: Determines whether new images shall be set to 0 before applying filters. This is not necessary if
always the whole image is filtered or if the data of unfiltered image areas are unimportant.
Value: ’true’ or ’false’
default: ’true’
’no_object_result’: Determines how operations processing iconic objects shall react if the object tuple is empty
(= no objects). Available values for Value:
’true’: the error will be ignored
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’empty_region_result’: Controls the reaction of procedures to input objects with empty regions, where such
objects are actually not useful (e.g., certain region features, segmentation, etc.). Available values for
Value:
’true’: the error will be ignored if possible
’false’: the procedure returns FALSE
’fail’: the procedure returns FAIL
’void’: the procedure returns VOID
’exception’: an exception is raised
default: ’true’
’store_empty_region’: Quite a number of operations can lead to the creation of objects with an empty region (=
no image points) (e.g. intersection, threshold, etc.). This parameter determines whether an object
with an empty region will be returned as a result (’true’) or whether it will be ignored (’false’), that is, no result
will be returned.
Value: ’true’ or ’false’
default: ’true’
’pregenerate_shape_models’: This parameter determines whether the shape models created with
create_shape_model or create_scaled_shape_model are pregenerated completely or
not, if this is not explicitly specified in create_shape_model or create_scaled_shape_model.
This parameter mainly serves to achieve a switch between the two modes with minimal code changes.
Normally, only one line needs to be inserted or changed.
Value: ’true’ or ’false’
default: ’false’
’border_shape_models’: This parameter determines whether the shape models to be found with
find_shape_model, find_shape_models, find_scaled_shape_model, or
find_scaled_shape_models may lie partially outside the image (i.e., whether they may cross
the image border).
Value: ’true’ or ’false’
default: ’false’
’image_dpi’: This parameter determines the DPI resolution that is stored in image files written with
write_image in formats that support the storing of the DPI resolution.
default: 300
’backing_store’: Determines whether the window content will be refreshed in case of overlapping windows.
Some implementations of X Windows are faulty; in order to avoid these errors, the storing of contents
can be deactivated. It may also be advisable in some cases to deactivate this safety mechanism, if, e.g.,
performance or memory is what matters.
Value: true or false
default: true
’flush_graphic’: After each HALCON operation which creates graphic output, a flush operation is executed
in order to display the data immediately on screen. This is not necessary with all programs (e.g. if
everything is done with the help of the mouse). In this case ’flush_graphic’ can be set to ’false’ to improve the
runtime. Unix window managers flush the display buffer automatically, which makes this parameter
ineffective on those operating systems.
Value: ’true’ or ’false’
default: ’true’
’flush_file’: This parameter determines whether the output into a file (also to the terminal) shall be buffered or
not. If the output is to be buffered, in general the data will be displayed on the terminal only after entering
the operator fnew_line.
Value: ’true’ or ’false’
default: ’true’
’ocr_trainf_version’ This parameter determines the format that is used for writing an OCR training file. The
operators write_ocr_trainf, write_ocr_trainf_image, and concat_ocr_trainf write
training data in ASCII format for version number 1 or in binary format for version numbers 2 and 3. Version
number 3 stores images of type byte and uint2. The binary version is faster in reading and writing data and
stores training files more compactly. The ASCII format is compatible with older HALCON releases. Depending
on the file version, the OCR training files can be read by the following HALCON releases:
File Version HALCON Release
1 All
2 7.0.2 and higher
3 7.1 and higher
Value: 1, 2, 3
default: 3
’filename_encoding’: This parameter determines how file and directory names are interpreted that are passed as
string parameters to and from HALCON. With the value ’locale’ these names are used unaltered, while with
the value ’utf8’ these names are interpreted as being UTF-8 encoded. In the latter case, HALCON tries to
translate input parameters from UTF-8 to the locale encoding according to the current system settings, and
output parameters from locale to UTF-8 encoding.
Value: ’locale’ or ’utf8’
default: ’locale’
’x_package’: The output of image data via the network may cause errors owing to the heavy load on the computer
or on the network. In order to avoid this, the data are transmitted in small packages. If the computer is used
locally, these units can be enlarged at will. This can lead to a notably improved output performance.
Value: package size (in bytes)
default: 20480
’int2_bits’: Number of significant bits of int2 images. This number is used when scaling the gray values. If the
value is -1, the gray values will be scaled automatically (default).
Value: -1 or 9..16
default: -1
’num_gray_4’: Number of colors to be reserved under X Windows to allow the output of graylevels (disp_channel)
on a machine with 4 bitplanes (16 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 12
default: 8
’num_gray_6’: Number of colors to be reserved under X Windows to allow the output of graylevels (disp_channel)
on a machine with 6 bitplanes (64 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 62
default: 50
’num_gray_8’: Number of colors to be reserved under X Windows to allow the output of graylevels (disp_channel)
on a machine with 8 bitplanes (256 colors).
Attention! This value may only be changed before the first window has been opened on the machine.
Value: 2 - 254
default: 140
’num_gray_percentage’: Under X Windows HALCON reserves a part of the available colors for the representation
of gray values ( disp_channel); this is intended to interfere with other X applications as little as possible.
However, if HALCON does not succeed in reserving a minimum percentage of ’num_gray_percentage’ of
the necessary colors on the X server, a certain amount of the lookup table will be claimed for the HALCON
graylevels regardless of the consequences. This may result in undesired color shifts when switching between
HALCON windows and windows of other applications, or if (outside HALCON) a window dump is
generated. The number of real graylevels to be reserved depends on the number of available bitplanes on
the output machine (see also ’num_gray_*’). Naturally, no colors will be reserved on monochrome machines;
the graylevels will instead be dithered when displayed. If graylevel displays are used, only different shades
of gray will be applied (’black’, ’white’, ’gray’, etc.). ’num_gray_percentage’ is only used on machines with
8 bit pseudo-color displays. For machines with displays of 16 bits or more (true-color machines), no colors
are reserved for the display of gray levels in this case.
Note: This value may only be changed before the first window has been opened on the machine. Before
the first window is opened on a machine with x bitplanes, num_gray_x indicates the number of colors which
have to be reserved for the display of graylevels; afterwards, however, it indicates the number of colors
which actually have been reserved.
Value: 0 - 100
default: 30
’num_graphic_percentage’: Similar to ’num_gray_percentage’, ’num_graphic_percentage’ determines how
many graphics colors (for use with set_color) should be reserved in the LUT on an 8 bit pseudo-color display
under X windows.
default: 60
’int_zooming’: Determines whether the zooming of images is done with integer arithmetic or with floating point
arithmetic.
default: ’true’
’icon_name’: Name of iconified graphics windows under X-Window. By default the number of the graphics
window is displayed.
default: ’default’
’num_graphic_2’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 2 bitplanes (4 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 2
default: 2
’num_graphic_4’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 4 bitplanes (16 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 14
default: 5
’num_graphic_6’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 6 bitplanes (64 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 62
default: 10
’num_graphic_8’: Number of the graphic colors to be reserved by HALCON under X Windows (concerning the
operators disp_region etc.) on a machine with 8 bitplanes (256 colors).
Attention: This value may only be changed before the first window has been opened on the machine.
Value: 0 - 64
default: 20
’graphic_colors’ HALCON reserves the first num_graphic_x colors from this list of color names as graphic
colors. As a default HALCON uses the same list which is also returned by query_all_colors.
However, the list can be changed individually: in this case a tuple of color names is passed as value. It is
recommended that such a tuple always includes the colors ’black’ and ’white’, and optionally also ’red’,
’green’, and ’blue’. If ’default’ is set as Value, HALCON returns to the initial setting. Note: On graylevel
machines not the first x colors will be reserved, but the first x shades of gray from the list.
Attention: This value may only be changed before the first window has been opened on the machine.
Value: tuple of X Windows color names
default: see also query_all_colors
’current_runlength_number’: Regions are stored internally in a certain runlength code. This parameter
determines the maximum number of chords which may be used for representing a region. Please note that
some procedures raise this number on their own if necessary.
The value can be enlarged as well as reduced.
Value: maximum number of chords
default: 50000
’clip_region’: Determines whether the regions of iconic objects of the HALCON database will be clipped to
the currently used image size or not. This is the case for example in procedures like gen_circle,
gen_rectangle1 or dilation1.
See also: reset_obj_db
Value: ’true’ or ’false’
default: ’true’
’do_low_error’ Determines whether HALCON should print low level errors or not.
Value: ’true’ or ’false’
default: ’false’
’reentrant’ Determines whether HALCON must be reentrant for being used within a parallel programming en-
vironment (e.g. a multithreaded application). This parameter is only of importance for Parallel HALCON,
which can process several operators concurrently. Thus, the parameter is ignored by the sequentially working
HALCON-Version. If it is set to ’true’, Parallel HALCON internally uses synchronization mechanisms to
protect shared data objects from concurrent accesses. Though this is inevitable with any effectively paral-
lel working application, it may cause undesired overhead, if used within an application which works purely
sequentially. The latter case can be signalled by setting ’reentrant’ to ’false’. This switches off all internal
synchronization mechanisms and thus reduces overhead. Of course, Parallel HALCON then is no longer
thread-safe, which causes another side-effect: Parallel HALCON will then no longer use the internal paral-
lelization of operators, because this needs reentrancy. Setting ’reentrant’ to ’true’ resets Parallel HALCON
to its default state, i.e. it is reentrant (and thread-safe) and it uses the automatic parallelization to speed up
the processing of operators on multiprocessor machines.
Value: ’true’ or ’false’
default: Parallel HALCON: ’true’, otherwise: ’false’
’parallelize_operators’ Determines whether Parallel HALCON uses an automatic parallelization to speed up the
processing of operators on multiprocessor machines. This feature can be switched off by setting ’paral-
lelize_operators’ to ’false’. Even then, Parallel HALCON will remain reentrant (and thread-safe), unless
the parameter ’reentrant’ is changed via set_system accordingly. Changing ’parallelize_operators’ can
be helpful, for example, if HALCON operators are called by a multithreaded application that also does the
scheduling and load-balancing of operators and data by itself. Then, it may be undesired that HALCON
performs additional parallelization steps, which may disturb the application’s scheduling and load-balancing
concepts. For a more detailed control of automatic parallelization single methods of data parallelization
can be switched. ’split_tuple’ enables the tuple parallelization method, ’split_channel’ the parallelization on
image channels, and ’split_domain’ the parallelization on the image domain. A preceding ’~’ disables the
respective method. The method strings can also be passed within a control tuple to switch on or off methods
of automatic data parallelization at once. E.g., [’split_tuple’,’split_channel’,’split_domain’] is equivalent to
’true’.
The parameter ’parallelize_operators’ is only supported by Parallel HALCON and is thus ignored by the
sequentially working HALCON version.
Value: ’true’, ’false’, ’split_tuple’, ’split_channel’, ’split_domain’, ’~split_tuple’, ’~split_channel’,
’~split_domain’
default: Parallel HALCON: ’true’, else: ’false’
’thread_num’ Sets the number of threads used by the automatic parallelization of Parallel HALCON. The number
includes the main thread and is restricted to the number of processors for efficiency reasons. Decreasing the
number of threads is helpful if processors are occupied by user worker threads besides the threads of the
automatic parallelization. With this, the number of processing threads can be adapted to the number of
processors for best efficiency. Standard HALCON ignores this parameter value.
Value: 1 <= Value <= processor_num
default: Parallel HALCON: processor_num, else: 1
’thread_pool’ Denotes whether Parallel HALCON always creates new threads for automatic parallelization
(’false’) or uses an existing pool of threads (’true’). Using a pool is more efficient for automatic
parallelization. When switching off automatic parallelization permanently, deactivating the pool can save
resources of the operating system. Standard HALCON ignores this parameter value.
Value: ’true’, ’false’
default: Parallel HALCON: ’true’, else: ’false’
’clock_mode’ Determines the mode of the measurement of time intervals with count_seconds. For
Value=’processor_time’, the time the running HALCON process occupies the CPU is measured. This kind
of time measurement is independent of the CPU load caused by other processes, but it features a lower
resolution on most systems and is therefore more inaccurate for smaller time intervals.
For Value=’elapsed_time’, the actual elapsed system time is measured. It includes the waiting time of the
current process as well as the CPU time of other processes. Therefore, to get a reliable measurement, make
sure that no other process causes any CPU load.
Value=’performance_counter’ measures the actual system time by using a performance counter,
which results in a higher resolution. If the system does not support any performance counter,
Value=’processor_time’ is used.
Value: ’processor_time’, ’elapsed_time’, ’performance_counter’
default: ’performance_counter’
’max_connection’ Determines the maximum number of regions returned by connection. For Value=0, all
regions are returned.
’extern_alloc_funct’ Pointer to external function for memory allocation of result images. default: 0
’extern_free_funct’ Pointer to external function for memory deallocation of result images. default: 0
’image_cache_capacity’ Upper limit in bytes of the internal image memory cache. To speed up allocation of
new images HALCON does not free image memory but caches it to reuse it. Caching of freed images
is done as long as the upper limit is not reached. This functionality can be switched off by setting
’image_cache_capacity’ to 0.
This parameter is only available in Standard HALCON and ignored in Parallel HALCON.
default: Standard HALCON: 4194304 (4MByte), else: 0
’global_mem_cache’ Cache mode of global memory, i.e., memory that is visible beyond an operator. It specifies
whether unused global memory should be cached (’shared’) or freed (’idle’). Generally, caching speeds up
memory allocation and processing at the cost of memory consumption. Additionally, Parallel HALCON
offers the option to cache global memory for each thread separately (’exclusive’). This mode can also
accelerate processing at the cost of higher memory consumption. Standard HALCON treats the value
’exclusive’ like the value ’shared’.
Value: ’idle’, ’exclusive’, ’shared’
default: ’false’
’temporary_mem_cache’ Flag determining whether unused temporary memory of an operator should be cached
(’true’, default) or freed (’false’). A single-threaded application can be sped up by caching, whereas freeing
reduces the memory consumption of a multithreaded application at the expense of speed.
Value: ’true’ or ’false’
default: ’true’
’alloctmp_max_blocksize’ Maximum size of memory blocks to be allocated within temporary memory
management. (No effect if ’temporary_mem_cache’ == ’false’.)
Value: -1 or >= 0
default: -1
’mmx_enable’ Flag determining whether MMX operations are used to accelerate selected image processing
operators (’true’) or not (’false’). (No effect if ’mmx_supported’ == ’false’; see also operator get_system.)
default: ’true’ if the CPU supports MMX, else ’false’
’language’ Language used for error messages.
Value: ’english’ or ’german’
default: ’english’
Parameter
. SystemParameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Name of the system parameter to be changed.
Default Value : "image_dir"
List of values : SystemParameter ∈ {"alloctmp_max_blocksize", "backing_store",
"border_shape_models", "clip_region", "clock_mode", "current_runlength_number", "default_font",
"do_low_error", "empty_region_result", "extern_alloc_funct", "extern_free_funct", "filename_encoding",
"flush_file", "flush_graphic", "global_mem_cache", "graphic_colors", "help_dir", "icon_name",
"image_cache_capacity", "image_dir", "image_dpi", "init_new_image", "int2_bits", "int_zooming",
"language", "lut_dir", "max_connection", "mmx_enable", "neighborhood", "no_object_result",
14.7 Serial
clear_serial ( Hlong SerialHandle, const char *Channel )
T_clear_serial ( const Htuple SerialHandle, const Htuple Channel )
close_all_serials ( )
T_close_all_serials ( )
Parameter
. SerialHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serial_id ; Hlong
Serial interface handle.
. BaudRate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Speed of the serial interface.
. DataBits (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of data bits of the serial interface.
. FlowControl (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Type of flow control of the serial interface.
. Parity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Parity of the serial interface.
. StopBits (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Number of stop bits of the serial interface.
. TotalTimeOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Total timeout of the serial interface in ms.
. InterCharTimeOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Inter-character timeout of the serial interface in ms.
Result
If the parameters are correct and the parameters of the device could be read, the operator get_serial_param
returns the value H_MSG_TRUE. Otherwise an exception is raised.
Parallelization Information
get_serial_param is reentrant and processed without parallelization.
Possible Predecessors
open_serial
Possible Successors
get_serial_param, read_serial, write_serial
See also
set_serial_param
Module
Foundation
Parallelization Information
open_serial is reentrant and processed without parallelization.
Possible Successors
set_serial_param, read_serial, write_serial, close_serial
See also
set_serial_param, get_serial_param, open_file
Module
Foundation
set_serial_param can be used to set the parameters of a serial device. The parameter BaudRate determines
the input and output speed of the device. It should be noted that not all devices support all possible speeds. The
number of sent and received data bits is set with DataBits. The parameter FlowControl determines if and
what kind of data flow control should be used. In the latter case, a choice between software control (’xon_xoff’) and
hardware control (’cts_rts’, ’dtr_dsr’) can be made. If and what kind of parity check of the transmitted data should
be performed can be determined by Parity. The number of stop bits sent is set with StopBits. Finally, two
timeouts for reading from the serial device can be set. The parameter TotalTimeOut determines the maximum
time which may pass in read_serial until the first character arrives, independent of the actual number of
characters requested. The parameter InterCharTimeOut determines the time which may pass between the
reading of individual characters if multiple characters are requested with read_serial. If one of the timeouts
is set to -1, a read waits an arbitrary amount of time for the arrival of characters. If both timeouts are set to 0,
a read does not wait and returns the available characters, or none. Thus, on Windows systems, a total timeout of
TotalTimeOut + n * InterCharTimeOut results if n characters are to be read. On Unix systems, only one of
the two timeouts can be set; thus, if both timeouts are passed larger than -1, only the total timeout is used. The
unit of both timeouts is milliseconds. It should be noted, however, that the timeout is specified in increments of
one tenth of a second on Unix systems, i.e., the minimum timeout that has any effect is 100. For each parameter,
the current values can be left in effect by passing ’unchanged’.
Parameter
Possible Predecessors
open_serial, get_serial_param
Possible Successors
read_serial, write_serial
See also
get_serial_param
Module
Foundation
14.8 Sockets
close_socket ( Hlong Socket )
T_close_socket ( const Htuple Socket )
Close a socket.
close_socket closes a socket that was previously opened with open_socket_accept,
open_socket_connect, or socket_accept_connect. For a detailed example, see
open_socket_accept.
Parameter
See also
open_socket_accept, open_socket_connect, socket_accept_connect
Module
Foundation
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. DataType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; char *
Data type of next HALCON data.
Parallelization Information
get_next_socket_data_type is reentrant and processed without parallelization.
See also
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
Module
Foundation
Parameter
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
. SocketDescriptor (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong *
Socket descriptor used by the operating system.
Parallelization Information
get_socket_descriptor is reentrant and processed without parallelization.
Possible Predecessors
open_socket_accept, open_socket_connect, socket_accept_connect
See also
set_socket_timeout
Module
Foundation
Parameter
. Port (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Hlong
Port number.
Default Value : 3000
Typical range of values : 1024 ≤ Port ≤ 65535
Minimum Increment : 1
Recommended Increment : 1
. AcceptingSocket (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong *
Socket number.
Example (Syntax: HDevelop)
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
/* Busy wait for an incoming connection */
dev_error_var (Error, 1)
dev_set_check (’~give_error’)
OpenStatus := 5
while (OpenStatus # 2)
socket_accept_connect (AcceptingSocket, ’false’, Socket)
OpenStatus := Error
wait_seconds (0.2)
endwhile
dev_set_check (’give_error’)
/* Connection established */
receive_image (Image, Socket)
threshold (Image, Region, 0, 63)
send_region (Region, Socket)
receive_region (ConnectedRegions, Socket)
area_center (ConnectedRegions, Area, Row, Column)
send_tuple (Socket, Area)
send_tuple (Socket, Row)
send_tuple (Socket, Column)
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’fabrik’)
send_image (Image, Socket)
receive_region (Region, Socket)
connection (Region, ConnectedRegions)
send_region (ConnectedRegions, Socket)
receive_tuple (Socket, Area)
receive_tuple (Socket, Row)
receive_tuple (Socket, Column)
close_socket (Socket)
Parallelization Information
open_socket_accept is reentrant and processed without parallelization.
Possible Successors
socket_accept_connect
See also
open_socket_connect, close_socket, get_socket_timeout, set_socket_timeout,
send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple
Module
Foundation
Parallelization Information
receive_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation
Parallelization Information
receive_tuple is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect, get_socket_timeout,
set_socket_timeout
See also
send_tuple, send_image, receive_image, send_region, receive_region,
get_next_socket_data_type
Module
Foundation
. Image (input_object) . . . . . . image(-array) ; Hobject : byte / direction / cyclic / int1 / int2 / uint2 / int4 /
real / complex / vector_field
Image to be sent.
. Socket (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . socket_id ; Hlong
Socket number.
Parallelization Information
send_image is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_image, send_region, receive_region, send_tuple, receive_tuple,
get_next_socket_data_type
Module
Foundation
See also
receive_tuple, send_image, receive_image, send_region, receive_region,
get_next_socket_data_type
Module
Foundation
/* Process 1 */
dev_set_colored (12)
open_socket_accept (3000, AcceptingSocket)
socket_accept_connect (AcceptingSocket, ’true’, Socket)
receive_image (Image, Socket)
edges_sub_pix (Image, Edges, ’canny’, 1.5, 20, 40)
send_xld (Edges, Socket)
receive_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
gen_parallels_xld (Polygons, Parallels, 10, 30, 0.15, ’true’)
send_xld (Parallels, Socket)
receive_xld (ModParallels, Socket)
receive_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
close_socket (AcceptingSocket)
/* Process 2 */
dev_set_colored (12)
open_socket_connect (’localhost’, 3000, Socket)
read_image (Image, ’mreut’)
send_image (Image, Socket)
receive_xld (Edges, Socket)
gen_polygons_xld (Edges, Polygons, ’ramer’, 2)
send_xld (Polygons, Socket)
split_contours_xld (Polygons, Contours, ’polygon’, 1, 5)
receive_xld (Parallels, Socket)
mod_parallels_xld (Parallels, Image, ModParallels, ExtParallels,
0.4, 160, 220, 10)
send_xld (ModParallels, Socket)
send_xld (ExtParallels, Socket)
stop ()
close_socket (Socket)
Parallelization Information
send_xld is reentrant and processed without parallelization.
Possible Predecessors
open_socket_connect, socket_accept_connect
See also
receive_xld, send_image, receive_image, send_region, receive_region, send_tuple,
receive_tuple, get_next_socket_data_type
Module
Foundation
requests from other HALCON processes. The result of socket_accept_connect is another socket Socket,
which is used for a two-way communication with another HALCON process. After this connection has been
established, data can be exchanged between the two processes by calling the appropriate send or receive operators.
For a detailed example, see open_socket_accept.
Parameter
Tools
15.1 2D-Transformations
T_affine_trans_pixel ( const Htuple HomMat2D, const Htuple Row,
const Htuple Col, Htuple *RowTrans, Htuple *ColTrans )
Hence,
affine_trans_pixel (HomMat2D, Row, Col, RowTrans, ColTrans)
corresponds to the following operator sequence:
affine_trans_point_2d (HomMat2D, Row+0.5, Col+0.5, RowTmp, ColTmp)
RowTrans := RowTmp-0.5
ColTrans := ColTmp-0.5
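The ±0.5 offset can be written out as a self-contained C sketch of the pixel-center convention (an illustration, not the HALCON call itself; the matrix is stored HALCON-style as a 6-tuple, i.e., the first two rows of the 3×3 homogeneous matrix, with row coordinates in the role of x and column coordinates in the role of y):

```c
#include <assert.h>

/* Sketch of affine_trans_pixel: transform the pixel *center* rather
 * than the pixel coordinate itself. m = [a,b,c,d,e,f] are the first
 * two rows of the 3x3 homogeneous matrix. */
void affine_trans_pixel_sketch(const double m[6], double row, double col,
                               double *row_trans, double *col_trans)
{
    double x = row + 0.5, y = col + 0.5;     /* move to the pixel center */
    double xt = m[0] * x + m[1] * y + m[2];  /* affine_trans_point_2d    */
    double yt = m[3] * x + m[4] * y + m[5];
    *row_trans = xt - 0.5;                   /* back to pixel coordinates */
    *col_trans = yt - 0.5;
}
```

With the identity matrix the pixel coordinate is unchanged, while for a pure translation the offsets of ±0.5 cancel and only the translation remains.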
Parameter
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
The transformation matrix can be created using the operators hom_mat2d_identity,
hom_mat2d_rotate, hom_mat2d_translate, etc., or can be the result of operators like
vector_angle_to_rigid.
For example, if HomMat2D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
\[
\begin{pmatrix} Q_x \\ Q_y \\ 1 \end{pmatrix}
= \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} P_x \\ P_y \\ 1 \end{pmatrix}
= \begin{pmatrix} R \cdot \begin{pmatrix} P_x \\ P_y \end{pmatrix} + t \\ 1 \end{pmatrix}
\]
Parameter
The parameter Transformation determines the class of transformations that is used in the bundle adjustment
to transform the image points. This can be used to restrict the allowable transformations. For Transformation
= ’projective’, projective transformations are used (see vector_to_proj_hom_mat2d). For
Transformation = ’affine’, affine transformations are used (see vector_to_hom_mat2d), for
Transformation = ’similarity’, similarity transformations (see vector_to_similarity), and for
Transformation = ’rigid’ rigid transformations (see vector_to_rigid).
The resulting bundle-adjusted transformations are returned as an array of 3 × 3 projective transformation matrices
in MosaicMatrices2D. In addition, the points reconstructed by the bundle adjustment are returned in (Rows,
Cols). The average projection error of the reconstructed points is returned in Error. This can be used to check
whether the optimization has converged to useful values.
Parameter
. NumImages (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of different images that are used for the calibration.
Restriction : NumImages ≥ 2
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Index of the reference image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Array of 3 × 3 projective transformation matrices.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective source images.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective source images.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Row coordinates of corresponding points in the respective destination images.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Column coordinates of corresponding points in the respective destination images.
. NumCorrespondences (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of point correspondences in the respective image pair.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Transformation class to be used.
Default Value : "projective"
List of values : Transformation ∈ {"projective", "affine", "similarity", "rigid"}
. MosaicMatrices2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Array of 3 × 3 projective transformation matrices that determine the position of the images in the mosaic.
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Row coordinates of the points reconstructed by the bundle adjustment.
. Cols (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Column coordinates of the points reconstructed by the bundle adjustment.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Average error per reconstructed point.
Example (Syntax: HDevelop)
* Assume that Images contains the four images of the mosaic in the
* layout given in the above description. Then the following example
* computes the bundle-adjusted transformation matrices.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT,
’ncc’, 10, 0, 0, 480, 640, 0, 0.5,
’gold_standard’, 2, 42, HomMat2D,
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
bundle_adjust_mosaic (4, 1, From, To, HomMatrices2D, Rows1, Cols1,
Rows2, Cols2, NumMatches, ’rigid’, MosaicMatrices)
gen_bundle_adjusted_mosaic (Images, MosaicImage, HomMatrices2D,
’default’, ’false’, TransMat2D)
Result
If the parameters are valid, the operator bundle_adjust_mosaic returns the value H_MSG_TRUE. If
necessary, an exception is raised.
Parallelization Information
bundle_adjust_mosaic is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_bundle_adjusted_mosaic
See also
gen_projective_mosaic
Module
Matching
For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:
\[
\mathrm{HomMat2DCompose}
= \begin{pmatrix} R_l & t_l \\ 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} R_r & t_r \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} R_l \cdot R_r & R_l \cdot t_r + t_l \\ 0 & 1 \end{pmatrix}
\]
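The composition can be sketched directly on the 6-tuple storage used by HALCON (a self-contained illustration, not the HALCON call itself):

```c
#include <assert.h>

/* Sketch of hom_mat2d_compose on 6-tuples [a,b,c,d,e,f] (the first two
 * rows of the 3x3 homogeneous matrix). Applying 'out' is equivalent to
 * applying 'r' first and 'l' second. Not the HALCON call itself. */
void hom_mat2d_compose_sketch(const double l[6], const double r[6],
                              double out[6])
{
    out[0] = l[0] * r[0] + l[1] * r[3];         /* Rl * Rr      */
    out[1] = l[0] * r[1] + l[1] * r[4];
    out[3] = l[3] * r[0] + l[4] * r[3];
    out[4] = l[3] * r[1] + l[4] * r[4];
    out[2] = l[0] * r[2] + l[1] * r[5] + l[2];  /* Rl * tr + tl */
    out[5] = l[3] * r[2] + l[4] * r[5] + l[5];
}
```

Composing two translations, for instance, simply adds their translation vectors.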
Parameter
. HomMat2DLeft (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Left input transformation matrix.
. HomMat2DRight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Right input transformation matrix.
. HomMat2DCompose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_compose returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat2d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_compose, hom_mat2d_translate, hom_mat2d_translate_local,
hom_mat2d_scale, hom_mat2d_scale_local, hom_mat2d_rotate,
hom_mat2d_rotate_local, hom_mat2d_slant, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate, hom_mat2d_translate_local, hom_mat2d_scale,
hom_mat2d_scale_local, hom_mat2d_rotate, hom_mat2d_rotate_local,
hom_mat2d_slant, hom_mat2d_slant_local
Module
Foundation
\[
\mathrm{HomMat2DIdentity} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat2DIdentity is stored as the
tuple [1,0,0,0,1,0].
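The relation between the 6-tuple and the full 3×3 matrix can be captured by a small accessor (an illustration, not part of HALCON):

```c
#include <assert.h>

/* Illustrative accessor (not part of HALCON): element (i,j), 0-based,
 * of the full 3x3 matrix behind a 6-tuple t = [a,b,c,d,e,f]. */
double hom_mat2d_at(const double t[6], int i, int j)
{
    if (i < 2)
        return t[3 * i + j];          /* stored row by row        */
    return (j == 2) ? 1.0 : 0.0;      /* implicit last row (0 0 1) */
}
```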
Parameter
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathrm{HomMat2DRotate}
= \begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} R & 0 \\ 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_rotate_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DRotate.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
\[
\mathrm{HomMat2DScale}
= \begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_scale_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sy ≠ 0
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Fixed point of the transformation (y coordinate).
Default Value : 0
Suggested values : Py ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
hom_mat2d_scale returns H_MSG_TRUE if both scale factors are not 0. If necessary, an exception is raised.
Parallelization Information
hom_mat2d_scale is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate,
hom_mat2d_slant
Possible Successors
hom_mat2d_translate, hom_mat2d_scale, hom_mat2d_rotate, hom_mat2d_slant
See also
hom_mat2d_scale_local
Module
Foundation
scaling matrix S. In contrast to hom_mat2d_scale, it is performed relative to the local coordinate system,
i.e., the coordinate system described by HomMat2D; this corresponds to the following chain of transformation
matrices:
\[
\mathrm{HomMat2DScale} = \mathrm{HomMat2D} \cdot
\begin{pmatrix} S & 0 \\ 0 & 1 \end{pmatrix}, \qquad
S = \begin{pmatrix} S_x & 0 \\ 0 & S_y \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DScale.
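The post-multiplication can be sketched directly on the 6-tuple storage (an illustration, not the HALCON call itself):

```c
#include <assert.h>

/* Sketch of hom_mat2d_scale_local: out = m * diag(sx, sy, 1) for a
 * 6-tuple m = [a,b,c,d,e,f]. Only the first two columns are scaled;
 * the translation column stays unchanged, which is exactly why the
 * local origin remains fixed. Not the HALCON call itself. */
void scale_local_sketch(const double m[6], double sx, double sy,
                        double out[6])
{
    out[0] = m[0] * sx;  out[1] = m[1] * sy;  out[2] = m[2];
    out[3] = m[3] * sx;  out[4] = m[4] * sy;  out[5] = m[5];
}
```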
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 16}
Restriction : Sy ≠ 0
. HomMat2DScale (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
hom_mat2d_scale_local returns H_MSG_TRUE if both scale factors are not 0. If necessary, an exception
is raised.
Parallelization Information
hom_mat2d_scale_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_scale
Module
Foundation
\[
\mathrm{Axis} = \text{'x'}: \quad \mathrm{HomMat2DSlant}
= \begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\]
\[
\mathrm{Axis} = \text{'y'}: \quad \mathrm{HomMat2DSlant}
= \begin{pmatrix} 1 & -\sin(\mathrm{Theta}) & 0 \\ 0 & \cos(\mathrm{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\]
The point (Px,Py) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat2DSlant. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the slant is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations for Axis = ’x’:
\[
\mathrm{HomMat2DSlant}
= \begin{pmatrix} 1 & 0 & +P_x \\ 0 & 1 & +P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot
  \begin{pmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \mathrm{HomMat2D}
\]
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_slant_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
\[
\mathrm{Axis} = \text{'x'}: \quad \mathrm{HomMat2DSlant} = \mathrm{HomMat2D} \cdot
\begin{pmatrix} \cos(\mathrm{Theta}) & 0 & 0 \\ \sin(\mathrm{Theta}) & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
\[
\mathrm{Axis} = \text{'y'}: \quad \mathrm{HomMat2DSlant} = \mathrm{HomMat2D} \cdot
\begin{pmatrix} 1 & -\sin(\mathrm{Theta}) & 0 \\ 0 & \cos(\mathrm{Theta}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat2DSlant.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
ra rb tc
rd re tf
0 0 1
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Theta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Slant angle.
Default Value : 0.78
Suggested values : Theta ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Theta ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Coordinate axis that is slanted.
Default Value : "x"
List of values : Axis ∈ {"x", "y"}
. HomMat2DSlant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_slant_local returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
hom_mat2d_slant_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_slant
Module
Foundation
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale (HomMat2DIdentity, Sx, Sy, 0, 0, HomMat2DScale)
hom_mat2d_slant (HomMat2DScale, Theta, ’y’, 0, 0, HomMat2DSlant)
hom_mat2d_rotate (HomMat2DSlant, Phi, 0, 0, HomMat2DRotate)
hom_mat2d_translate (HomMat2DRotate, Tx, Ty, HomMat2D)
Parameter
To perform the transformation in the local coordinate system, i.e., the one described by HomMat2D, use
hom_mat2d_translate_local.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
[ ra rb tc ]
[ rd re tf ]
[ 0  0  1  ]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
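Since the last row is implicit, a full 3×3 matrix can be recovered from the stored 6-element tuple. A minimal C sketch of this storage convention (illustrative helper, not part of the HALCON API):

```c
/* Expand the 6-element row-major tuple [ra, rb, tc, rd, re, tf]
 * (the stored form of an affine homogeneous matrix) into a full
 * 3x3 matrix by appending the implicit last row [0, 0, 1]. */
void tuple_to_full_3x3(const double t[6], double m[3][3])
{
    m[0][0] = t[0]; m[0][1] = t[1]; m[0][2] = t[2];
    m[1][0] = t[3]; m[1][1] = t[4]; m[1][2] = t[5];
    m[2][0] = 0.0;  m[2][1] = 0.0;  m[2][2] = 1.0;
}
```

A full 3×3 tuple with a non-trivial last row would instead describe a projective transformation.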
                               [ 1  0  Tx ]
HomMat2DTranslate = HomMat2D · [ 0  1  Ty ] ,   t = (Tx, Ty)^T
                               [ 0  0  1  ]
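This right multiplication, i.e., translating in the local coordinate system described by the input matrix, can be sketched in plain C (illustration of the math only, not the HALCON API; a full 3×3 array is used instead of the stored tuple):

```c
/* Right-multiply a 3x3 homogeneous matrix m by a translation by (tx, ty):
 * out = m * T with T = [[1,0,tx],[0,1,ty],[0,0,1]]. Only the last column
 * changes: out[i][2] = m[i][0]*tx + m[i][1]*ty + m[i][2]. */
void translate_local(const double m[3][3], double tx, double ty,
                     double out[3][3])
{
    for (int i = 0; i < 3; ++i) {
        out[i][0] = m[i][0];
        out[i][1] = m[i][1];
        out[i][2] = m[i][0] * tx + m[i][1] * ty + m[i][2];
    }
}
```

For the identity matrix this reduces to an ordinary (global) translation; for a rotated input matrix, the translation direction is rotated along with it.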
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is usually not stored because it
is identical for all homogeneous matrices that describe an affine transformation. For example, the homogeneous
matrix
[ ra rb tc ]
[ rd re tf ]
[ 0  0  1  ]
is stored as the tuple [ra, rb, tc, rd, re, tf]. However, it is also possible to process full 3×3 matrices, which represent
a projective 2D transformation.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat2DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat2d_translate_local returns H_MSG_TRUE. If neces-
sary, an exception is raised.
Parallelization Information
hom_mat2d_translate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat2d_identity, hom_mat2d_translate_local, hom_mat2d_scale_local,
hom_mat2d_rotate_local, hom_mat2d_slant_local
Possible Successors
hom_mat2d_translate_local, hom_mat2d_scale_local, hom_mat2d_rotate_local,
hom_mat2d_slant_local
See also
hom_mat2d_translate
Module
Foundation
    [ f  0  c ]   [ 1  0  0  −c ]   [ h11 h12 h13 h14 ]
Q = [ 0  f  r ] · [ 0  1  0  −r ] · [ h21 h22 h23 h24 ]
    [ 0  0  1 ]   [ 0  0  1   0 ]   [ h31 h32 h33 h34 ]
                                    [  0   0   0   1  ]
Since the image of a plane containing the points (x, y, f, 1)^T is to be calculated, the last two columns of Q can
be joined:
    [ r11 r12 r13 ]   [ q11 q12 f·q13+q14 ]       [ 1 0 0 ]
R = [ r21 r22 r23 ] = [ q21 q22 f·q23+q24 ] = Q · [ 0 1 0 ]
    [ r31 r32 r33 ]   [ q31 q32 f·q33+q34 ]       [ 0 0 f ]
                                                  [ 0 0 1 ]
Finally, the columns and rows of R are swapped in a way that the first row of P contains the transformation of the
row coordinates and the second row contains the transformation of the column coordinates so that P can be used
directly in projective_trans_image:
    [ 0 1 0 ]       [ 0 1 0 ]
P = [ 1 0 0 ] · R · [ 1 0 0 ]
    [ 0 0 1 ]       [ 0 0 1 ]
If fewer than 4 pairs of points (Px, Py, Pw), (Qx, Qy, Qw) are given, no unique solution exists; if exactly 4
pairs are supplied, the matrix HomMat2D transforms them in exactly the desired way; and if more than
4 point pairs are given, hom_vector_to_proj_hom_mat2d seeks to minimize the transformation error. To
achieve such a minimization, two different algorithms are available. The algorithm to use can be chosen using the
parameter Method. For conventional geometric problems, Method=’normalized_dlt’ usually yields better results.
However, if one of the coordinates Qw or Pw equals 0, Method=’dlt’ must be chosen.
In contrast to vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d uses homogeneous
coordinates for the points, and hence points at infinity (Pw = 0 or Qw = 0) can be used to determine the transforma-
tion. If finite points are used, typically Pw and Qw are set to 1. In this case, vector_to_proj_hom_mat2d can
also be used. vector_to_proj_hom_mat2d has the advantage that one additional optimization method can
be used and that the covariances of the points can be taken into account. If the correspondence between the points
has not been determined, proj_match_points_ransac should be used to determine the correspondence as
well as the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Input points 1 (w coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (x coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (y coordinate).
. Qw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Input points 2 (w coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm.
Default Value : "normalized_dlt"
List of values : Method ∈ {"normalized_dlt", "dlt"}
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
Parallelization Information
hom_vector_to_proj_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
vector_to_proj_hom_mat2d, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Calibration
Compute a projective transformation matrix between two images by finding correspondences between points.
Given a set of coordinates of characteristic points (Cols1, Rows1) and (Cols2, Rows2) in both input images
Image1 and Image2, proj_match_points_ransac automatically determines corresponding points and
the homogeneous projective transformation matrix HomMat2D that best transforms the corresponding points
from the different images into each other. The characteristic points can, for example, be extracted with
points_foerstner or points_harris.
The transformation is determined in two steps: First, gray value correlations of mask windows around the input
points in the first and the second image are determined and an initial matching between them is generated using
the similarity of the windows in both images.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used; ’sad’ means
the sum of absolute differences; and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A matching found in this way is only accepted if the value
of the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the algorithm’s performance, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the transformation contains a rotation, i.e., if the first image is rotated with respect to the second image, the
parameter Rotation may contain an estimate for the rotation angle or an angle interval in radians. A good guess
will increase the quality of the gray value matching. If the actual rotation differs too much from the specified
estimate, the matching will typically fail. The larger the given interval, the slower the operator is, since the entire
algorithm is run for all relevant angles within the interval.
Once the initial matching is complete, a randomized search algorithm (RANSAC) is used to determine the transfor-
mation matrix HomMat2D. It tries to find the matrix that is consistent with a maximum number of correspondences.
For a point to be accepted, its distance from the coordinates predicted by the transformation must not exceed the
threshold DistanceThreshold.
Once a choice has been made, the matrix is further optimized using all consistent points. For this optimization, the
EstimationMethod can be chosen to either be the slow but mathematically optimal ’gold_standard’ method
or the faster ’normalized_dlt’. Here, the algorithms of vector_to_proj_hom_mat2d are used.
Point pairs that still violate the consistency condition for the final transformation are dropped, and the matched
points are returned as control values: Points1 contains the indices of the matched input points from the first
image, and Points2 contains the indices of the corresponding points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If RandSeed is set to a positive number, the operator yields the same result on every
call with the same parameters because the internally used random number generator is initialized with the seed
value. If RandSeed = 0, the random number generator is initialized with the current time. Hence, the results
may not be reproducible in this case.
[ RTrans ]              [ Row ]
[ CTrans ] = HomMat2D · [ Col ]
[ WTrans ]              [ 1   ]

[ RowTrans ]   [ RTrans / WTrans ]
[ ColTrans ] = [ CTrans / WTrans ]
To transform the homogeneous coordinates to Euclidean coordinates, they have to be divided by Qw:
[ Ex ]   [ Qx / Qw ]
[ Ey ] = [ Qy / Qw ]
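Both steps together amount to a matrix-vector product followed by division by the w component. A minimal C sketch (illustration of the math only, not the HALCON API):

```c
/* Apply a full 3x3 projective matrix h to a homogeneous point
 * (px, py, pw) and convert the result to Euclidean coordinates
 * by dividing by the resulting w component. */
void projective_trans_point(const double h[3][3],
                            double px, double py, double pw,
                            double *ex, double *ey)
{
    double qx = h[0][0] * px + h[0][1] * py + h[0][2] * pw;
    double qy = h[1][0] * px + h[1][1] * py + h[1][2] * pw;
    double qw = h[2][0] * px + h[2][1] * py + h[2][2] * pw;
    *ex = qx / qw;   /* valid only for finite points (qw != 0) */
    *ey = qy / qw;
}
```

For points at infinity (qw = 0), the division must be skipped and the homogeneous result used directly.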
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Parameter
. HomMat2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double
Homogeneous projective transformation matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (y coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Input point (w coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (y coordinate).
. Qw (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Output point (w coordinate).
Parallelization Information
projective_trans_point_2d is reentrant and processed without parallelization.
Possible Predecessors
vector_to_proj_hom_mat2d, hom_vector_to_proj_hom_mat2d,
proj_match_points_ransac, hom_mat3d_project
See also
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_pixel
Module
Foundation
The coordinates of the original point are passed in (Row1,Column1), while the corresponding angle is passed
in Angle1. The coordinates of the transformed point are passed in (Row2,Column2), while the corresponding
angle is passed in Angle2. The following equation describes the transformation of the point using homogeneous
vectors:
[ Row2    ]              [ Row1    ]
[ Column2 ] = HomMat2D · [ Column1 ]
[ 1       ]              [ 1       ]
In particular, the operator vector_angle_to_rigid is useful to construct a rigid affine transformation from
the results of the matching operators (e.g., find_shape_model or best_match_rot_mg), which trans-
forms a reference image to the current image or (if the parameters are passed in reverse order) from the current
image to the reference image.
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Row coordinate of the original point.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Column coordinate of the original point.
. Angle1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Angle of the original point.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; Htuple . double / Hlong
Row coordinate of the transformed point.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; Htuple . double / Hlong
Column coordinate of the transformed point.
. Angle2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Angle of the transformed point.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Parallelization Information
vector_angle_to_rigid is reentrant and processed without parallelization.
Possible Predecessors
best_match_rot_mg, best_match_rot
Possible Successors
hom_mat2d_invert, affine_trans_image, affine_trans_region,
affine_trans_contour_xld, affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation
If the displacement vector field has been computed from the original image I_orig and the second image I_res, the
internally stored transformation matrix (see affine_trans_image) contains a map that describes how to
transform the first image I_orig to the second image I_res.
Parameter
. VectorField (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : vector_field
Input image.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Parallelization Information
vector_field_to_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
optical_flow_mg
Possible Successors
affine_trans_image
Alternatives
vector_to_hom_mat2d
Module
Foundation
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the transformed points.
If fewer than 4 pairs of points (Px,Py), (Qx,Qy) are given, no unique solution exists; if exactly 4 pairs
are supplied, the matrix HomMat2D transforms them in exactly the desired way; and if more than 4
point pairs are given, vector_to_proj_hom_mat2d seeks to minimize the transformation error. To achieve
such a minimization, several different algorithms are available. The algorithm to use can be chosen using
the parameter Method. Method=’dlt’ uses a fast and simple, but also rather inaccurate, error estimation
algorithm, while Method=’normalized_dlt’ offers a good compromise between speed and accuracy. Finally,
Method=’gold_standard’ performs a mathematically optimal but slower optimization.
If ’gold_standard’ is used and the input points have been obtained from an operator like points_foerstner,
which provides for each point a covariance matrix that specifies its accuracy, this information can be
taken into account by using the input parameters CovYY1, CovXX1, CovXY1 for the points in the first image and
CovYY2, CovXX2, CovXY2 for the points in the second image. The covariances are symmetric 2 × 2 matrices.
CovXX1/CovXX2 and CovYY1/CovYY2 are lists of the diagonal entries, while CovXY1/CovXY2 contain the off-
diagonal entry, which appears twice in a symmetric matrix. If a Method other than ’gold_standard’ is used or
the covariances are unknown, the covariance parameters can be left empty.
In contrast to hom_vector_to_proj_hom_mat2d, points at infinity cannot be used to
determine the transformation in vector_to_proj_hom_mat2d. If this is necessary,
hom_vector_to_proj_hom_mat2d must be used. If the correspondence between the points has not
been determined, proj_match_points_ransac should be used to determine the correspondence as well as
the transformation.
If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px
and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In
particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices
quite naturally corresponds to the usual (row,column) order for coordinates in the image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double / Hlong
Input points in image 1 (row coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double / Hlong
Input points in image 1 (column coordinate).
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Input points in image 2 (row coordinate).
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Input points in image 2 (column coordinate).
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Estimation algorithm.
Default Value : "normalized_dlt"
List of values : Method ∈ {"normalized_dlt", "gold_standard", "dlt"}
. CovXX1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate variance of the points in image 1.
Default Value : []
. CovYY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate variance of the points in image 1.
Default Value : []
. CovXY1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Covariance of the points in image 1.
Default Value : []
. CovXX2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate variance of the points in image 2.
Default Value : []
. CovYY2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate variance of the points in image 2.
Default Value : []
. CovXY2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Covariance of the points in image 2.
Default Value : []
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Homogeneous projective transformation matrix.
. Covariance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the projective transformation matrix.
Parallelization Information
vector_to_proj_hom_mat2d is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac, points_foerstner, points_harris
Possible Successors
projective_trans_image, projective_trans_image_size, projective_trans_region,
projective_trans_contour_xld, projective_trans_point_2d,
projective_trans_pixel
Alternatives
hom_vector_to_proj_hom_mat2d, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
Calibration
T_vector_to_rigid ( const Htuple Px, const Htuple Py, const Htuple Qx,
const Htuple Qy, Htuple *HomMat2D )
The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. The transformation is always overdetermined. Therefore, the returned
transformation is the transformation that minimizes the distances between the original points (Px,Py) and the
transformed points (Qx,Qy), as described in the following equation (points as homogeneous vectors):
Σ_i ‖ (Qx[i], Qy[i], 1)^T − HomMat2D · (Px[i], Py[i], 1)^T ‖² = minimum
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
See also
vector_field_to_hom_mat2d
Module
Foundation
The point correspondences are passed in the tuples (Px, Py) and (Qx,Qy), where corresponding points must be
at the same index positions in the tuples. If more than two point correspondences are passed, the transformation
is overdetermined. In this case, the returned transformation is the transformation that minimizes the distances
between the original points (Px,Py) and the transformed points (Qx,Qy), as described in the following equation
(points as homogeneous vectors):
Σ_i ‖ (Qx[i], Qy[i], 1)^T − HomMat2D · (Px[i], Py[i], 1)^T ‖² = minimum
HomMat2D can be used directly with operators that transform data using affine transformations, e.g.,
affine_trans_image.
Attention
It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coor-
dinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or
any other data that has been extracted from images, the row coordinates of the transformation must be passed in
the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of
passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain
a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular
that rotations are performed in the correct mathematical direction.
Parameter
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the original points.
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the original points.
. Qx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
X coordinates of the transformed points.
. Qy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Y coordinates of the transformed points.
. HomMat2D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Output transformation matrix.
Parallelization Information
vector_to_similarity is reentrant and processed without parallelization.
Possible Successors
affine_trans_image, affine_trans_region, affine_trans_contour_xld,
affine_trans_polygon_xld, affine_trans_point_2d
Alternatives
vector_to_hom_mat2d, vector_to_rigid
See also
vector_field_to_hom_mat2d
Module
Foundation
15.2 3D-Transformations
T_affine_trans_point_3d ( const Htuple HomMat3D, const Htuple Px,
const Htuple Py, const Htuple Pz, Htuple *Qx, Htuple *Qy, Htuple *Qz )
The transformation matrix can be created using the operators hom_mat3d_identity, hom_mat3d_scale,
hom_mat3d_rotate, hom_mat3d_translate, etc., or be the result of pose_to_hom_mat3d.
For example, if HomMat3D corresponds to a rigid transformation, i.e., if it consists of a rotation and a translation,
the points are transformed as follows:
[ Qx ]   [         ]   [ Px ]
[ Qy ] = [  R   t  ] · [ Py ] ,   i.e.   (Qx, Qy, Qz)^T = R · (Px, Py, Pz)^T + t
[ Qz ]   [ 0 0 0 1 ]   [ Pz ]
[ 1  ]                 [ 1  ]
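Assuming the same row-by-row tuple storage as in the 2D case (here 12 elements [R | t], with the implicit last row [0, 0, 0, 1]), the transformation can be sketched in plain C (illustration of the math, not the HALCON API):

```c
/* Apply a homogeneous 3D transformation, stored as the 12-element
 * row-major tuple [R | t], to the point (px, py, pz). */
void affine_trans_point_3d_sketch(const double m[12],
                                  double px, double py, double pz,
                                  double *qx, double *qy, double *qz)
{
    *qx = m[0] * px + m[1] * py + m[2]  * pz + m[3];
    *qy = m[4] * px + m[5] * py + m[6]  * pz + m[7];
    *qz = m[8] * px + m[9] * py + m[10] * pz + m[11];
}
```

For a rigid transformation, R is a rotation matrix and the last column of the tuple is the translation vector t.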
Result
If the parameters are valid, the operator affine_trans_point_3d returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
affine_trans_point_3d is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Module
Foundation
Result
convert_pose_type returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
convert_pose_type is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
write_pose
See also
create_pose, get_pose_type, write_pose, read_pose
Module
Foundation
Create a 3D pose.
create_pose creates the 3D pose Pose. A pose describes a rigid 3D transformation, i.e., a transformation
consisting of an arbitrary translation and rotation, with 6 parameters: TransX, TransY, and TransZ specify the
translation along the x-, y-, and z-axis, respectively, while RotX, RotY, and RotZ describe the rotation.
3D poses are typically used in two ways: First, to describe the position and orientation of one coordinate system
relative to another (e.g., the pose of a part’s coordinate system relative to the camera coordinate system - in short:
the pose of the part relative to the camera) and secondly, to describe how coordinates can be transformed between
two coordinate systems (e.g., to transform points from part coordinates into camera coordinates).
Please note that you can “read” this chain in two ways: If you start from the right, the rotations are always
performed relative to the global (i.e., fixed or “old”) coordinate system. Thus, Rgba can be read as follows: First
rotate around the z-axis, then around the “old” y-axis, and finally around the “old” x-axis. In contrast, if you read
from the left to the right, the rotations are performed relative to the local (i.e., “new”) coordinate system. Then,
Rgba corresponds to the following: First rotate around the x-axis, then around the “new” y-axis, and finally around
the “new(est)” z-axis.
Reading Rgba from right to left corresponds to the following sequence of operator calls:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, RotZ, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, RotY, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, RotX, ’x’, 0, 0, 0, HomMat3DXYZ)
In contrast, reading from left to right corresponds to the following operator sequence:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate_local (HomMat3DIdent, RotX, ’x’, 0, 0, 0,
HomMat3DRotX)
hom_mat3d_rotate_local (HomMat3DRotX, RotY, ’y’, 0, 0, 0,
HomMat3DRotXY)
hom_mat3d_rotate_local (HomMat3DRotXY, RotZ, ’z’, 0, 0, 0, HomMat3DXYZ)
When passing ’abg’ in OrderOfRotation, the rotation corresponds to the following chain:
If you pass ’rodriguez’ in OrderOfRotation, the rotation parameters RotX, RotY, and RotZ are interpreted
as the x-, y-, and z-component of the so-called Rodriguez rotation vector. The direction of the vector defines the
(arbitrary) axis of rotation. The length of the vector usually defines the rotation angle with positive orientation.
Here, a variation of the Rodriguez vector is used, where the length of the vector defines the tangent of half the
rotation angle:
Rrodriguez = rotation around the axis (RotX, RotY, RotZ)^T by the angle 2 · arctan(sqrt(RotX^2 + RotY^2 + RotZ^2))

Hpose = [ R  t ; 0 0 0 1 ] = [ R(RotX, RotY, RotZ)  (TransX, TransY, TransZ)^T ; 0 0 0 1 ]
      = [ I  (TransX, TransY, TransZ)^T ; 0 0 0 1 ] · [ R(RotX, RotY, RotZ)  0 ; 0 0 0 1 ] = H(t) · H(R)
Transformation of coordinates
The following equation describes how a point can be transformed from coordinate system 1 into coordinate system
2 with a pose, or more exactly, with the corresponding homogeneous transformation matrix 2 H1 (input and output
points as homogeneous vectors, see also affine_trans_point_3d). Note that to transform points from
coordinate system 1 into system 2, you use the transformation matrix that describes the pose of coordinate system
1 relative to system 2.
(p2, 1)^T = 2H1 · (p1, 1)^T = (R(RotX, RotY, RotZ) · p1 + (TransX, TransY, TransZ)^T, 1)^T

HR(p−T) = [ R(RotX, RotY, RotZ)  0 ; 0 0 0 1 ] · [ I  (−TransX, −TransY, −TransZ)^T ; 0 0 0 1 ] = H(R) · H(−t)
If you select ’coordinate_system’ for ViewOfTransform, the sequence of transformations remains constant,
but the rotation angles are negated. Please note that, contrary to its name, this is not equivalent to transforming a
coordinate system!
Hcoordinate_system = [ I  (TransX, TransY, TransZ)^T ; 0 0 0 1 ] · [ R(−RotX, −RotY, −RotZ)  0 ; 0 0 0 1 ]
You can convert poses into other representation types using convert_pose_type and query the type using
get_pose_type.
Parameter
Result
create_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
create_pose is reentrant and processed without parallelization.
Possible Successors
pose_to_hom_mat3d, write_pose, camera_calibration, hand_eye_calibration
Alternatives
read_pose, hom_mat3d_to_pose
See also
hom_mat3d_rotate, hom_mat3d_translate, convert_pose_type, get_pose_type,
hom_mat3d_to_pose, pose_to_hom_mat3d, write_pose, read_pose
Module
Foundation
For example, if the two input matrices correspond to rigid transformations, i.e., to transformations consisting of a
rotation and a translation, the resulting matrix is calculated as follows:
HomMat3DCompose = [ Rl  tl ; 0 0 0 1 ] · [ Rr  tr ; 0 0 0 1 ] = [ Rl · Rr   Rl · tr + tl ; 0 0 0 1 ]
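The composition can be sketched directly on the 12-element tuple layout (plain C, not HALCON code; the implicit last row [0,0,0,1] of the right matrix is handled explicitly):

```c
/* Illustrative plain C (not HALCON code): composing two hom_mat3d
 * 12-element tuples. The implicit last row [0,0,0,1] of the right
 * matrix contributes only to the translation column. */
void hom_mat3d_compose_sketch(const double l[12], const double r[12],
                              double c[12])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j) {
            /* translation column (j == 3) picks up l's translation entry
             * via the right matrix's implicit last row */
            double s = (j == 3) ? l[4 * i + 3] : 0.0;
            for (int k = 0; k < 3; ++k)
                s += l[4 * i + k] * r[4 * k + j];
            c[4 * i + j] = s;
        }
}
```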
Parameter
. HomMat3DLeft (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Left input transformation matrix.
. HomMat3DRight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Right input transformation matrix.
. HomMat3DCompose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_compose returns H_MSG_TRUE. If necessary, an excep-
tion is raised.
Parallelization Information
hom_mat3d_compose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_compose, hom_mat3d_translate, hom_mat3d_translate_local,
hom_mat3d_scale, hom_mat3d_scale_local, hom_mat3d_rotate,
hom_mat3d_rotate_local, pose_to_hom_mat3d
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
See also
affine_trans_point_3d, hom_mat3d_identity, hom_mat3d_rotate,
hom_mat3d_translate, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. Thus, HomMat3DIdentity is stored as the
tuple [1,0,0,0,0,1,0,0,0,0,1,0].
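The tuple layout can be illustrated by expanding a 12-element hom_mat3d tuple back to a full 4×4 row-major matrix (plain C sketch, not HALCON code):

```c
/* Illustrative plain C (not HALCON code): expanding a 12-element
 * hom_mat3d tuple to a full 4x4 row-major matrix by appending the
 * implicit last row [0,0,0,1]. */
void tuple_to_full_4x4(const double t[12], double m[16])
{
    for (int i = 0; i < 12; ++i)
        m[i] = t[i];
    m[12] = 0.0; m[13] = 0.0; m[14] = 0.0; m[15] = 1.0;
}
```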
Parameter
. HomMat3DIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Transformation matrix.
Result
hom_mat3d_identity always returns H_MSG_TRUE.
Parallelization Information
hom_mat3d_identity is reentrant and processed without parallelization.
Possible Successors
hom_mat3d_translate, hom_mat3d_translate_local, hom_mat3d_scale,
hom_mat3d_scale_local, hom_mat3d_rotate, hom_mat3d_rotate_local
Alternatives
pose_to_hom_mat3d
Module
Foundation
Axis = ’y’:

HomMat3DRotate = [ Ry  0 ; 0 0 0 1 ] · HomMat3D,   Ry = [ cos(Phi)  0  sin(Phi) ; 0  1  0 ; −sin(Phi)  0  cos(Phi) ]

Axis = ’z’:

HomMat3DRotate = [ Rz  0 ; 0 0 0 1 ] · HomMat3D,   Rz = [ cos(Phi)  −sin(Phi)  0 ; sin(Phi)  cos(Phi)  0 ; 0  0  1 ]

Axis = [x,y,z]:

HomMat3DRotate = [ Ra  0 ; 0 0 0 1 ] · HomMat3D

Ra = u · u^T + cos(Phi) · (I − u · u^T) + sin(Phi) · S

u = Axis / ‖Axis‖ = (x’, y’, z’)^T,   S = [ 0  −z’  y’ ; z’  0  −x’ ; −y’  x’  0 ]
The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DRotate. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the rotation is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
HomMat3DRotate = [ I  +p ; 0 0 0 1 ] · [ R  0 ; 0 0 0 1 ] · [ I  −p ; 0 0 0 1 ] · HomMat3D,   p = (Px, Py, Pz)^T
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_rotate_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / double / Hlong
Axis to rotate around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}
Axis = ’y’:

HomMat3DRotate = HomMat3D · [ Ry  0 ; 0 0 0 1 ],   Ry = [ cos(Phi)  0  sin(Phi) ; 0  1  0 ; −sin(Phi)  0  cos(Phi) ]

Axis = ’z’:

HomMat3DRotate = HomMat3D · [ Rz  0 ; 0 0 0 1 ],   Rz = [ cos(Phi)  −sin(Phi)  0 ; sin(Phi)  cos(Phi)  0 ; 0  0  1 ]

Axis = [x,y,z]:

HomMat3DRotate = HomMat3D · [ Ra  0 ; 0 0 0 1 ]

Ra = u · u^T + cos(Phi) · (I − u · u^T) + sin(Phi) · S

u = Axis / ‖Axis‖ = (x’, y’, z’)^T,   S = [ 0  −z’  y’ ; z’  0  −x’ ; −y’  x’  0 ]
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DRotate.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; Htuple . double / Hlong
Rotation angle.
Default Value : 0.78
Suggested values : Phi ∈ {0.1, 0.2, 0.3, 0.4, 0.78, 1.57, 3.14}
Typical range of values : 0 ≤ Phi ≤ 6.28318530718
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; Htuple . const char * / double / Hlong
Axis to rotate around.
Default Value : "x"
Suggested values : Axis ∈ {"x", "y", "z"}
. HomMat3DRotate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_rotate_local returns H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
hom_mat3d_rotate_local is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate_local, hom_mat3d_scale_local,
hom_mat3d_rotate_local
Possible Successors
hom_mat3d_translate_local, hom_mat3d_scale_local, hom_mat3d_rotate_local
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_rotate, pose_to_hom_mat3d,
hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation
The point (Px,Py,Pz) is the fixed point of the transformation, i.e., this point remains unchanged when transformed
using HomMat3DScale. To obtain this behavior, first a translation is added to the input transformation matrix
that moves the fixed point onto the origin of the global coordinate system. Then, the scaling is added, and finally
a translation that moves the fixed point back to its original position. This corresponds to the following chain of
transformations:
HomMat3DScale = [ I  +p ; 0 0 0 1 ] · [ S  0 ; 0 0 0 1 ] · [ I  −p ; 0 0 0 1 ] · HomMat3D,   p = (Px, Py, Pz)^T,   S = diag(Sx, Sy, Sz)
To perform the transformation in the local coordinate system, i.e., the one described by HomMat3D, use
hom_mat3d_scale_local.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Sx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the x-axis.
Default Value : 2
Suggested values : Sx ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sx ≠ 0
. Sy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the y-axis.
Default Value : 2
Suggested values : Sy ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sy ≠ 0
. Sz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double / Hlong
Scale factor along the z-axis.
Default Value : 2
Suggested values : Sz ∈ {0.125, 0.25, 0.5, 1, 2, 4, 8, 112}
Restriction : Sz ≠ 0
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong
Fixed point of the transformation (x coordinate).
Default Value : 0
Suggested values : Px ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
The fixed point of the transformation is the origin of the local coordinate system, i.e., this point remains unchanged
when transformed using HomMat3DScale.
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Result
hom_mat3d_to_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
hom_mat3d_to_pose is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_rotate, hom_mat3d_translate, hom_mat3d_invert
Possible Successors
camera_calibration, write_pose, disp_caltab, sim_caltab
See also
create_pose, camera_calibration, disp_caltab, sim_caltab, write_pose, read_pose,
pose_to_hom_mat3d, project_3d_point, get_line_of_sight, hom_mat3d_rotate,
hom_mat3d_translate, hom_mat3d_invert, affine_trans_point_3d
Module
Foundation
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double
Input transformation matrix.
. Tx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x ; Htuple . double / Hlong
Translation along the x-axis.
Default Value : 64
Suggested values : Tx ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Ty (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y ; Htuple . double / Hlong
Translation along the y-axis.
Default Value : 64
Suggested values : Ty ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. Tz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z ; Htuple . double / Hlong
Translation along the z-axis.
Default Value : 64
Suggested values : Tz ∈ {0, 16, 32, 64, 128, 256, 512, 1024}
. HomMat3DTranslate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; Htuple . double *
Output transformation matrix.
Result
If the parameters are valid, the operator hom_mat3d_translate returns H_MSG_TRUE. If necessary, an
exception is raised.
Parallelization Information
hom_mat3d_translate is reentrant and processed without parallelization.
Possible Predecessors
hom_mat3d_identity, hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
Possible Successors
hom_mat3d_translate, hom_mat3d_scale, hom_mat3d_rotate
See also
hom_mat3d_invert, hom_mat3d_identity, hom_mat3d_translate_local,
pose_to_hom_mat3d, hom_mat3d_to_pose, hom_mat3d_compose
Module
Foundation
Attention
Note that homogeneous matrices are stored row-by-row as a tuple; the last row is not stored because it is identical
for all homogeneous matrices that describe an affine transformation. For example, the homogeneous matrix
ra rb rc td
re rf rg th
ri rj rk tl
0 0 0 1
is stored as the tuple [ra, rb, rc, td, re, rf, rg, th, ri, rj, rk, tl].
Parameter
Result
pose_to_hom_mat3d returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
pose_to_hom_mat3d is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, read_pose
Possible Successors
affine_trans_point_3d, hom_mat3d_invert, hom_mat3d_translate,
hom_mat3d_rotate, hom_mat3d_to_pose
See also
create_pose, camera_calibration, write_pose, read_pose, hom_mat3d_to_pose,
project_3d_point, get_line_of_sight, hom_mat3d_rotate, hom_mat3d_translate,
hom_mat3d_invert, affine_trans_point_3d
Module
Foundation
Parameter
. PoseFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the exterior camera parameters.
Default Value : "campose.dat"
List of values : PoseFile ∈ {"campose.dat", "campose.initial", "campose.final"}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
3D pose.
Number of elements : 7
Result
read_pose returns H_MSG_TRUE if all parameter values are correct and the file has been read successfully. If
necessary, an exception is raised.
Parallelization Information
read_pose is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par
Possible Successors
pose_to_hom_mat3d, camera_calibration, disp_caltab, sim_caltab
See also
create_pose, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_pose, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation
A typical application of this operator is the definition of a world coordinate system by placing the standard
calibration plate on the plane of measurements. In this case, the external camera parameters returned by
camera_calibration correspond to a coordinate system that lies above the measurement plane, because
the coordinate system of the calibration plate is located on its surface and the plate has a certain thickness. To
correct the pose, call set_origin_pose with the translation vector (0,0,D), where D is the thickness of the
calibration plate.
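The origin shift described above amounts to a translation expressed in the pose's own coordinate system. On the matrix form this is t_new = t + R·d; a plain-C sketch (illustrative only, not HALCON code, which operates on the 7-element pose representation):

```c
/* Illustrative plain C (not HALCON code): shifting a pose's origin by
 * (dx,dy,dz) expressed in the pose's own coordinate system. On the
 * 12-element hom_mat3d tuple form this is t_new = t + R * d. */
void shift_origin_sketch(const double m[12], double dx, double dy,
                         double dz, double out[12])
{
    for (int i = 0; i < 12; ++i)
        out[i] = m[i];
    out[3]  += m[0] * dx + m[1] * dy + m[2]  * dz;
    out[7]  += m[4] * dx + m[5] * dy + m[6]  * dz;
    out[11] += m[8] * dx + m[9] * dy + m[10] * dz;
}
```

For the calibration-plate case above, d = (0, 0, D) with D the plate thickness.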
Parameter
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Original 3D pose.
Number of elements : 7
. DX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Translation of the origin in x-direction.
Default Value : 0
. DY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Translation of the origin in y-direction.
Default Value : 0
. DZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Translation of the origin in z-direction.
Default Value : 0
. PoseNewOrigin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
New 3D pose after applying the translation.
Number of elements : 7
Result
set_origin_pose returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is
raised.
Parallelization Information
set_origin_pose is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration
Possible Successors
write_pose, pose_to_hom_mat3d, image_points_to_world_plane,
contour_to_world_plane_xld
See also
hom_mat3d_translate_local
Module
Foundation
Parameter
Result
write_pose returns H_MSG_TRUE if all parameter values are correct and the file has been written successfully.
If necessary, an exception is raised.
Parallelization Information
write_pose is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration, hom_mat3d_to_pose
See also
create_pose, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_pose, pose_to_hom_mat3d, hom_mat3d_to_pose
Module
Foundation
15.3 Background-Estimator
close_all_bg_esti ( )
T_close_all_bg_esti ( )
/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset
with fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;
/* etc. */
/* - end of background estimation - */
/* close the dataset: */
close_bg_esti(BgEstiHandle) ;
Result
close_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
close_bg_esti is local and processed completely exclusively without parallelization.
Possible Predecessors
run_bg_esti
See also
create_bg_esti
Module
Foundation
AdaptMode denotes whether the foreground/background decision threshold applied to the grayvalue difference
between the estimated and the actual value is fixed or whether it adapts itself depending on the grayvalue deviation
of the background pixels.
If AdaptMode is set to ’off’, the parameter MinDiff denotes a fixed threshold. The parameters StatNum,
ConfidenceC and TimeC are meaningless in this case.
If AdaptMode is set to ’on’, then MinDiff is interpreted as a base threshold. For each pixel an offset is added
to this threshold depending on the statistical evaluation of the pixel value over time. StatNum holds the number
of data sets (past frames) that are used for computing the grayvalue variance (FIR-Filter). ConfidenceC is used
to determine the confidence interval.
The confidence interval determines the values of the background statistics if background pixels are hidden by
a foreground object and thus are detected as foreground. According to Student’s t-distribution, the confidence
constant is 4.30 (3.25, 2.82, 2.26) for a confidence interval of 99.8% (99.0%, 98.0%, 95.0%). TimeC holds a
time constant for the exponential function that raises the threshold in case of a foreground estimation of the pixel.
That means the threshold is raised in regions where movement is detected in the foreground. That way, larger
changes in illumination are tolerated when the background becomes visible again. The main reason for increasing
this tolerance is that illumination changes cannot be predicted while the background is hidden, and therefore no
adaptation of the estimated background image is possible during that time.
Attention
If GainMode was set to ’frame’, the run-time can be extremely long for large values of Gain1 or Gain2, because
the values for the gains’ table are determined by a simple binary search.
Parameter
Result
create_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
create_bg_esti is local and processed completely exclusively without parallelization.
Possible Successors
run_bg_esti
See also
set_bg_esti_params, close_bg_esti
Module
Foundation
/* read Init-Image:*/
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7.0,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;
/* etc. */
/* change only the gain parameter in dataset: */
get_bg_esti_params(BgEstiHandle,&par1,&par2,&par3,&par4,
&par5,&par6,&par7,&par8,&par9,&par10);
set_bg_esti_params(BgEstiHandle,par1,par2,par3,0.004,
0.08,par6,par7,par8,par9,par10) ;
/* read the next image in sequence: */
read_image(&Image3,"Image_3") ;
/* estimate the Background: */
run_bg_esti(Image3,&Region3,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region3,WindowHandle) ;
/* etc. */
Result
get_bg_esti_params returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
get_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
set_bg_esti_params
Module
Foundation
/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption: */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* get the background image from the active dataset: */
give_bg_esti(&BgImage,BgEstiHandle) ;
/* display the background image: */
disp_image(BgImage,WindowHandle) ;
Result
give_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
give_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti, create_bg_esti, update_bg_esti
See also
run_bg_esti, update_bg_esti, create_bg_esti
Module
Foundation
The background estimation processes only single-channel images. Therefore, the background has to be adapted
separately for every channel.
The background estimation should be used on half- or even quarter-sized images. For this, the input images (and
the initialization image!) have to be reduced using zoom_image_factor. The advantage is a shorter run-time
on the one hand and a low-pass filtering on the other. The filtering eliminates high-frequency noise and results in
a more reliable estimation. As a result, the threshold (see create_bg_esti) can be lowered. The foreground
region returned by run_bg_esti then has to be enlarged again for further processing.
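The recommended image reduction can be sketched as a 2×2 averaging downsample (plain C, illustrative only; in a HALCON program you would call zoom_image_factor instead):

```c
/* Illustrative plain C (not HALCON code): halving the image size with
 * 2x2 averaging, which also acts as the low-pass filter mentioned
 * above. Images are single-channel, row-major byte buffers. */
void downsample2(const unsigned char *in, int w, int h, unsigned char *out)
{
    int ow = w / 2, oh = h / 2;
    for (int y = 0; y < oh; ++y)
        for (int x = 0; x < ow; ++x) {
            int s = in[2 * y * w + 2 * x] + in[2 * y * w + 2 * x + 1]
                  + in[(2 * y + 1) * w + 2 * x]
                  + in[(2 * y + 1) * w + 2 * x + 1];
            out[y * ow + x] = (unsigned char)(s / 4);
        }
}
```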
Attention
The passed image (PresentImage) must have the same type and size as the background image of the current
data set (initialized with create_bg_esti).
Parameter
. PresentImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real
Current image.
. ForegroundRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Region of the detected foreground.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
Example
/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region1,WindowHandle) ;
/* read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* display the foreground region: */
disp_region(Region2,WindowHandle) ;
/* etc. */
Result
run_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
run_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti, update_bg_esti
Possible Successors
run_bg_esti, give_bg_esti, update_bg_esti
See also
set_bg_esti_params, create_bg_esti, update_bg_esti, give_bg_esti
Module
Foundation
/* read Init-Image:*/
read_image(&InitImage,"Init_Image") ;
Result
set_bg_esti_params returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
set_bg_esti_params is reentrant and processed without parallelization.
Possible Predecessors
create_bg_esti
Possible Successors
run_bg_esti
See also
update_bg_esti
Module
Foundation
Parameter
. PresentImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / real
Current image.
. UpDateRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region describing areas to change.
. BgEstiHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bg_estimation ; Hlong
ID of the BgEsti data set.
Example
/* read Init-Image: */
read_image(&InitImage,"Init_Image") ;
/* initialize BgEsti-Dataset with
fixed gains and threshold adaption */
create_bg_esti(InitImage,0.7,0.7,"fixed",0.002,0.02,
"on",7,10,3.25,15.0,&BgEstiHandle) ;
/* read the next image in sequence: */
read_image(&Image1,"Image_1") ;
/* estimate the Background: */
run_bg_esti(Image1,&Region1,BgEstiHandle) ;
/* use the Region and the information of a knowledge base */
/* to calculate the UpDateRegion */
update_bg_esti(Image1,UpdateRegion,BgEstiHandle) ;
/* then read the next image in sequence: */
read_image(&Image2,"Image_2") ;
/* estimate the Background: */
run_bg_esti(Image2,&Region2,BgEstiHandle) ;
/* etc. */
Result
update_bg_esti returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
update_bg_esti is reentrant and processed without parallelization.
Possible Predecessors
run_bg_esti
Possible Successors
run_bg_esti
See also
run_bg_esti, give_bg_esti
Module
Foundation
15.4 Barcode
clear_all_bar_code_models ( )
T_clear_all_bar_code_models ( )
Delete all bar code models and free the allocated memory
The operator clear_all_bar_code_models deletes all bar code models that were created by
create_bar_code_model. All memory used by the models is freed. After the operator call, all bar code
handles are invalid.
Attention
clear_all_bar_code_models exists solely for the purpose of implementing the “reset program” function-
ality in HDevelop. clear_all_bar_code_models must not be used in any application.
Result
The operator clear_all_bar_code_models returns the value H_MSG_TRUE if all bar code models were
freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_all_bar_code_models is processed completely exclusively without parallelization.
Alternatives
clear_bar_code_model
See also
create_bar_code_model, find_bar_code
Module
Bar Code
Parameter
The following bar code symbologies are supported: 2/5 Industrial, 2/5 Interleaved, Codabar, Code 39, Code 93, Code
128, EAN-8, EAN-8 Add-On 2, EAN-8 Add-On 5, EAN-13, EAN-13 Add-On 2, EAN-13 Add-On 5, UPC-A,
UPC-A Add-On 2, UPC-A Add-On 5, UPC-E, UPC-E Add-On 2, UPC-E Add-On 5, PharmaCode, RSS-14, RSS-
14 Truncated, RSS-14 Stacked, RSS-14 Stacked Omnidirectional, RSS Limited, RSS Expanded, RSS Expanded
Stacked.
Note that PharmaCode can be read in both forward and backward direction, each yielding a valid result. Therefore,
both strings are returned in DecodedDataStrings, concatenated into a single string and separated by a comma.
Parameter
Access iconic objects that were created during the search or decoding of bar code symbols.
With the operator get_bar_code_object, iconic objects created during the last call of the operator
find_bar_code can be accessed. Besides the name of the object (ObjectName), the bar code model
(BarCodeHandle) must be passed to get_bar_code_object. In addition, in CandidateHandle an in-
dex to a single decoded symbol or a single symbol candidate must be passed. Alternatively, CandidateHandle
can be set to ’all’ and then all objects of the decoded symbols or symbol candidates are returned.
Setting ObjectName to ’symbol_regions’ will return regions of successfully decoded symbols. When choosing
’all’ as CandidateHandle, the regions of all decoded symbols are retrieved. The order of the returned objects
is the same as in find_bar_code. If there is a total of n decoded symbols, CandidateHandle can be chosen
between 0 and (n-1) to get the region of the respective decoded symbol.
Setting ObjectName to ’candidate_regions’ will return regions of potential bar codes. If there are n
decoded symbols out of a total of m candidates, CandidateHandle can be chosen between 0 and (m-1).
With CandidateHandle between 0 and (n-1) the original segmented region of the respective decoded symbol
is retrieved. With CandidateHandle between n and (m-1) the region of the potential or undecodable symbol
is returned. In addition, CandidateHandle can be set to ’all’ to retrieve all candidate regions at the same time.
Setting ObjectName to ’scanlines_all’ or ’scanlines_valid’ will return XLD contours representing the particular
detected bars in the scanlines applied to the candidate regions. ’scanlines_all’ represents all scanlines that
find_bar_code would place in order to decode a bar code. ’scanlines_valid’ represents only those scanlines
that could be successfully decoded. For single-row bar codes, there will be at least one ’scanlines_valid’ contour if the
symbol was successfully decoded, and none if it was not. For stacked bar codes
(e.g., ’RSS-14 Stacked’ and ’RSS Expanded Stacked’) this rule applies similarly, but on a per-symbol-row basis
rather than per symbol. Note that get_bar_code_object returns all XLD contours merged into a single
array of XLDs, so there is no way to identify the contours corresponding to separate scanlines. Furthermore,
if ’all’ is used as CandidateHandle, the output object contains the XLD contours of all symbols, in which
case the contours of separate symbols cannot be distinguished either. However, the contours
can still be used for visualization purposes.
Parameter
. BarCodeObjects (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; Hobject *
Objects that are created as intermediate results during the detection or evaluation of bar codes.
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; Hlong
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; const char * / Hlong
Indicates the bar code results or candidates, respectively, for which the data is required.
Default Value : "all"
Suggested values : CandidateHandle ∈ {0, 1, 2, "all"}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Name of the iconic object to return.
Default Value : "symbol_regions"
List of values : ObjectName ∈ {"symbol_regions", "candidate_regions", "scanlines_all", "scanlines_valid"}
Result
The operator get_bar_code_object returns the value H_MSG_TRUE if the given parameters are correct
and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_object is reentrant and processed without parallelization.
Possible Predecessors
find_bar_code
See also
get_bar_code_result
Module
Bar Code
Get one or several parameters that describe the bar code model.
The operator get_bar_code_param allows you to query parameters of a bar code model that are relevant
for successfully finding and decoding a respective class of bar codes.
The names of the desired parameters are passed in the generic parameter GenParamNames and the corresponding
values are returned in GenParamValues. All of these parameters can be set and changed at any time with the
operator set_bar_code_param.
The following parameters can be queried – ordered by different categories:
’meas_thresh’: Threshold for the detection of edges in the bar code region.
’max_diff_orient’: Maximal difference in the orientation of edges in a bar code region. The difference in oriented
angles, given in degrees, refers to neighboring pixels.
Further details on the above parameters can be found in the description of the operator set_bar_code_param.
Parameter
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
The operator get_bar_code_result provides access to the alphanumerical results of the find and decode process.
To access a result, the handle of the bar code model (BarCodeHandle) and the index of the resulting
symbol (CandidateHandle) must be passed. CandidateHandle refers to the results in the same order as
returned by the operator find_bar_code. CandidateHandle can take values from 0 to (n-1), where n is
the total number of successfully decoded symbols. Alternatively, CandidateHandle can be set to ’all’ if all
results are desired. The option ’all’ can be chosen only if the return value of a single result is single-valued.
When ResultName is set to ’decoded_strings’ the decoded result is returned as a string in a human readable
format. This decoded string can be returned for a single result, i.e., CandidateHandle is for example 0, or for
all results simultaneously, i.e., CandidateHandle is set to ’all’. Note that the decoded string comprises only
data characters. Start/stop characters are excluded, but can be retrieved via ’decoded_reference’. For codes
with a facultative check character, the settings determine whether the check character is returned or not: when
’check_char’ is set to the default value ’absent’, the check character is treated as a normal data
character in the decoded string; when ’check_char’ is set to ’present’ and the check character is correct, it is omitted from the string.
If the check character is wrong, an empty string is returned.
When choosing ’decoded_reference’ as ResultName the underlying decoded reference data is returned. It com-
prises all original characters of the symbol, i.e., data characters, potential start or stop characters and check charac-
ters if present. For codes taking only numeric data, like, e.g., the EAN/UPC codes, the RSS-14 and RSS Limited
codes, or the 2/5 codes, the decoded reference data takes the same values as the decoded string data including check
characters. For codes with alphanumeric data, like for example code 39 or code 128 the decoded reference data are
the indices of the respective decoding table. For RSS Expanded and RSS Expanded Stacked the reference values
are the ASCII codes of the decoded data, where the special charachter FNC1 appears with value 10. Furthermore,
for all codes from the RSS family the first reference value reprsents a linkage flag with value of 1 if the flag is set
and 0 otherwise. As the decoded reference is a tuple of whole numbers it can only be called for a single result,
meaning that CandidateHandle has to be the handle number of the corresponding decoded symbol.
When ResultName is set to ’composite_strings’ or ’composite_reference’, the decoded string or the reference
data of an RSS Composite component is returned, respectively. For further details see the description of the
parameter ’composite_code’ of set_bar_code_param.
When ResultName is set to ’orientation’, the orientation for the specified result is returned. The ’orientation’ of
a bar code is defined as the angle between its reading direction and the horizontal image axis. The angle is positive
in counterclockwise direction and is given in degrees. It can be in the range of [-180.0 . . . 180.0] degrees. Note
that the reading direction is perpendicular to the bars of the bar code. A single angle is returned when only one
result is specified, e.g., by entering 0 for CandidateHandle. Otherwise, when CandidateHandle is set to
’all’, a tuple containing the angles of all results is returned.
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong
Handle of the bar code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) const char * / Hlong
Indicates the bar code results or candidates, respectively, for which the data is required.
Default Value : "all"
Suggested values : CandidateHandle ∈ {0, 1, 2, "all"}
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; (Htuple .) const char *
Names of the resulting data to return.
Default Value : "decoded_strings"
Suggested values : ResultName ∈ {"decoded_strings", "decoded_reference", "orientation",
"composite_strings", "composite_reference"}
. BarCodeResults (output_control) . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
List with the results.
Result
The operator get_bar_code_result returns the value H_MSG_TRUE if the given parameters are correct
and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_bar_code_result is reentrant and processed without parallelization.
Possible Predecessors
find_bar_code
See also
get_bar_code_object
Module
Bar Code
’element_size_min’: Minimal size of bar code elements, i.e. the minimal width of bars and spaces. For small bar
codes the value should be reduced to 1.5. In the case of huge bar codes the value should be increased, which
results in a shorter execution time and fewer candidates.
Typical values: [1.5 . . . 10.0]
Default: 2.0
’element_size_max’: Maximal size of bar code elements, i.e. the maximal width of bars and spaces. The value of
’element_size_max’ should be adequately low so that two neighboring bar codes are not fused into a single
one. On the other hand, the value should be sufficiently high so that the complete bar code region is found.
Typical values: [4.0 . . . 60.0]
Default: 8.0
’element_height_min’: Minimal bar code height. The default value of this parameter is -1, meaning that the bar
code reader automatically derives a reasonable height from the other parameters. Only for very flat or very
tall bar codes can a manual adjustment of this parameter become necessary. For a bar code with a height
of less than 16 pixels, the respective height should be set by the user; note that the minimal value is 8 pixels.
If the bar code is very tall, i.e., 70 pixels or more, manually setting the respective height can speed up
the subsequent finding and reading operation.
Typical values: [-1, 8 . . . 64]
Default: -1
’orientation’: Expected bar code orientation. A potential (candidate) bar code contains bars with similar ori-
entation. The ’orientation’ and ’orientation_tol’ parameters are used to specify the range [’orientation’-
’orientation_tol’, ’orientation’+’orientation_tol’]. find_bar_code processes a candidate bar code only
when the average orientation of its bars lies in this range. If the bar codes are expected to appear only in
certain orientations in the processed images, one can reduce the orientation range accordingly. This enables
an early identification of false candidates and hence shorter execution times. This adjustment is useful for
images with a lot of texture, which tends to contain fragments that result in false bar code candidates.
The actual orientation angle of a bar code is explained with get_bar_code_result(...,’orientation’,...),
with the only difference that for the early identification of false candidates the reading direction of the bar
codes is ignored, which restricts the relevant orientation values to the range [-90.0 . . . 90.0]. The only
exception to this rule is the bar code symbology PharmaCode, which possesses a forward and a backward
reading direction at the same time: here, ’orientation’ can take values in the range [-180.0 . . . 180.0] and the
decoded result corresponds uniquely to just one reading direction.
Typical values: [-90.0 . . . 90.0]
Default: 0.0
’orientation_tol’: Orientation tolerance. See the explanation of the ’orientation’ parameter. As explained there,
relevant orientation values lie only in the range [-90.0 . . . 90.0], which means that with ’orientation_tol’ =
90 the whole range is spanned. Therefore, valid values for ’orientation_tol’ lie only in the range [0.0
. . . 90.0]. The default value 90.0 means that no restriction on the bar code candidates is performed.
Typical values: [0.0 . . . 90.0]
Default: 90.0
’meas_thresh’: The bar-space-sequence of a bar code is determined with a scanline measuring the position of the
edges. Finding these edges requires a threshold. ’meas_thresh’ defines this threshold which is a relative value
with respect to the dynamic range of the scanline pixels. In the case of disturbances in the bar code region or
a high noise level, the value of ’meas_thresh’ should be increased.
Typical values: [0.05 . . . 0.2]
Default: 0.05
’max_diff_orient’: A potential bar code region contains bars, and hence edges, with a similar orientation. The
value of ’max_diff_orient’ denotes the maximal difference in this orientation between adjacent pixels and is given
in degrees. If a bar code is of bad quality with jagged edges, ’max_diff_orient’ should be set to
larger values. If the bar code is of good quality, ’max_diff_orient’ can be set to smaller values, thus reducing
the number of potential but false bar code candidates.
Typical values: [2 . . . 20]
Default: 10
’check_char’: For bar codes with a facultative check character, this parameter determines whether the check char-
acter is taken into account or not. If the bar code has a check character, ’check_char’ should be set to ’present’
and thus the check character is tested. In that case, a bar code result is returned only if the check sum is cor-
rect. For ’check_char’ set to ’absent’, no check sum is computed and bar code results are returned as long as
they were successfully decoded. Bar codes with a facultative check character are, e.g., Code 39, Codabar, 2/5
Industrial, and 2/5 Interleaved.
Values: [’absent’, ’present’]
Default: ’absent’
’composite_code’: Bar codes of the RSS family can have an additional 2D Composite code component appended
(composite codes are supported only for bar codes of the RSS family). If ’composite_code’ is set to ’CC-A/B’,
the composite component will be found and decoded. By default, ’composite_code’ is set to ’none’ and thus
disabled. If the searched bar code symbol has no attached composite component, just the result of the bar
code itself is returned by find_bar_code.
Values: [’none’, ’CC-A/B’]
Default: ’none’
Parameter
. BarCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . barcode ; (Htuple .) Hlong
Handle of the bar code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that shall be adjusted for finding and decoding bar codes.
Default Value : "element_size_max"
List of values : GenParamNames ∈ {"element_size_min", "element_size_max", "element_height_min",
"orientation", "orientation_tol", "meas_thresh", "max_diff_orient", "check_char", "composite_code"}
. GenParamValues (input_control) . . . . . attribute.name(-array) ; (Htuple .) Hlong / const char * / double
Values of the generic parameters that are adjusted for finding and decoding bar codes.
Default Value : 8
Suggested values : GenParamValues ∈ {0.1, 1.5, 2, 8, 32, 45, "present", "absent", "none", "CC-A/B"}
Result
The operator set_bar_code_param returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
set_bar_code_param is reentrant and processed without parallelization.
Possible Predecessors
create_bar_code_model
Possible Successors
find_bar_code
Module
Bar Code
15.5 Calibration
T_caltab_points ( const Htuple CalTabDescrFile, Htuple *X, Htuple *Y,
Htuple *Z )
Read the mark center points from the calibration plate description file.
caltab_points reads the mark center points from the calibration plate description file CalTabDescrFile
(see gen_caltab) and returns their coordinates in X, Y, and Z. The mark center points are 3D coordinates in
the calibration plate coordinate system and describe the 3D model of the calibration plate. The calibration plate
coordinate system is located in the middle of the surface of the calibration plate; its z-axis points into the calibration
plate, its x-axis to the right, and its y-axis downwards.
The mark center points are typically used as input parameters for the operator camera_calibration. This
operator projects the model points into the image, minimizes the distance between the projected points and the
observed 2D coordinates in the image (see find_marks_and_pose), and from this computes the exact values
for the interior and exterior camera parameters.
Parameter
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinates of the mark center points in the coordinate system of the calibration plate.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinates of the mark center points in the coordinate system of the calibration plate.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Example (Syntax: HDevelop)
* read calibration image
read_image (Image1, 'calib-01')
* find calibration pattern
find_caltab (Image1, Caltab1, 'caltab.descr', 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [0.008, 0.0, 0.000011, 0.000011, 384, 288, 768, 576]
find_marks_and_pose (Image1, Caltab1, 'caltab.descr', StartCamPar,
                     128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
                     StartPose1)
* read 3D positions of calibration marks
caltab_points ('caltab.descr', NX, NY, NZ)
* camera calibration
camera_calibration (NX, NY, NZ, RCoord1, CCoord1, StartCamPar,
                    StartPose1, 'all', CamParam, FinalPose, Errors)
Result
caltab_points returns H_MSG_TRUE if all parameter values are correct and the file CalTabDescrFile
has been read successfully. If necessary, an exception is raised.
Parallelization Information
caltab_points is reentrant and processed without parallelization.
Possible Successors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
project_3d_point, get_line_of_sight, gen_caltab
Module
Foundation
Result
If the parameters are valid, the operator cam_mat_to_cam_par returns the value H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
cam_mat_to_cam_par is reentrant and processed without parallelization.
Possible Predecessors
stationary_camera_self_calibration
See also
camera_calibration, cam_par_to_cam_mat
Module
Calibration
Result
If the parameters are valid, the operator cam_par_to_cam_mat returns the value H_MSG_TRUE. If necessary,
an exception is raised.
Parallelization Information
cam_par_to_cam_mat is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration
See also
stationary_camera_self_calibration, cam_mat_to_cam_par
Module
Calibration
Then, the point is projected into the image plane, i.e., onto the sensor chip.
For the modeling of this projection process that is determined by the used combination of camera, lens, and frame
grabber, HALCON provides the following three 3D camera models:
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
pc = (x, y, z)^T

u = Focus · x/z and v = Focus · y/z
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
pc = (x, y, z)^T

u = x and v = y

In both cases, the lens distortions are modeled by the following equations:

ũ = 2u / (1 + sqrt(1 − 4κ(u² + v²))) and ṽ = 2v / (1 + sqrt(1 − 4κ(u² + v²)))
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:
c = ũ/Sx + Cx and r = ṽ/Sy + Cy
For line scan cameras, the relative motion between the camera and the object must also be modeled. In HALCON,
the following assumptions for this motion are made:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, the camera coordinate system also moves
relative to the object, i.e., each image line has been imaged from a different position. This means that there would
be an individual pose for each image line. To make things easier, in HALCON, all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming
pc = (x, y, z)^T ,

the projection is described by the equations

m · D · ũ = x − t · Vx
−m · D · pv = y − t · Vy
m · Focus = z − t · Vz

with

D = 1 / (1 + κ(ũ² + (pv)²))
pv = Sy · Cy

and the pixel coordinates

c = ũ/Sx + Cx and r = t
Camera parameters
The total of 14 camera parameters for area scan cameras and 17 camera parameters for line scan cameras, respec-
tively, can be divided into the interior and exterior camera parameters:
Interior camera parameters: These parameters describe the characteristics of the used camera, especially the
dimension of the sensor itself and the projection properties of the used combination of lens, camera, and
frame grabber.
For area scan cameras, the above described camera model contains the following 8 parameters:
Focus: Focal length of the lens. 0 for telecentric lenses.
Kappa (κ): Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor. For pinhole cameras, it corresponds to the horizontal distance between two neighbor-
ing cells on the sensor. For telecentric cameras, it represents the horizontal size of a pixel in world
coordinates. Attention: This value increases if the image is subsampled!
Sy : Scale factor. For pinhole cameras, it corresponds to the vertical distance between two neighboring
cells on the sensor. For telecentric cameras, it represents the vertical size of a pixel in world coordi-
nates. Since in most cases the image signal is sampled line-synchronously, this value is determined
by the dimension of the sensor and need not be estimated for pinhole cameras by the calibration
process. Attention: This value increases if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Row coordinate of the image center point (center of the radial distortion).
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsampled!
For line scan cameras, the above described camera model contains the following 11 parameters:
Focus: Focal length of the lens.
Kappa: Distortion coefficient to model the pincushion- or barrel-shaped distortions caused by the lens.
Sx : Scale factor, corresponds to the horizontal distance between two neighboring cells on the sensor.
Attention: This value increases if the image is subsampled!
Sy : Scale factor. During the calibration, it appears only in the form pv = Sy · Cy . pv describes the
distance of the image center point from the sensor line in [meters]. Attention: This value increases
if the image is subsampled!
Cx : Column coordinate of the image center point (center of the radial distortion).
Cy : Distance of the image center point (center of the radial distortion) from the sensor line in [scanlines].
ImageWidth: Width of the sampled image. Attention: This value decreases if the image is subsampled!
ImageHeight: Height of the sampled image. Attention: This value decreases if the image is subsam-
pled!
Vx : X-component of the motion vector.
Vy : Y-component of the motion vector.
Vz : Z-component of the motion vector.
Note that the term focal length is not quite correct and would be appropriate only for an infinite object
distance. To simplify matters, the term focal length is always used even if the image distance is meant.
Exterior camera parameters: These 6 parameters describe the 3D pose, i.e., the position and orientation, of the
world coordinate system relative to the camera coordinate system. For line scan cameras, the pose of the
world coordinate system refers to the camera coordinate system of the first image line. Three parameters
describe the translation, three the rotation. See create_pose for more information about 3D poses. Note
that camera_calibration operates with all types of 3D poses for NStartPose.
When using the standard calibration plate, the world coordinate system is defined by the coordinate system
of the calibration plate, which is located in the middle of the surface of the calibration plate; its z-axis points
into the calibration plate, its x-axis to the right, and its y-axis downwards.
How to generate an appropriate calibration plate? The simplest method to determine the interior parameters of
a camera is to use the standard calibration plate as generated by the operator gen_caltab. You can
obtain high-precision calibration plates in various sizes and materials from your local distributor. In the case of
small distances between object and lens it may be sufficient to print the calibration pattern with a laser printer
and to mount it on cardboard. Otherwise – especially when using a wide-angle lens – it is possible to print
the PostScript file on a large ink-jet printer and to mount it on an aluminum plate. It is very important that
the mark coordinates in the calibration plate description file correspond to the real ones on the calibration
plate with high accuracy. Thus, the calibration plate description file has to be modified in accordance with
the measurement of the calibration plate!
How to take a set of suitable images? If you use the standard calibration plate, you can proceed in the following
way: With the combination of lens (fixed distance!), camera, and frame grabber to be calibrated a set of
images of the calibration plate has to be taken, see open_framegrabber and grab_image. The
following items have to be considered:
• At least 10 to 20 images should be used.
• The calibration plate has to be completely visible (incl. border!).
• Reflections etc. on the calibration plate should be avoided.
• Within the set of images, the calibration plate should appear in different positions and orientations: once
on the left of the image, once on the right, once at the bottom (left and right), once at the top (left or right),
at different distances, etc. In addition, the calibration plate should be rotated around its x- and/or y-axis so that the
perspective distortions of the calibration pattern are clearly visible. Thus, the exterior camera parameters
(camera pose with regard to the calibration plate) should take a large variety of different values!
• The calibration plate should fill at least a quarter of the whole image to ensure the robust detection of the
marks.
How to extract the calibration marks in the images? If a standard calibration plate is used, you can use the
operators find_caltab and find_marks_and_pose to determine the coordinates of the calibration
marks in each image and to compute a rough estimate for the exterior camera parameters. The concatenation
of these values can directly be used as initial values for the exterior camera parameters (NStartPose) in
camera_calibration.
Obviously, images in which the segmentation of the calibration plate (find_caltab) has failed or in which the
calibration marks have not been determined successfully by find_marks_and_pose should not be used.
How to find suitable initial values for the interior camera parameters? The operators
find_marks_and_pose (determination of initial values for the exterior camera parameters) and
camera_calibration require initial values for the interior camera parameters. These parameters can be
provided by an appropriate text file (see read_cam_par), which can be generated by write_cam_par
or edited manually.
For area scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the lens used, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells depends on the dimen-
sions of the chip used in the camera (see the technical specifications of the camera). Generally, common
chips are either 1/3"-chips (e.g., SONY XC-73, SONY XC-777), 1/2"-chips (e.g., SONY XC-999,
Panasonic WV-CD50), or 2/3"-chips (e.g., SONY DXC-151, SONY XC-77). Notice: The value of
Sx increases if the image is subsampled! Appropriate initial values are the same as those listed for Sy below.
The value of Sx is calibrated, since the video signal of a camera normally isn't sampled pixel-
synchronously.
Sy : Since most off-the-shelf cameras have square pixels, the same values are valid for Sy as for Sx.
In contrast to Sx, the value of Sy is not calibrated for pinhole cameras, because the video
signal of a camera normally is sampled line-synchronously. Thus, the initial value is equal to the
final value. Appropriate initial values are:
Full image (768*576) Subsampling (384*288)
1/3"-Chip 0.0000055 m 0.0000110 m
1/2"-Chip 0.0000086 m 0.0000172 m
2/3"-Chip 0.0000110 m 0.0000220 m
Cx and Cy : The initial values for the coordinates of the image center are half the image width and half the image
height, respectively. Notice: The values of Cx and Cy decrease if the image is subsampled! Appropriate initial
values are:
Full image (768*576) Subsampling (384*288)
Cx 384.0 192.0
Cy 288.0 144.0
ImageWidth and ImageHeight: These two parameters are determined by the frame grabber used
and therefore are not calibrated. Appropriate initial values are, for example:
Full image (768*576) Subsampling (384*288)
ImageWidth 768 384
ImageHeight 576 288
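The subsampling relations above can be summarized in a small helper. This is an illustrative sketch with assumed names, not part of the HALCON API: cell sizes grow with the subsampling factor, while the image center and the image size shrink by it.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical helper (not a HALCON function): derive initial values
 * for a subsampled image from the full-resolution values. */
typedef struct {
    double sx, sy;       /* cell sizes in meters   */
    double cx, cy;       /* image center in pixels */
    int    width, height;
} InitialParams;

InitialParams subsample(InitialParams full, int factor)
{
    InitialParams p = full;
    p.sx *= factor;      /* Sx and Sy increase */
    p.sy *= factor;
    p.cx /= factor;      /* Cx and Cy decrease */
    p.cy /= factor;
    p.width  /= factor;
    p.height /= factor;
    return p;
}
```

Applied to the 1/3"-chip values of the tables above, a factor of 2 reproduces the subsampling column.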
For line scan cameras, the following should be considered for the initial values of the single parameters:
Focus: The initial value is the nominal focal length of the lens used, e.g., 0.008 m.
Kappa: Use 0.0 as initial value.
Sx : The initial value for the horizontal distance between two neighboring cells can be taken from the
technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m, and 14e-6 m.
Notice: The value of Sx increases if the image is subsampled!
Sy : The initial value for the size of a cell in the direction perpendicular to the sensor line can also be
taken from the technical specifications of the camera. Typical initial values are 7e-6 m, 10e-6 m,
and 14e-6 m. Notice: The value of Sy increases if the image is subsampled! In contrast to Sx, the
value of Sy will NOT be calibrated for line scan cameras, because it appears only in the form pv =
Sy · Cy . Therefore, it cannot be determined separately.
Cx : The initial value for the x-coordinate of the image center is half the image width. Notice: The
value of Cx decreases if the image is subsampled! Appropriate initial values are:
Image width: 1024 2048 4096 8192
Cx: 512 1024 2048 4096
Cy : The initial value for the y-coordinate of the image center can normally be set to 0.
ImageWidth and ImageHeight: These two parameters are determined by the used frame grabber and
therefore are not calibrated.
Vx , Vy , Vz : The initial values for the x-, y-, and z-component of the motion vector depend on the image
acquisition setup. Assuming a camera that looks perpendicularly onto a conveyor belt, and that is
rotated around its optical axis such that the sensor line is perpendicular to the conveyor belt, i.e., the
y-axis of the camera coordinate system is parallel to the conveyor belt, the initial values Vx = Vz =
0. The initial value for Vy can then be determined, e.g., from a line scan image of an object with
known size (e.g., calibration plate, ruler):
Vy = l[m]/l[row]
with:
l[m] = Length of the object in object coordinates [meter]
l[row] = Length of the object in image coordinates [rows]
If, compared to the above setup, the camera is rotated by 30 degrees around its optical axis, i.e., around
the z-axis of the camera coordinate system, the initial values determined above must be changed as
follows:
Vx(new) = sin(30°) · Vy
Vy(new) = cos(30°) · Vy
Vz(new) = Vz = 0
If, compared to the first setup, the camera is rotated by -20 degrees around the x-axis of the camera
coordinate system, the following initial values result:
Vx(new) = Vx = 0
Vy(new) = cos(−20°) · Vy
Vz(new) = sin(−20°) · Vy
The quality of the initial values for Vx , Vy , and Vz is crucial for the success of the whole calibration.
If they are not precise enough, the calibration may fail.
Which camera parameters have to be estimated? The input parameter EstimateParams is used to select
which camera parameters to estimate. Usually this parameter is set to ’all’, i.e., all 6 exterior camera pa-
rameters (translation and rotation) and all interior camera parameters are determined. If the interior camera
parameters already have been determined (e.g., by a previous call to camera_calibration) it is often
desired to only determine the pose of the world coordinate system in camera coordinates (i.e., the exterior
camera parameters). In this case, EstimateParams can be set to ’pose’. This has the same effect as
EstimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, EstimateParams
contains a tuple of strings indicating the combination of parameters to estimate. In addition, parameters can
be excluded from estimation by using the prefix ~. For example, the value [’pose’,’~transx’] has the same
effect as [’alpha’,’beta’,’gamma’,’transy’,’transz’], while [’all’,’~focus’] determines all interior and ex-
terior parameters except the focus. The prefix ~ can be used with all parameter values except
’all’.
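The group expansion and ~-exclusion semantics can be sketched as follows. This is illustrative code, not the HALCON implementation; for brevity it handles only the six pose parameters, with ’pose’ acting as a group name for all of them.

```c
#include <assert.h>
#include <string.h>

#define NPOSE 6
static const char *POSE[NPOSE] =
    { "alpha", "beta", "gamma", "transx", "transy", "transz" };

/* Expand an EstimateParams-style tuple into the effective parameter
 * set, honoring the '~' exclusion prefix.  selected[i] is set to 1
 * if POSE[i] is to be estimated. */
void expand_params(const char **tuple, int n, int selected[NPOSE])
{
    memset(selected, 0, NPOSE * sizeof(int));
    for (int t = 0; t < n; ++t) {
        const char *s = tuple[t];
        int on = 1;
        if (s[0] == '~') { on = 0; ++s; }   /* exclusion prefix */
        if (strcmp(s, "pose") == 0) {       /* group name */
            for (int i = 0; i < NPOSE; ++i) selected[i] = on;
        } else {
            for (int i = 0; i < NPOSE; ++i)
                if (strcmp(s, POSE[i]) == 0) selected[i] = on;
        }
    }
}
```

With the tuple [’pose’,’~transx’], all pose parameters except transx end up selected, as described above.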
What is the order within the individual parameters? The length of the tuple NStartPose corresponds to the
number of calibration images, e.g., using 15 images leads to a length of the tuple NStartPose equal to
15 · 7 = 105 (15 times the 7 exterior camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.
This fixed number of calibration images has to be considered within the tuples with the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples with the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
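The "image after image" ordering amounts to simple index arithmetic. The helper below uses a hypothetical name; it returns the position of a given model point's 2D mark within NRow/NCol.

```c
#include <assert.h>

/* The 2D mark of model point m (0-based) in image i (0-based) is
 * stored at position i * num_marks + m within NRow and NCol. */
long mark_index(long image, long mark, long num_marks)
{
    return image * num_marks + mark;
}
```

With 49 marks per image, model point 0 is found at positions 0, 49, 98, ... (i.e., the 1st, 50th, 99th, ... entries), matching the enumeration above.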
The 3D model points can be read from a calibration plate description file using the operator
caltab_points. Initial values for the poses of the calibration plate can be determined by applying
find_marks_and_pose for each image. The tuple NStartPose is set by the concatenation of all
these poses.
What is the meaning of the output parameters? If the camera calibration process is finished successfully, i.e.,
the minimization process has converged, the output parameters CamParam and NFinalPose contain the
computed exact values for the interior and exterior camera parameters. The length of the tuple NFinalPose
corresponds to the length of the tuple NStartPose.
The representation type of NFinalPose corresponds to the representation type of the first pose in
NStartPose (see create_pose). You can convert the representation type with convert_pose_type.
The computed average errors (Errors) give an impression of the accuracy of the calibration. The error
values (deviations in x and y coordinates) are measured in pixels.
Must I use a planar calibration object? No. The operator camera_calibration is designed so that
the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences; see the para-
graph above explaining the order of the individual parameters.
Thus, it makes no difference how the required 3D model marks and the corresponding extracted 2D marks are
determined. On the one hand, it is possible to use a 3D calibration pattern; on the other hand, you can also use any
characteristic points (natural landmarks) with known positions in the world. By setting EstimateParams
to ’pose’, it is thus possible to compute the pose of an object in camera coordinates! For this, at least three
3D/2D-correspondences are necessary as input. NStartPose can, e.g., be generated directly as shown in
the program example for create_pose.
Attention
The minimization process of the calibration depends on the initial values of the interior (StartCamParam) and
exterior (NStartPose) camera parameters. The computed average errors Errors give an impression of the
accuracy of the calibration. The errors (deviations in x and y coordinates) are measured in pixels.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all x coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all y coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered tuple with all z coordinates of the calibration marks (in meters).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Initial values for the interior camera parameters.
. NStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Ordered tuple with all initial values for the exterior camera parameters.
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char * / Hlong
Camera parameters to be estimated.
Default Value : "all"
List of values : EstimateParams ∈ {"all", "pose", "alpha", "beta", "gamma", "transx", "transy", "transz",
"focus", "kappa", "cx", "cy", "sx", "sy", "vx", "vy", "vz"}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Interior camera parameters.
. NFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Ordered tuple with all exterior camera parameters.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Average error distances in pixels.
Example (Syntax: HDevelop)
Result
camera_calibration returns H_MSG_TRUE if all parameter values are correct and the desired camera pa-
rameters have been determined by the minimization algorithm. If necessary, an exception handling is raised.
Parallelization Information
camera_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration
• ’fixed’: Only Kappa is modified, the other interior camera parameters remain unchanged. In general, this
leads to a change of the visible part of the scene.
• ’fullsize’: The scale factors Sx and Sy and the image center point [Cx , Cy ]T are modified in order to preserve
the visible part of the scene. Thus, all points visible in the original image are also visible in the modified
(rectified) image. In general, this leads to undefined pixels in the modified image.
• ’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. Similarly to ’fullsize’, the scale factors and the image center point
are modified.
• ’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in
the modified (rectified) image, i.e., the scale factors Sx and Sy and the image center point [Cx , Cy ]T are
modified. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’
additionally the size of the modified image is increased such that the image resolution does not decrease in
any part of the image.
In all modes the radial distortion coefficient κ in CamParOut is set to Kappa. The transformation of a pixel in
the modified image into the image plane using CamParOut results in the same point as the transformation of a
pixel in the original image via CamParIn.
Parameter
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Mode
Default Value : "adaptive"
Suggested values : Mode ∈ {"fullsize", "adaptive", "fixed", "preserve_resolution"}
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double / Hlong
Interior camera parameters (original).
. Kappa (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Desired radial distortion.
Default Value : 0.0
. CamParOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double * / Hlong *
Interior camera parameters (modified).
Result
change_radial_distortion_cam_par returns H_MSG_TRUE if all parameter values are correct. If nec-
essary, an exception handling is raised.
Parallelization Information
change_radial_distortion_cam_par is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par
Possible Successors
change_radial_distortion_image, change_radial_distortion_contours_xld,
gen_radial_distortion_map
See also
camera_calibration, read_cam_par, change_radial_distortion_image,
change_radial_distortion_contours_xld
Module
Calibration
Parallelization Information
change_radial_distortion_contours_xld is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, gen_contours_skeleton_xld, edges_sub_pix,
smooth_contours_xld
Possible Successors
gen_polygons_xld, smooth_contours_xld
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_image
Module
Calibration
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld
Module
Calibration
Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator contour_to_world_plane_xld transforms contour points given in Contours into the plane
z=0 in a world coordinate system and returns the 3D contour points in ContoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose. In CamParam
you must pass the interior camera parameters (see write_cam_par for the sequence of the parameters and the
underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour ContoursTrans are obtained.
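The final intersection step can be sketched as plain ray geometry. This is illustrative code, not the HALCON implementation: the line of sight, already transformed into the world coordinate system, is intersected with the plane z=0.

```c
#include <assert.h>
#include <math.h>

/* Intersect a ray with the plane z = 0.  o is a point on the ray
 * (e.g., the projection center in world coordinates), d its direction.
 * Returns 0 if the ray is parallel to the plane, 1 otherwise; on
 * success p receives the world x and y coordinates of the hit point. */
int intersect_plane_z0(const double o[3], const double d[3], double p[2])
{
    if (d[2] == 0.0)
        return 0;
    double t = -o[2] / d[2];
    p[0] = o[0] + t * d[0];   /* world x */
    p[1] = o[1] + t * d[1];   /* world y */
    return 1;
}
```

Each transformed contour point is obtained by running this intersection for the corresponding line of sight.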
Parameter
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject
Input XLD contours to be transformed in image coordinates.
. ContoursTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
Transformed XLD contours in world coordinates.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or dimension
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
Example (Syntax: HDevelop)
Result
contour_to_world_plane_xld returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
contour_to_world_plane_xld is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
image_points_to_world_plane
Module
Calibration
Generate a calibration plate description file and a corresponding PostScript file. (obsolete)
create_caltab has been replaced with the operator gen_caltab. The operator is contained and described
for compatibility reasons only.
create_caltab generates the description of a standard calibration plate for HALCON. This calibration plate
consists of 49 black circular marks on a white plane which are surrounded by a black frame. The parameter Width
sets the width (equal to the height) of the whole calibration plate in meters. Using a width of 0.8 m, the distance
between two neighboring marks becomes 10 cm, and the mark radius and the frame width are set to 2.5 cm. The
calibration plate coordinate system is located in the middle of the surface of the calibration plate, its z-axis points
into the calibration plate, its x-axis points to the right, and its y-axis points downwards.
The file CalTabDescrFile contains the calibration plate description, e.g., the number of rows and columns
of the calibration plate, the geometry of the surrounding frame (see find_caltab), and the coordinates and
the radius of all calibration plate marks given in the calibration plate coordinate system. A file generated by
create_caltab looks like the following (comments are marked by a ’#’ at the beginning of a line):
#
# Description of the standard calibration plate
# used for the camera calibration in HALCON
#
# 7 rows X 7 columns
# Distance between mark centers [meter]: 0.1
# Quadratic frame (with outer and inner border) around calibration plate
w 0.025
o -0.41 0.41 0.41 -0.41
i -0.4 0.4 0.4 -0.4
# calibration marks at y = 0 m
-0.3 0 0.025
-0.2 0 0.025
-0.1 0 0.025
0 0 0.025
0.1 0 0.025
0.2 0 0.025
0.3 0 0.025
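The mark layout in the listing above follows fixed proportions. From the stated 0.8 m example one can infer (this scaling rule is an assumption, not a documented formula) that the mark spacing is Width/8 and the mark radius and frame width are Width/32:

```c
#include <assert.h>
#include <math.h>

/* Assumed proportions for the 7x7 standard plate, inferred from the
 * 0.8 m example in the text (not a published formula). */
void caltab_geometry(double width, double *spacing, double *radius)
{
    *spacing = width / 8.0;    /* 0.8 m plate -> 0.1 m   */
    *radius  = width / 32.0;   /* 0.8 m plate -> 0.025 m */
}
```

For Width = 0.8 m this reproduces the 10 cm mark spacing and 2.5 cm mark radius of the listing.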
The file CalTabFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double
Width of the calibration plate in meters.
Default Value : 0.8
Suggested values : Width ∈ {1.2, 0.8, 0.6, 0.4, 0.2, 0.1}
Recommended Increment : 0.1
Restriction : 0.0 < Width
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .filename.write ; const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CalTabFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; const char *
File name of the PostScript file.
Default Value : "caltab.ps"
Example (Syntax: HDevelop)
Result
create_caltab returns H_MSG_TRUE if all parameter values are correct and both files have been written
successfully. If necessary, an exception handling is raised.
Parallelization Information
create_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
gen_caltab, find_caltab, find_marks_and_pose, camera_calibration, disp_caltab,
sim_caltab
Module
Foundation
Project and visualize the 3D model of the calibration plate in the image.
disp_caltab is used to visualize the calibration marks and the connecting lines between the marks of the
used calibration plate (CalTabDescrFile) in the window specified by WindowHandle. Additionally, the
x- and y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of
the calibration plate is projected into the image plane using the interior (CamParam) and exterior camera pa-
rameters (CaltabPose, i.e., the pose of the calibration plate in camera coordinates). The underlying camera
model (pinhole, telecentric, or line scan camera with radial distortion) is described in write_cam_par and
camera_calibration.
Typically, disp_caltab is used to verify the result of the camera calibration (see camera_calibration)
by superimposing it onto the original image. The current line width can be set by set_line_width, the current
color can be set by set_color. Additionally, the font type of the labels of the coordinate axes can be set by
set_font.
The parameter ScaleFac influences the number of supporting points to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points, if the image part in the output window is
displayed with magnification (see set_part).
Parameter
Result
disp_caltab returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
disp_caltab is reentrant, local, and processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par, read_pose
See also
find_marks_and_pose, camera_calibration, sim_caltab, write_cam_par,
Result
find_caltab returns H_MSG_TRUE if all parameter values are correct and an image region is
found. The behavior in case of empty input (no image given) can be set via set_system
(’no_object_result’,<Result>) and the behavior in case of an empty result region via set_system
(’store_empty_region’,<true/false>). If necessary, an exception handling is raised.
Parallelization Information
find_caltab is reentrant and processed without parallelization.
Possible Predecessors
read_image
Possible Successors
find_marks_and_pose
See also
find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
caltab_points, gen_caltab
Module
Foundation
Extract the 2D calibration marks from the image and calculate initial values for the exterior camera parameters.
find_marks_and_pose is used to determine the necessary input data for the subsequent camera calibration
(see camera_calibration): First, the 2D center points [RCoord,CCoord] of the calibration marks within
the region CalTabRegion of the input image Image are extracted and ordered. Secondly, a rough estimate for
the exterior camera parameters (StartPose) is computed, i.e., the 3D pose (= position and orientation) of the
calibration plate relative to the camera coordinate system (see create_pose for more information about 3D
poses).
In the input image Image an edge detector is applied (see edges_image, mode ’lanser2’) to the region
CalTabRegion, which can be found by applying the operator find_caltab. The filter parameter for this
edge detection can be tuned via Alpha. In the edge image closed contours are searched for: The number of closed
contours must correspond to the number of calibration marks as described in the calibration plate description file
CalTabDescrFile, and the contours have to be elliptically shaped. Contours shorter than MinContLength are
discarded, just as contours enclosing regions with a diameter larger than MaxDiamMarks (e.g., the border of the
calibration plate).
For the detection of contours a threshold operator is applied on the resulting amplitudes of the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to StartThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by DeltaThresh down to a minimum value of
MinThresh.
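The threshold schedule described above can be sketched as a simple loop. Here try_extract stands in for the closed-contour search plus pose estimate; all names are placeholders, not HALCON functions.

```c
#include <assert.h>

/* Try StartThresh first, then decrease by DeltaThresh until MinThresh
 * is reached.  Returns the threshold that succeeded, or -1 if the
 * extraction failed for all thresholds. */
int find_with_decreasing_threshold(int start, int delta, int min,
                                   int (*try_extract)(int thresh))
{
    for (int t = start; t >= min; t -= delta)
        if (try_extract(t))
            return t;
    return -1;
}

/* demo predicates for illustration only */
static int demo_extract(int thresh)   { return thresh <= 100; }
static int never_succeeds(int thresh) { (void)thresh; return 0; }
```

With the default values (StartThresh 128, DeltaThresh 10, MinThresh 18), the thresholds tried are 128, 118, 108, ... down to 18.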
Each of the found contours is refined with subpixel accuracy (see edges_sub_pix) and subsequently approx-
imated by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two tu-
ples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate description
file CalTabDescrFile, since this fixes the correspondences between extracted image marks and known model
marks (given by caltab_points)! If a triangular orientation mark is defined in a corner of the plate by the
plate description file (see gen_caltab), the mark will be detected and the point order is returned in row-major
order beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Else, if no orientation mark is defined, the order of the center points is in row-major order
beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the exterior camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate StartPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator camera_calibration.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / uint2
Input image.
. CalTabRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject
Region of the calibration plate.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Initial values for the interior camera parameters.
Number of elements : (StartCamParam = 8) ∨ (StartCamParam = 11)
. StartThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Initial threshold value for contour detection.
Default Value : 128
List of values : StartThresh ∈ {80, 96, 112, 128, 144, 160}
. DeltaThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Loop value for successive reduction of StartThresh.
Default Value : 10
List of values : DeltaThresh ∈ {6, 8, 10, 12, 14, 16, 18, 20, 22}
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong
Minimum threshold for contour detection.
Default Value : 18
List of values : MinThresh ∈ {8, 10, 12, 14, 16, 18, 20, 22}
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Filter parameter for contour detection, see edges_image.
Default Value : 0.9
Suggested values : Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1}
Typical range of values : 0.2 ≤ Alpha ≤ 2.0
Restriction : Alpha > 0.0
. MinContLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Minimum length of the contours of the marks.
Default Value : 15.0
Suggested values : MinContLength ∈ {10.0, 15.0, 20.0, 30.0, 40.0, 100.0}
Restriction : MinContLength > 0.0
. MaxDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Maximum expected diameter of the marks.
Default Value : 100.0
Suggested values : MaxDiamMarks ∈ {50.0, 100.0, 150.0, 200.0, 300.0}
Restriction : MaxDiamMarks > 0.0
. RCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tuple with row coordinates of the detected marks.
. CCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Tuple with column coordinates of the detected marks.
. StartPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double * / Hlong *
Estimation for the exterior camera parameters.
Number of elements : 7
Result
find_marks_and_pose returns H_MSG_TRUE if all parameter values are correct and an estimation for the
exterior camera parameters has been determined successfully. If necessary, an exception handling is raised.
Parallelization Information
find_marks_and_pose is reentrant and processed without parallelization.
Possible Predecessors
find_caltab
Possible Successors
camera_calibration
See also
find_caltab, camera_calibration, disp_caltab, sim_caltab, read_cam_par,
read_pose, create_pose, pose_to_hom_mat3d, caltab_points, gen_caltab,
edges_sub_pix, edges_image
Module
Foundation
HALCON 8.0.2
1118 CHAPTER 15. TOOLS
The file CalTabPSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalTabDescrFile exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameter
Result
gen_caltab returns H_MSG_TRUE if all parameter values are correct and both files have been written success-
fully. If necessary, an exception handling is raised.
Parallelization Information
gen_caltab is processed completely exclusively without parallelization.
Possible Successors
read_cam_par, caltab_points
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
gen_image_to_world_plane_map generates a projection map Map, which describes the mapping between
the image plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used
to rectify an image with the operator map_image. The rectified image shows neither radial nor perspective dis-
tortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the plane
of measurements. The world coordinate system is chosen by passing its 3D pose relative to the camera coordinate
system in WorldPose. In CamParam you must pass the interior camera parameters (see write_cam_par for
the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The size of the images to be mapped can be specified by the parameters WidthIn and HeightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters WidthMapped, HeightMapped, and Scale.
WidthMapped and HeightMapped must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
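As an illustration of these conventions, the following sketch converts a Scale value (unit string or number) into a numeric pixel size. The unit strings are the ones listed above; the numeric mapping and the helper name are assumptions, based on meters being the original unit (as with the standard calibration plate):

```python
# Hypothetical helper illustrating the Scale conventions described above:
# a unit string is translated into the pixel size in meters (assuming the
# calibration object was specified in meters); a number is used directly
# as the ratio 'desired pixel size / original unit'.
UNIT_TO_METERS = {
    "m": 1.0,
    "cm": 0.01,
    "mm": 0.001,
    "microns": 1e-6,
    "µm": 1e-6,
}

def scale_to_number(scale):
    if isinstance(scale, str):
        return UNIT_TO_METERS[scale]
    return float(scale)

# e.g., a pixel size of one millimeter in the rectified image
pixel_size = scale_to_number("mm")
```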
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
The mapping function is stored in the output image Map. Map has the same size as the resulting images after
the mapping. If no interpolation is chosen, Map consists of one image containing one channel, in which for each
pixel of the resulting image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen, Map consists of one image containing
five channels. In the first channel for each pixel in the resulting image the linearized coordinates of the pixel in
the input image is stored that is in the upper left position relative to the transformed coordinates. The four other
channels contain the weights of the four neighboring pixels of the transformed coordinates which are used for the
bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to
the transformed coordinates. If several images have to be mapped using the same camera parameters,
gen_image_to_world_plane_map in combination with map_image is much more efficient than the op-
erator image_to_world_plane because the mapping function needs to be computed only once.
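The channel layout described above can be illustrated with a small Python sketch that evaluates such a bilinear map for one image. This is an illustration of the encoding, not HALCON's internal implementation; row-major linearized indices are assumed:

```python
# Sketch: applying a bilinear mapping encoded as described above.
# Assumptions: map_index holds the row-major linearized index of the
# upper-left source pixel for each output pixel; map_weights holds the
# weights of the upper-left, upper-right, lower-left, and lower-right
# neighbors (channels 2-5).

def apply_bilinear_map(image, width_in, map_index, map_weights):
    """image: flat list (row-major); returns the mapped gray values."""
    out = []
    for idx, (w_ul, w_ur, w_ll, w_lr) in zip(map_index, map_weights):
        g = (w_ul * image[idx]
             + w_ur * image[idx + 1]
             + w_ll * image[idx + width_in]
             + w_lr * image[idx + width_in + 1])
        out.append(g)
    return out

# Usage: a 2x2 input image, one output pixel exactly between all four pixels
img = [0.0, 10.0, 20.0, 30.0]   # 2x2 image, row-major
res = apply_bilinear_map(img, 2, [0], [(0.25, 0.25, 0.25, 0.25)])
```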
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2
Image containing the mapping data.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
3D pose of the world coordinate system in camera coordinates.
Number of elements : 7
. WidthIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the images to be transformed.
. HeightIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the images to be transformed.
. WidthMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Htuple . Hlong
Width of the resulting mapped images in pixels.
. HeightMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Htuple . Hlong
Height of the resulting mapped images in pixels.
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . const char * / Hlong / double
Scale or unit.
Default Value : "m"
Suggested values : Scale ∈ {"m", "cm", "mm", "microns", "µm", 1.0, 0.01, 0.001, "1.0e-6", 0.0254, 0.3048,
0.9144}
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Example (Syntax: HDevelop)
* first determine parameters such that the entire image content is visible
* -> transform image boundary into world plane, determine smallest
* rectangle around it
get_image_pointer1(Image, Pointer, Type, Width, Height)
gen_rectangle1 (ImageRect, 0, 0, Height-1, Width-1)
gen_contour_region_xld (ImageRect, ImageBorder, ’border’)
contour_to_world_plane_xld(ImageBorder, ImageBorderWCS, FinalCamParam,
WorldPose, 1)
smallest_rectangle1_xld (ImageBorderWCS, MinY, MinX, MaxY, MaxX)
* -> move the pose to the upper left corner of the surrounding rectangle
set_origin_pose(WorldPose, MinX, MinY, 0, PoseForEntireImage)
* -> determine the scaling factor such that the center pixel has the same
* size in the original and in the rectified image
* method: transform corner points of the pixel into the world
* coordinate system, compute their distances, and use their
* mean as the scaling factor
image_points_to_world_plane(FinalCamParam, PoseForEntireImage,
[Height/2, Height/2, Height/2+1],
[Width/2, Width/2+1, Width/2],
1, WorldPixelX, WorldPixelY)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[1], WorldPixelX[1],
WorldLength1)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[2], WorldPixelX[2],
WorldLength2)
ScaleForSimilarPixelSize := (WorldLength1+WorldLength2)/2
* -> determine output image size such that entire input image fits into it
ExtentX := MaxX-MinX
ExtentY := MaxY-MinY
WidthRectifiedImage := ExtentX/ScaleForSimilarPixelSize
HeightRectifiedImage := ExtentY/ScaleForSimilarPixelSize
* create mapping with the determined parameters
gen_image_to_world_plane_map(Map, FinalCamParam, PoseForEntireImage,
Width, Height,
WidthRectifiedImage, HeightRectifiedImage,
ScaleForSimilarPixelSize, ’bilinear’)
* transform grabbed images with the created map
while(1)
grab_image_async(Image, FGHandle, -1)
map_image(Image, Map, RectifiedImage)
endwhile
Result
gen_image_to_world_plane_map returns H_MSG_TRUE if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
gen_image_to_world_plane_map is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Possible Successors
map_image
Alternatives
image_to_world_plane
See also
map_image, contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
gen_radial_distortion_map computes the mapping of images corresponding to a changing radial
distortion in accordance with the interior camera parameters CamParIn and CamParOut, which can be obtained,
e.g., using the operator camera_calibration. CamParIn and CamParOut contain the old and the new
camera parameters including the old and the new radial distortion, respectively (also see write_cam_par for
the sequence of the parameters and the underlying camera model). Each pixel of the potential output image is
transformed into the image plane using CamParOut and subsequently projected into a subpixel position of the
potential input image using CamParIn.
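This two-step construction can be sketched in Python as follows. The division model used here for the radial distortion is one common form, chosen only for illustration (see write_cam_par for the exact camera model); the focal length is omitted, so kappa acts directly on the sensor-plane coordinates:

```python
import math

def undistort(ut, vt, kappa):
    """Distorted sensor-plane point -> undistorted point (division model,
    illustrative form only)."""
    f = 1.0 / (1.0 + kappa * (ut * ut + vt * vt))
    return ut * f, vt * f

def distort(u, v, kappa):
    """Undistorted sensor-plane point -> distorted point (exact inverse
    of undistort for this model)."""
    f = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v)))
    return u * f, v * f

def map_pixel(row, col, cam_out, cam_in):
    """Map a pixel of the output image to a subpixel position of the
    input image: undistort with the output parameters, then re-distort
    with the input parameters. cam_*: dicts with sx, sy (pixel size),
    cx, cy (principal point), and kappa (hypothetical layout)."""
    u = (col - cam_out["cx"]) * cam_out["sx"]
    v = (row - cam_out["cy"]) * cam_out["sy"]
    u, v = undistort(u, v, cam_out["kappa"])
    ut, vt = distort(u, v, cam_in["kappa"])
    return cam_in["cy"] + vt / cam_in["sy"], cam_in["cx"] + ut / cam_in["sx"]
```

Setting kappa to 0 in the output parameters yields a rectifying map in this sketch.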
The mapping function is stored in the output image Map. The size of Map is given by the camera parameters
CamParOut and therefore defines the size of the resulting mapped images using map_image. The size of the
images to be mapped with map_image is determined by the camera parameters CamParIn. If no interpolation
is chosen (Interpolation = ’none’), Map consists of one image containing one channel, in which for each
pixel of the output image the linearized coordinate of the pixel of the input image is stored that is the nearest
neighbor to the transformed coordinates. If bilinear interpolation is chosen (Interpolation = ’bilinear’),
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image
the linearized coordinate of the pixel in the input image is stored that is in the upper left position relative to
the transformed coordinates. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
If CamParOut was computed via change_radial_distortion_cam_par, the mapping describes the
effect of a lens with a modified radial distortion. If κ is 0, the mapping corresponds to a rectification.
If several images have to be mapped using the same camera parameters, gen_radial_distortion_map
in combination with map_image is much more efficient than the operator
change_radial_distortion_image because the transformation must be computed only once.
Parameter
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; Hobject * : int4 / uint2
Image containing the mapping data.
. CamParIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Old camera parameters.
Number of elements : 8
. CamParOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
New camera parameters.
Number of elements : 8
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of interpolation.
Default Value : "bilinear"
List of values : Interpolation ∈ {"none", "bilinear"}
Result
gen_radial_distortion_map returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
gen_radial_distortion_map is reentrant and processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, camera_calibration, hand_eye_calibration
Possible Successors
map_image
Alternatives
change_radial_distortion_image
See also
change_radial_distortion_contours_xld
Module
Calibration
The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator affine_trans_point_3d to the two points.
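As an illustration of the two-point representation, the following Python sketch computes two points on the line of sight for a simple pinhole camera without distortion (the particular choice of points is an assumption for illustration; HALCON's actual choice may differ) and transforms them with a 3D affine transformation:

```python
def line_of_sight(row, col, f, sx, sy, cx, cy):
    """Two points on the line of sight of pixel (row, col) for a pinhole
    camera without distortion. f: focal length; sx, sy: pixel size;
    cx, cy: principal point."""
    p = (0.0, 0.0, 0.0)                        # projection center
    q = ((col - cx) * sx, (row - cy) * sy, f)  # pixel on the image plane
    return p, q

def affine_trans_point_3d(hom_mat, pt):
    """Apply a 3x4 homogeneous transformation (12 values, row-major),
    mirroring what affine_trans_point_3d does for a single point."""
    x, y, z = pt
    return tuple(hom_mat[4 * i + 0] * x + hom_mat[4 * i + 1] * y
                 + hom_mat[4 * i + 2] * z + hom_mat[4 * i + 3]
                 for i in range(3))

# Usage: transform both points to move the whole line of sight in 3D
p, q = line_of_sight(240.0, 320.0, 0.008, 1e-5, 1e-5, 320.0, 240.0)
shift = [1.0, 0.0, 0.0, 0.5,   # translation by (0.5, 0, 0)
         0.0, 1.0, 0.0, 0.0,
         0.0, 0.0, 1.0, 0.0]
p2 = affine_trans_point_3d(shift, p)
q2 = affine_trans_point_3d(shift, q)
```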
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Row coordinate of the pixel.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Column coordinate of the pixel.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. PX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinate of the first point on the line of sight in the camera coordinate system
. PY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinate of the first point on the line of sight in the camera coordinate system
. PZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Z coordinate of the first point on the line of sight in the camera coordinate system
. QX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
X coordinate of the second point on the line of sight in the camera coordinate system
. QY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Y coordinate of the second point on the line of sight in the camera coordinate system
Result
get_line_of_sight returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
get_line_of_sight is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, camera_calibration
Possible Successors
affine_trans_point_3d
See also
camera_calibration, disp_caltab, read_cam_par, project_3d_point,
affine_trans_point_3d
Module
Calibration
Output
The resulting Pose is of code-0 (see create_pose) and represents the pose of the center of the rectangle. You
can compute the pose of the corners of the rectangle as follows:
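One way to do this can be sketched in Python: translate the corner offsets by the pose's rotation and add the center translation. The rotation order Rx · Ry · Rz and degree units assumed below for a code-0 pose are assumptions for illustration (see create_pose for the exact convention):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def rectangle_corners(pose, width, height):
    """pose: (tx, ty, tz, ra, rb, rg), angles in degrees; returns the
    3D positions of the four corners of the rectangle."""
    tx, ty, tz = pose[0], pose[1], pose[2]
    a, b, g = (math.radians(x) for x in pose[3:6])
    rot = matmul(rot_x(a), matmul(rot_y(b), rot_z(g)))
    corners = []
    for dx, dy in ((-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)):
        ox, oy, oz = matvec(rot, (dx, dy, 0.0))
        corners.append((tx + ox, ty + oy, tz + oz))
    return corners
```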
A rectangle is symmetric with respect to its x, y, and z axis and one and the same contour can represent a rectangle
in 4 different poses. The angles in Pose are normalized to be in the range [−90; 90] degrees and the rest of the 4
possible poses can be computed by combining flips around the corresponding axis:
* NOTE: the following code works ONLY for poses of type code-0
* as returned by get_rectangle_pose
*
* flip around z-axis
PoseFlippedZ := Pose
PoseFlippedZ[5] := PoseFlippedZ[5]+180
* flip around y-axis
PoseFlippedY := Pose
PoseFlippedY[4] := PoseFlippedY[4]+180
PoseFlippedY[5] := -PoseFlippedY[5]
* flip around x-axis
PoseFlippedX := Pose
PoseFlippedX[3] := PoseFlippedX[3]+180
PoseFlippedX[4] := -PoseFlippedX[4]
PoseFlippedX[5] := -PoseFlippedX[5]
Note that if the rectangle is a square (Width == Height) the number of alternative poses is 8.
If more than one contour is given in Contour, a corresponding tuple of values for both Width and Height
has to be provided as well. However, if only one value is provided for each of these arguments, this value is
applied to each processed contour. A pose is estimated for each processed contour and all poses are concatenated
in Pose (see the example below).
The accuracy of the estimated pose depends on the following properties of the contour:
• the ratio Width/Height
• the length of the projected contour
• the degree of perspective distortion of the contour
In order to achieve an accurate pose estimation, there are three corresponding criteria that should be considered:
First, the ratio Width/Height should fulfill 1/3 < Width/Height < 3.
For a rectangular object deviating from this criterion, its longer side dominates the determination of its pose. This
causes instability in the estimation of the angle around the rectangle’s longer axis. In the extreme case when one
of the dimensions is 0, the rectangle is in fact a line segment, whose pose cannot be estimated.
Secondly, the length of each side of the contour should be at least 20 pixels. An error is returned if a side of the
contour is less than 5 pixels long.
Thirdly, the more the contour appears projectively distorted, the more stably the algorithm works. Therefore, the
pose of a rectangle tilted w.r.t. the image plane can be estimated accurately, whereas the pose of a rectangle
parallel to the image plane of the camera could be unstable. This is further discussed in the next paragraph.
Additionally, there is a rule of thumb that ensures projective distortion: the rectangle should be placed in space
such that its size in x and y dimension in the camera coordinate system should not be less than 1/10th of its
distance from the camera in z direction.
get_rectangle_pose provides two measures for the accuracy of the estimated Pose. Error is the average
pixel error between the contour points and the modeled rectangle reprojected on the image. If Error exceeds
0.5, this indicates that the algorithm did not converge properly, and the resulting Pose should not be used.
CovPose contains 36 entries representing the 6 × 6 covariance matrix of the first 6 entries of Pose. The above
mentioned instability of the angle around the rectangle’s longer axis can be detected by checking that the absolute
values of the variances and covariances of the rotations around the x and y axis (CovPose[21], CovPose[28],
and CovPose[22] == CovPose[27]) do not exceed 0.05. Further, unusually increased values of any of the
covariances and especially of the variances (the 6 values on the diagonal of CovPose with indices 0, 7, 14, 21, 28,
and 35, respectively) indicate a poor quality of Pose.
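The plausibility checks just described can be sketched in Python. pose_is_reliable is a hypothetical helper name; the thresholds 0.5 and 0.05 are the ones given in the text, and CovPose is assumed to be a flat, row-major tuple of 36 values:

```python
def pose_is_reliable(error, cov_pose, max_error=0.5, max_rot_cov=0.05):
    """Hypothetical helper applying the checks described above."""
    if error > max_error:
        # reprojection error too large: the algorithm probably did not
        # converge properly
        return False
    # variances/covariances of the rotations around the x and y axes
    rot_entries = (cov_pose[21], cov_pose[28], cov_pose[22], cov_pose[27])
    return all(abs(v) <= max_rot_cov for v in rot_entries)
```

The variances of all six pose parameters sit on the diagonal at indices 0, 7, 14, 21, 28, and 35.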
Parameter
Result
get_rectangle_pose returns H_MSG_TRUE if all parameter values are correct and the position of the
rectangle has been determined successfully. If the provided contour(s) cannot be segmented as a quadrangle,
get_rectangle_pose returns H_ERR_FIT_QUADRANGLE. If necessary, an exception handling is
raised.
Parallelization Information
get_rectangle_pose is reentrant, local, and processed without parallelization.
Possible Predecessors
edges_sub_pix
See also
get_circle_pose, set_origin_pose, camera_calibration
References
G. Schweighofer and A. Pinz: “Robust Pose Estimation from a Planar Target”; IEEE Transactions on Pattern
Analysis and Machine Intelligence (PAMI), 28(12):2024-2030, 2006.
Module
3D Metrology
The two hand-eye configurations are discussed in more detail below, followed by general information about the
process of hand-eye calibration.
Moving camera:  cam Hcal = cam Htool · tool Hbase · base Hcal
    cam Htool:  CamStartPose / CamFinalPose
    tool Hbase: MRelPoses
    base Hcal:  BaseStartPose / BaseFinalPose
From the set of calibration images, the operator hand_eye_calibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam Htool ) and the pose of the
calibration object in the robot base coordinate system (base Hcal ). In the input parameters CamStartPose and
BaseStartPose, you must specify suitable starting values for these transformations which are constant over
all calibration images. hand_eye_calibration then returns the calibrated values in CamFinalPose and
BaseFinalPose.
In contrast, the transformation in the middle of the chain, tool Hbase , is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of
the base coordinate system in robot tool coordinates). You must specify the (inverse) robot poses in the calibration
images in the parameter MRelPoses.
Internally, hand_eye_calibration uses a Newton-type algorithm to minimize an error function based on
normal equations. Analogously to the calibration of the camera itself (see camera_calibration), the hand-
eye calibration becomes more robust if you use many calibration images that were acquired with different robot
poses.
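The moving-camera chain of transformations can be sketched with 4x4 homogeneous matrices in Python. All pose values below are hypothetical, and pure translations are used for brevity:

```python
def matmul(a, b):
    """Product of two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous matrix for a pure translation."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

# cam_H_tool and base_H_cal are constant over all calibration images;
# tool_H_base changes with every robot pose (hypothetical values)
cam_H_tool  = translation(0.0, 0.0, 0.1)
tool_H_base = translation(-0.2, -0.3, -0.8)
base_H_cal  = translation(0.5, 0.0, 0.0)

# the chain: cam_H_cal = cam_H_tool * tool_H_base * base_H_cal
cam_H_cal = matmul(matmul(cam_H_tool, tool_H_base), base_H_cal)
```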
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (i.e.,
the exterior camera parameters), corresponds to a chain of poses or homogeneous transformation matrices, this
time from the calibration object via the robot’s tool to its base and finally to the camera:
Stationary camera:  cam Hcal = cam Hbase · base Htool · tool Hcal
    cam Hbase:  CamStartPose / CamFinalPose
    base Htool: MRelPoses
    tool Hcal:  BaseStartPose / BaseFinalPose
Analogously to the configuration with a moving camera, the operator hand_eye_calibration determines
the two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (cam Hbase ) and the pose of the calibration object relative to the robot tool (tool Hcal ). In the input parameters
CamStartPose and BaseStartPose, you must specify suitable starting values for these transformations.
hand_eye_calibration then returns the calibrated values in CamFinalPose and BaseFinalPose.
Please note that the names of the parameters BaseStartPose and BaseFinalPose are misleading for this
configuration!
The transformation in the middle of the chain, base Htool , describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter MRelPoses.
How do I get 3D model points and their projections? 3D model points given in the world coordinate system
(NX, NY, NZ) and their associated projections in the image (NRow, NCol) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need images of the 3D
model points that were obtained for sufficiently many different poses of the manipulator.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient to
use the standard calibration plate, e.g., the one that can be generated with gen_caltab. In this case, you
can use the operators find_caltab and find_marks_and_pose to extract the position of the cali-
bration plate and of the calibration marks and the operator caltab_points to access the 3D coordinates
of the calibration marks (see also the description of camera_calibration).
The parameter MPointsOfImage specifies the number of 3D model points used for each pose of the
manipulator, i.e., for each image. With this, the 3D model points which are stored in a linearized fashion
in NX, NY, NZ, and their corresponding projections (NRow, NCol) can be associated with the corresponding
pose of the manipulator (MRelPoses). Note that in contrast to the operator camera_calibration the
3D coordinates of the model points must be specified for each calibration image, not only once.
How do I acquire a suitable set of images? If a standard calibration plate is used, the following procedure
should be used:
• At least 10 to 20 images from different positions should be taken in which the position of the camera
with respect to the calibration plate is sufficiently different. The position of the calibration plate (moving
camera: relative to the robot’s tool; stationary camera: relative to the robot’s base) must not be changed
between images.
• In each image, the calibration plate must be completely visible (including its border).
• No reflections or other disturbances should be visible on the calibration plate.
• The set of images must show the calibration plate from very different positions of the manipulator.
The calibration plate can and should be visible in different parts of the images. Furthermore, it should
be slightly to moderately rotated around its x- or y-axis, in order to clearly exhibit distortions of the
calibration marks. In other words, the corresponding exterior camera parameters (pose of the calibration
plate in camera coordinates) should take on many different values.
• In each image, the calibration plate should fill at least one quarter of the entire image, in order to ensure
the robust detection of the calibration marks.
• The interior camera parameters of the camera to be used must have been determined earlier and must be
passed in CamParam (see camera_calibration). Note that changes of the image size, the focal
length, the aperture, or the focus effect a change of the interior camera parameters.
• The camera must not be modified between the acquisition of the individual images, i.e., focal length,
aperture, and focus must not be changed, because all calibration images use the same interior camera
parameters. Please make sure that the focus is sufficient for the expected changes of the distance of the
camera from the calibration plate. Therefore, bright lighting conditions for the calibration plate are
important, because then you can use smaller apertures, which result in a larger depth of focus.
How do I obtain suitable starting values? Depending on the used hand-eye configuration, you need starting val-
ues for the following poses:
Moving camera
BaseStartPose = pose of the calibration object in robot base coordinates
CamStartPose = pose of the robot tool in camera coordinates
Stationary camera
BaseStartPose = pose of the calibration object in robot tool coordinates
CamStartPose = pose of the robot base in camera coordinates
The camera’s coordinate system is oriented such that its optical axis corresponds to the z-axis, the x-axis
points to the right, and the y-axis downwards. The coordinate system of the standard calibration plate is
located in the middle of the surface of the calibration plate; its z-axis points into the calibration plate, its
x-axis to the right, and its y-axis downwards.
For more information about creating a 3D pose please refer to the description of create_pose which also
contains a short example.
In fact, you need a starting value only for one of the two poses (BaseStartPose or CamStartPose).
The other can be computed from one of the calibration images. This means that you can pick the pose that is
easier to determine and let HALCON compute the other one for you.
The main idea is to exploit the fact that the two poses for which we need starting values are connected via the
already described chain of transformations, here shown for a configuration with a moving camera:
Moving camera:  cam Hcal = cam Htool · tool Hbase · base Hcal
    cam Htool:  CamStartPose
    tool Hbase: MRelPoses
    base Hcal:  BaseStartPose
In this configuration, it is typically easy to determine a starting value for cam Htool (CamStartPose). Thus,
we solve the equation for base Hcal (BaseStartPose):

    base Hcal = (tool Hbase)^-1 · (cam Htool)^-1 · cam Hcal
Thus, to compute BaseStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for CamStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
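Solving the moving-camera chain for base Hcal in this way can be sketched with 4x4 homogeneous matrices in Python. All pose values below are hypothetical; inv_rigid inverts a rigid transformation:

```python
def matmul(a, b):
    """Product of two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def inv_rigid(h):
    """Invert a rigid 4x4 transformation: transpose the rotation part
    and rotate/negate the translation."""
    r = [[h[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * h[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

# hypothetical values for one calibration image:
cam_H_tool  = translation(0.0, 0.0, 0.1)     # estimate for CamStartPose
tool_H_base = translation(-0.2, -0.3, -0.8)  # robot pose in this image
cam_H_cal   = translation(0.3, -0.3, -0.7)   # e.g., from find_marks_and_pose

# base_H_cal = inv(tool_H_base) * inv(cam_H_tool) * cam_H_cal
base_H_cal = matmul(matmul(inv_rigid(tool_H_base), inv_rigid(cam_H_tool)),
                    cam_H_cal)
```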
For a configuration with a stationary camera, the chain of transformations is:
Stationary camera:  cam Hcal = cam Hbase · base Htool · tool Hcal
    cam Hbase:  CamStartPose
    base Htool: MRelPoses
    tool Hcal:  BaseStartPose
In this configuration, it is typically easier to determine a starting value for tool Hcal (BaseStartPose).
Thus, we solve the equation for cam Hbase (CamStartPose):

    cam Hbase = cam Hcal · (tool Hcal)^-1 · (base Htool)^-1
Thus, to compute CamStartPose you need one of the robot poses (e.g., the one in the first image), your
estimate for BaseStartPose, and the pose of the calibration object in camera coordinates in the selected
image. If you use the standard calibration plate, you typically already obtained its pose when applying the
operator find_marks_and_pose to determine the projections of the marks. An example program can
be found below.
How do I obtain the poses of the robot? In the parameter MRelPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend to create the robot
poses in a separate program and to save them in files using write_pose. In the calibration program you can then
read and accumulate them in a tuple as shown in the example program below. In addition, we recommend to
save the pose of the robot tool in robot base coordinates independent of the hand-eye configuration. When
using a moving camera, you then invert the read poses before accumulating them. This is also shown in the
example program.
Via the Cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’
or ’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as
input for create_pose.
If the Cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert
the matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the
ZYZ representation described above:
hom_mat3d_identity (HomMat3DIdent)
hom_mat3d_rotate (HomMat3DIdent, ϕ3, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate (HomMat3DRotZ, ϕ2, ’y’, 0, 0, 0, HomMat3DRotYZ)
hom_mat3d_rotate (HomMat3DRotYZ, ϕ1, ’z’, 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate (HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose (base_H_tool, RobPose)
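The hom_mat3d_* sequence above corresponds to the matrix product Rz(ϕ1) · Ry(ϕ2) · Rz(ϕ3), as stated in the text, which can be sketched with 3x3 rotation matrices in Python (the angles are example values only):

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# ZYZ rotation: Rz(phi1) * Ry(phi2) * Rz(phi3), i.e., phi3 applied first
phi1, phi2, phi3 = 0.3, 0.2, 0.1   # hypothetical example angles (radians)
R = matmul(rot_z(phi1), matmul(rot_y(phi2), rot_z(phi3)))
```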
Please note that the hand-eye calibration only works if the robot poses MRelPoses are specified with high
accuracy!
How can I exclude individual pose parameters from the estimation? hand_eye_calibration estimates
a maximum of 12 pose parameters, i.e., 6 parameters each for the two computed poses CamFinalPose
and BaseFinalPose. However, it is possible to exclude some of these pose parameters from the esti-
mation. This means that the starting values of the poses remain unchanged and are assumed constant for
the estimation of all other pose parameters. The parameter ToEstimate is used to determine which pose
parameters should be estimated. In ToEstimate, a list of keywords for the parameters to be estimated is
passed. The possible values are:
BaseFinalPose:
’baseTx’ = translation along the x-axis
’baseTy’ = translation along the y-axis
’baseTz’ = translation along the z-axis
’baseRa’ = rotation around the x-axis
’baseRb’ = rotation around the y-axis
’baseRg’ = rotation around the z-axis
’base_pose’ = all 6 BaseFinalPose parameters
CamFinalPose:
’camTx’ = translation along the x-axis
’camTy’ = translation along the y-axis
’camTz’ = translation along the z-axis
’camRa’ = rotation around the x-axis
’camRb’ = rotation around the y-axis
’camRg’ = rotation around the z-axis
’cam_pose’ = all 6 CamFinalPose parameters
In order to estimate all 12 pose parameters, you can pass the keyword ’all’ (or of course a tuple containing
all 12 keywords listed above).
It is useful to exclude individual parameters from the estimation if those pose parameters have already been mea-
sured exactly. To do so, define a string tuple of the parameters that should be estimated, or prefix the strings
of excluded parameters with a ’~’ sign. For example, ToEstimate = [’all’,’~camTx’] estimates all pose
parameters except the x translation of the camera, whereas ToEstimate = [’base_pose’,’~baseRb’] estimates
the pose of the base apart from the rotation around the y-axis. The latter is equivalent to ToEstimate =
[’baseTx’,’baseTy’,’baseTz’,’baseRa’,’baseRg’].
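The keyword resolution can be mimicked outside HALCON; the following Python sketch (our own helper, not HALCON code) expands group keywords and applies ’~’ exclusions as described in the text:

```python
BASE = ['baseTx', 'baseTy', 'baseTz', 'baseRa', 'baseRb', 'baseRg']
CAM = ['camTx', 'camTy', 'camTz', 'camRa', 'camRb', 'camRg']
GROUPS = {'all': BASE + CAM, 'base_pose': BASE, 'cam_pose': CAM}

def resolve_to_estimate(keywords):
    """Expand group keywords and apply '~' exclusions (illustrative only;
    mirrors the semantics described above)."""
    selected = set()
    for kw in keywords:
        if kw.startswith('~'):
            selected -= set(GROUPS.get(kw[1:], [kw[1:]]))
        else:
            selected |= set(GROUPS.get(kw, [kw]))
    return sorted(selected)
```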
Which terminating criteria can be used for the error minimization? The error minimization terminates either
after a fixed number of iterations or if the error falls below a given minimum error. The parameter
StopCriterion is used to choose between these two alternatives. If ’CountIterations’ is passed, the
algorithm terminates after MaxIterations iterations.
If StopCriterion is passed as ’MinError’, the algorithm runs until the error falls below the error threshold
given in MinError. If, however, the number of iterations reaches the number given in MaxIterations,
the algorithm terminates with an error message.
What is the order of the individual parameters? The length of the tuple MPointsOfImage corresponds to
the number of different positions of the manipulator and thus to the number of calibration images. The
parameter MPointsOfImage determines the number of model points used in the individual positions. If
the standard calibration plate is used, this means 49 points per position (image). If for example 15 images
were acquired, MPointsOfImage is a tuple of length 15, where all elements of the tuple have the value 49.
HALCON 8.0.2
1134 CHAPTER 15. TOOLS
The number of calibration images, which is determined by the length of MPointsOfImage, must also be
taken into account for the tuples for the 3D model points and for the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 model points each, the tuples NX, NY, NZ, NRow, and NCol must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 model points in the first image. The order of the 3D model points and
the extracted 2D model points must be the same in each image.
The length of the tuple MRelPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple MRelPoses is 15 · 7 = 105 (15 times 7 pose
parameters). The first seven parameters thus determine the pose of the manipulator in the first image, and so
on.
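These length relations are easy to get wrong when assembling the input tuples; a small Python-style consistency check (an illustration with our own helper name, not part of HALCON) could look like this:

```python
def check_calibration_tuples(m_points_of_image, coord_tuple, m_rel_poses):
    """Verify the length relations described above (hypothetical helper)."""
    num_images = len(m_points_of_image)            # e.g., 15 calibration images
    total_points = sum(m_points_of_image)          # e.g., 15 * 49 = 735
    ok_coords = len(coord_tuple) == total_points   # holds for NX, NY, NZ, NRow, NCol
    ok_poses = len(m_rel_poses) == num_images * 7  # 7 pose parameters per image
    return ok_coords and ok_poses
```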
What do the output parameters mean? If StopCriterion was set to ’CountIterations’, the output parame-
ters BaseFinalPose and CamFinalPose are returned even if the algorithm did not converge. If, how-
ever, StopCriterion was set to ’MinError’, the error must fall below MinError for the output
parameters to be returned.
The representation type of BaseFinalPose and CamFinalPose is the same as in the corresponding
starting values. It can be changed with the operator convert_pose_type. The description of the dif-
ferent representation types and of their conversion can be found with the documentation of the operator
create_pose.
The parameter NumErrors contains a list of (numerical) errors from the individual iterations of the algo-
rithm. Based on the evolution of the errors, it can be decided whether the algorithm has converged for the
given starting values. The error values are returned as 3D deviations in meters. Thus, the last entry of the
error list corresponds to an estimate of the accuracy of the returned pose parameters.
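Such a convergence check on NumErrors can be sketched as follows (our own heuristic, not part of HALCON): compare the relative improvement of the last iterations.

```python
def has_converged(num_errors, rel_tol=1e-3):
    """Heuristic: the error list has settled when the last iteration improves
    the error by less than rel_tol relative to the final error."""
    if len(num_errors) < 2:
        return False
    final = num_errors[-1]
    return abs(num_errors[-2] - final) <= rel_tol * max(final, 1e-12)
```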
Attention
The quality of the calibration depends on the accuracy of the input parameters (position of the calibration marks,
robot poses MRelPoses, and the starting poses BaseStartPose and CamStartPose). Based on the returned
error measures NumErrors, it can be decided whether the algorithm has converged. Furthermore, the accuracy
of the returned poses can be estimated. The error measures are 3D differences in meters.
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the x coordinates of the calibration points (in the order of the images).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the y coordinates of the calibration points (in the order of the images).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Linear list containing all the z coordinates of the calibration points (in the order of the images).
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Linear list containing all row coordinates of the calibration points (in the order of the images).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real-array ; Htuple . double
Linear list containing all the column coordinates of the calibration points (in the order of the images).
. MPointsOfImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
Number of the calibration points for each image.
. MRelPoses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Measured 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates;
stationary camera: robot tool in robot base coordinates).
. BaseStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Starting value for the 3D pose of the calibration object in robot base coordinates (moving camera) or in robot
tool coordinates (stationary camera), respectively.
. CamStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Starting value for the 3D pose of the robot tool (moving camera) or robot base (stationary camera),
respectively, in camera coordinates.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
. ToEstimate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; Htuple . const char *
Parameters to be estimated (max. 12 degrees of freedom).
Default Value : "all"
List of values : ToEstimate ∈ {"all", "base_pose", "cam_pose", "baseTx", "baseTy", "baseTz", "baseRa",
"baseRb", "baseRg", "camTx", "camTy", "camTz", "camRa", "camRb", "camRg"}
read_cam_par(’campar.dat’, CamParam)
CalDescr := ’caltab.descr’
caltab_points(CalDescr, X, Y, Z)
* process all calibration images
for i := 0 to NumImages-1 by 1
read_image(Image, ’calib_’+i$’02d’)
* find marks on the calibration plate in every image
find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
find_marks_and_pose(Image, CalPlate, CalDescr, CamParam, 128, 10,
RCoordTmp, CCoordTmp, StartPose)
* accumulate 2D and 3D coordinates of the marks
RCoord := [RCoord, RCoordTmp]
CCoord := [CCoord, CCoordTmp]
XCoord := [XCoord, X]
YCoord := [YCoord, Y]
ZCoord := [ZCoord, Z]
NumMarker := [NumMarker, |RCoordTmp|]
* read pose of the robot tool in robot base coordinates
read_pose(’robpose_’+i$’02d’+’.dat’, RobPose)
* moving camera? invert pose
if (IsMovingCameraConfig=’true’)
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_invert(base_H_tool, tool_H_base)
hom_mat3d_to_pose(tool_H_base, RobPose)
endif
* accumulate robot poses
MRelPoses := [MRelPoses, RobPose]
* store the pose of the calibration plate in the first image and the
* corresponding pose of the robot for later use
if (i=0)
cam_P_cal := StartPose
RelPose0 := RobPose
endif
endfor
* obtain starting values: read one, compute the other
if (IsMovingCameraConfig=’true’)
Result
hand_eye_calibration returns H_MSG_TRUE if all parameter values are correct and the method converges
with an error less than the specified minimum error (if StopCriterion = ’MinError’). If necessary, an excep-
tion handling is raised.
Parallelization Information
hand_eye_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose
Possible Successors
write_pose, convert_pose_type, pose_to_hom_mat3d, disp_caltab, sim_caltab
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab
Module
Calibration
Transform image points into the plane z=0 of a world coordinate system.
The operator image_points_to_world_plane transforms image points which are given in Rows and
Cols into the plane z=0 in a world coordinate system and returns their 3D coordinates in X and Y. The world
coordinate system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose.
In CamParam you must pass the interior camera parameters (see write_cam_par for the sequence of the
parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’µm’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image contour points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed
into the world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates X and Y are obtained.
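For the distortion-free case with the world plane parallel to the image plane at distance d, the intersection reduces to simple ray scaling. The Python sketch below illustrates the idea under these simplifying assumptions; it is not the HALCON implementation, which also handles radial distortion and arbitrary world poses:

```python
def image_point_to_world_plane(row, col, f, sx, sy, cx, cy, d, scale=1.0):
    """Pinhole camera without distortion; the plane z=0 of the world system
    is assumed parallel to the image plane at distance d (in meters)."""
    x = (col - cx) * sx   # point on the sensor, in meters
    y = (row - cy) * sy
    t = d / f             # scale the line of sight (x, y, f) to reach z = d
    return (t * x / scale, t * y / scale)
```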
Parameter
Result
image_points_to_world_plane returns H_MSG_TRUE if all parameter values are correct. If necessary,
an exception handling is raised.
Parallelization Information
image_points_to_world_plane is reentrant and processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
contour_to_world_plane_xld
Module
Calibration
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
image_to_world_plane rectifies an image Image by transforming it into the plane z=0 (plane of mea-
surements) in a world coordinate system. The resulting rectified image ImageWorld shows neither radial nor
perspective distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly
onto the plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the
camera coordinate system in WorldPose. In CamParam you must pass the interior camera parameters (see
write_cam_par for the sequence of the parameters and the underlying camera model).
In many cases CamParam and WorldPose are the result of calibrating the camera with the operator
camera_calibration. See below for an example.
The pixel position of the upper left corner of the output image ImageWorld is determined by the origin of the
world coordinate system. The size of the output image ImageWorld can be chosen using the parameters Width,
Height, and Scale. Width and Height must be given in pixels.
With the parameter Scale you can specify the size of a pixel in the transformed image. There are two typical
scenarios: First, you can scale the image such that pixel coordinates in the transformed image directly correspond
to metric units, e.g., that one pixel corresponds to one micron. This is useful if you want to perform measurements
in the transformed image which will then directly result in metric results. The second scenario is to scale the image
such that its content appears in a size similar to the original image. This is useful, e.g., if you want to perform
shape-based matching in the transformed image.
Scale must be specified as the ratio desired pixel size/original unit. A pixel size of 1µm means that a pixel in
the transformed image corresponds to the area 1µm × 1µm in the plane of measurements. The original unit is
determined by the coordinates of the calibration object. If the original unit is meters (which is the case if you use
the standard calibration plate), you can use the parameter values ’m’, ’cm’, ’mm’, ’microns’, or ’µm’ to directly set
the unit of pixel coordinates in the transformed image.
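The unit keywords correspond to fixed ratios, so the arithmetic behind Scale is simple; as a hedged Python illustration (the dictionary and helper below are ours, not a HALCON API):

```python
# meters per unit for the symbolic Scale values ('um' stands in for the
# micron sign used in the manual)
UNIT_IN_METERS = {'m': 1.0, 'cm': 0.01, 'mm': 0.001, 'microns': 1e-6, 'um': 1e-6}

def output_size_in_pixels(extent_m, pixel_size):
    """Number of output pixels needed to cover extent_m meters when one
    transformed pixel covers pixel_size meters in the measurement plane."""
    return int(round(extent_m / pixel_size))
```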
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’none’) should be used.
1, WorldPixelX, WorldPixelY)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[1], WorldPixelX[1],
WorldLength1)
distance_pp(WorldPixelY[0], WorldPixelX[0], WorldPixelY[2], WorldPixelX[2],
WorldLength2)
ScaleForSimilarPixelSize := (WorldLength1+WorldLength2)/2
* -> determine output image size such that entire input image fits into it
ExtentX := MaxX-MinX
ExtentY := MaxY-MinY
WidthRectifiedImage := ExtentX/ScaleForSimilarPixelSize
HeightRectifiedImage := ExtentY/ScaleForSimilarPixelSize
* transform the image with the determined parameters
image_to_world_plane(Image, RectifiedImage, FinalCamParam,
PoseForEntireImage, WidthRectifiedImage,
HeightRectifiedImage, ScaleForSimilarPixelSize,
’bilinear’)
Result
image_to_world_plane returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
image_to_world_plane is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Alternatives
gen_image_to_world_plane_map, map_image
See also
contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration
Result
project_3d_point returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
project_3d_point is reentrant and processed without parallelization.
Possible Predecessors
read_cam_par, affine_trans_point_3d
Possible Successors
gen_region_points, gen_region_polygon, disp_polygon
See also
camera_calibration, disp_caltab, read_cam_par, get_line_of_sight,
affine_trans_point_3d
Module
Calibration
the calibration can theoretically be performed if the 1D histograms of the images are not changed by the movement
of the objects in the images. This can, for example, be the case if an object moves in front of a uniformly textured
background. However, it is preferable to use Features = ’2d_histogram’ because this mode is more accurate.
The mode Features = ’1d_histograms’ should only be used if it is impossible to construct the camera setup
such that neither the camera nor the objects in the scene move.
Furthermore, care should be taken to cover the range of gray values without gaps by choosing appropriate image
contents. Whether there are gaps in the range of gray values can easily be checked based on the 1D gray value
histograms of the images or the 2D gray value histograms of consecutive images. In the 1D gray value histograms
(see gray_histo_abs), there should be no areas between the minimum and maximum gray value that have a
frequency of 0 or a very small frequency. In the 2D gray value histograms (see histo_2dim), a single connected
region having the shape of a “strip” should result from a threshold operation with a lower threshold of 1. If more
than one connected component results, a more suitable image content should be chosen. If the image content can
be chosen such that the gray value range of the image (e.g., 0-255 for byte images) can be covered with two images
with different exposures, and if there are no gaps in the histograms, the two images suffice for the calibration. This,
however, is typically not the case, and hence multiple images must be used to cover the entire gray value range.
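The gap criterion described above is easy to check programmatically; as a Python sketch of the criterion (an illustration, not HALCON code):

```python
def histogram_has_gaps(hist):
    """hist: list of frequencies per gray value (e.g., 256 entries for byte
    images). A gap is an empty bin strictly between the minimum and maximum
    occupied gray values."""
    occupied = [i for i, h in enumerate(hist) if h > 0]
    if len(occupied) < 2:
        return False
    lo, hi = occupied[0], occupied[-1]
    return any(hist[i] == 0 for i in range(lo + 1, hi))
```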
As described above, multiple images with different exposures must be taken for this purpose to cover the entire gray
value range as well as possible. Normally, the first image should be exposed such that the maximum gray value
is slightly below the saturation limit of the camera, or such that the image is significantly overexposed. If the first
image is overexposed, a significant overexposure is necessary to enable radiometric_self_calibration
to detect the overexposed areas reliably. If the camera exhibits an unusual saturation behavior (e.g., a saturation
limit that lies significantly below the maximum gray value) the overexposed areas should be masked out by hand
with reduce_domain in the overexposed image.
radiometric_self_calibration returns the inverse gray value response function of the camera in
InverseResponse. The inverse response function can be used to create an image with a linear response by
using InverseResponse as the LUT in lut_trans. The parameter FunctionType determines which
function model is used to model the response function. For FunctionType = ’discrete’, the response func-
tion is described by a discrete function with the relevant number of gray values (256 for byte images). For
FunctionType = ’polynomial’, the response is described by a polynomial of degree PolynomialDegree.
The computation of the response function is slower for FunctionType = ’discrete’. However, since a poly-
nomial tends to oscillate in the areas in which no gray value information can be derived, even if smoothness
constraints are imposed as described below, the discrete model should usually be preferred over the polynomial
model.
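Applying the resulting discrete inverse response is a plain table lookup, as lut_trans performs it; in Python terms (a minimal sketch of the lookup, not HALCON code):

```python
def linearize(image_rows, inverse_response):
    """Replace each gray value by inverse_response[gray] to obtain an image
    with a linear response (cf. lut_trans with InverseResponse as the LUT)."""
    return [[inverse_response[g] for g in row] for row in image_rows]
```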
The parameter Smoothness defines (in addition to the constraints on the response function that can be de-
rived from the images) constraints on the smoothness of the response function. If, as described above, the gray
value range can be covered completely and without gaps, the default value of 1 should not be changed. Other-
wise, values > 1 can be used to obtain a stronger smoothing of the response function, while values < 1 lead
to a weaker smoothing. The smoothing is particularly important in areas for which no gray value information
can be derived from the images, i.e., in gaps in the histograms and for gray values smaller than the minimum
gray value of all images or larger than the maximum gray value of all images. In these areas, the smoothness
constraints lead to an interpolation or extrapolation of the response function. Because of the nature of the inter-
nally derived constraints, FunctionType = ’discrete’ favors an exponential function in the undefined areas,
whereas FunctionType = ’polynomial’ favors a straight line. Please note that interpolation and extrapo-
lation are always less reliable than covering the gray value range completely and without gaps. Therefore, in any
case it should first be attempted to acquire the images optimally before the smoothness constraints are used to
fill in the remaining gaps. In all cases, the response function should be checked for plausibility after the call to
radiometric_self_calibration. In particular, it should be checked whether InverseResponse is
monotonic. If this is not the case, a more suitable scene should be used to avoid interpolation, or Smoothness
should be set to a larger value. For FunctionType = ’polynomial’, it may also be necessary to change
PolynomialDegree. If, despite these changes, an implausible response is returned, the saturation behavior
of the camera should be checked, e.g., based on the 2D gray value histogram, and the saturated areas should be
masked out by hand, as described above.
When the inverse gray value response function of the camera is computed, the absolute energy falling on the
camera cannot be recovered. This means that InverseResponse can only be determined up to a scale factor.
Therefore, an additional constraint is used to fix the unknown scale factor: the maximum gray value that can occur
should occur for the maximum input gray value, e.g., InverseResponse[255] = 255 for byte images. This
constraint usually leads to the most intuitive results. If, however, a multichannel image (typically an RGB image)
is to be calibrated radiometrically (for this, each channel must be calibrated separately), the above constraint
may result in a different scaling factor being determined for each channel. As a consequence, gray tones may no
longer appear gray after the correction. In this case, a manual white balancing step must be carried
out by identifying a homogeneous gray area in the original image, and by deriving appropriate scaling factors from
the corrected gray values for two of the three response curves (or, in general, for n − 1 of the n channels). Here,
the response curve that remains invariant should be chosen such that all scaling factors are < 1. With the scaling
factors thus determined, new response functions should be calculated by multiplying each value of a response
function with the scaling factor corresponding to that response function.
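The white-balance step described above amounts to simple per-channel ratios; as a Python sketch (our own formulation of the recommendation, choosing the invariant channel so that all factors are ≤ 1):

```python
def white_balance_factors(gray_patch):
    """gray_patch: corrected gray values of a neutral (gray) area, one value
    per channel. The channel with the smallest value stays invariant
    (factor 1.0); the others are scaled down so the patch becomes neutral."""
    ref = min(gray_patch)
    return [ref / v for v in gray_patch]
```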
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image-array ; Hobject : byte / uint2
Input images.
. ExposureRatios (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double
Ratio of the exposure energies of successive image pairs.
Default Value : 0.5
Suggested values : ExposureRatios ∈ {0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}
Restriction : (ExposureRatios > 0) ∧ (ExposureRatios < 1)
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Features that are used to compute the inverse response function of the camera.
Default Value : "2d_histogram"
List of values : Features ∈ {"2d_histogram", "1d_histograms"}
. FunctionType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Type of the inverse response function of the camera.
Default Value : "discrete"
List of values : FunctionType ∈ {"discrete", "polynomial"}
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Smoothness of the inverse response function of the camera.
Default Value : 1.0
Suggested values : Smoothness ∈ {0.3, 0.5, 0.7, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction : Smoothness > 0
. PolynomialDegree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Degree of the polynomial if FunctionType = ’polynomial’.
Default Value : 5
Suggested values : PolynomialDegree ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction : (PolynomialDegree ≥ 1) ∧ (PolynomialDegree ≤ 20)
. InverseResponse (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
Inverse response function of the camera.
Example (Syntax: HDevelop)
while (1)
grab_image_async (Image, FGHandle, -1)
lut_trans (Image, ImageLinear, InverseResponse)
* Process radiometrically correct image.
[...]
endwhile
close_framegrabber (FGHandle)
Result
If the parameters are valid, the operator radiometric_self_calibration returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
radiometric_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
read_image, grab_image, grab_image_async, set_framegrabber_param, concat_obj,
proj_match_points_ransac, projective_trans_image
Possible Successors
lut_trans
See also
histo_2dim, gray_histo, gray_histo_abs, reduce_domain
Module
Calibration
Focus:foc: 0.00806039;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -2253.5;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.0629e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1.1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 378.236;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 297.587;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 768;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 576;
INT:1:32767;
"Height of the used calibration images [pixel]";
In addition to the 8 parameters of the parameter group Camera:Parameter, the parameter group LinescanCamera:
Parameter contains 3 parameters that describe the motion of the camera with respect to the object. With this,
the parameter group LinescanCamera:Parameter consists of the 11 parameters Focus, Kappa (κ), Sx, Sy, Cx, Cy,
ImageWidth, ImageHeight, Vx, Vy, and Vz. A suitable file can look like the following:
Focus:foc: 0.061;
DOUBLE:0.0:;
"Focal length of the lens [meter]";
Kappa:kappa: -16.9761;
DOUBLE::;
"Radial distortion coefficient [1/(meter*meter)]";
Sx:sx: 1.06903e-05;
DOUBLE:0.0:;
"Width of a cell on the chip [meter]";
Sy:sy: 1e-05;
DOUBLE:0.0:;
"Height of a cell on the chip [meter]";
Cx:cx: 930.625;
DOUBLE:0.0:;
"X-coordinate of the image center [pixel]";
Cy:cy: 149.962;
DOUBLE:0.0:;
"Y-coordinate of the image center [pixel]";
ImageWidth:imgw: 2048;
INT:1:32767;
"Width of the used calibration images [pixel]";
ImageHeight:imgh: 3840;
INT:1:32767;
"Height of the used calibration images [pixel]";
Vx:vx: 1.41376e-06;
DOUBLE::;
"X-component of the motion vector [meter/scanline]";
Vy:vy: 5.45756e-05;
DOUBLE::;
"Y-component of the motion vector [meter/scanline]";
Vz:vz: 3.45872e-06;
DOUBLE::;
"Z-component of the motion vector [meter/scanline]";
Parameter
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of interior camera parameters.
Default Value : "campar.dat"
List of values : CamParFile ∈ {"campar.dat", "campar.initial", "campar.final"}
. CamParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double * / Hlong *
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
Example (Syntax: HDevelop)
Result
read_cam_par returns H_MSG_TRUE if all parameter values are correct and the file has been read successfully.
If necessary an exception handling is raised.
Parallelization Information
read_cam_par is reentrant and processed without parallelization.
Possible Successors
find_marks_and_pose, sim_caltab, gen_caltab, disp_caltab, camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation
sim_caltab is used to generate a simulated calibration image. The calibration plate description is read from the
file CalTabDescrFile and will be projected into the image plane using the given camera parameters (interior
camera parameters CamParam and exterior camera parameters CaltabPose), see also project_3d_point.
In the simulated image only the calibration plate is shown. The image background is set to the gray value
GrayBackground, the calibration plate background is set to GrayCaltab, and the calibration marks are set
to the gray value GrayMarks. The parameter ScaleFac influences the number of supporting points to approxi-
mate the elliptic contours of the calibration marks, see also disp_caltab. Increasing the number of supporting
points leads to a more accurate determination of the mark boundary, but also increases the computation time. For
each pixel of the simulated image that touches a subpixel boundary of this kind, the gray value is interpolated
linearly between GrayMarks and GrayCaltab depending on the proportion of the pixel lying inside and
outside the mark.
By applying the operator sim_caltab you can generate synthetic calibration images (with known camera pa-
rameters!) to test the quality of the calibration algorithm (see camera_calibration).
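The linear blending at the mark boundaries mentioned above can be written in one line; as a hedged Python illustration (the helper name is ours):

```python
def boundary_gray(gray_marks, gray_caltab, inside_fraction):
    """Gray value of a pixel crossed by the subpixel boundary of a mark:
    linear interpolation according to the fraction of the pixel lying
    inside the mark."""
    return gray_marks * inside_fraction + gray_caltab * (1.0 - inside_fraction)
```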
Parameter
. SimImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject * : byte
Simulated calibration image.
. CalTabDescrFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
File name of the calibration plate description.
Default Value : "caltab.descr"
List of values : CalTabDescrFile ∈ {"caltab.descr", "caltab_10mm.descr", "caltab_30mm.descr",
"caltab_100mm.descr", "caltab_200mm.descr"}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CaltabPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Exterior camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements : 7
. GrayBackground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of image background.
Default Value : 128
Suggested values : GrayBackground ∈ {0, 32, 64, 96, 128, 160}
Restriction : (0 ≤ GrayBackground) ≤ 255
. GrayCaltab (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of calibration plate.
Default Value : 224
Suggested values : GrayCaltab ∈ {144, 160, 176, 192, 208, 224, 240}
Restriction : (0 ≤ GrayCaltab) ≤ 255
. GrayMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Gray value of calibration marks.
Default Value : 80
Suggested values : GrayMarks ∈ {16, 32, 48, 64, 80, 96, 112}
Restriction : 0 ≤ GrayMarks ≤ 255
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; Htuple . double
Scaling factor to reduce oversampling.
Default Value : 1.0
Suggested values : ScaleFac ∈ {1.0, 0.5, 0.25, 0.125}
Recommended Increment : 0.05
Restriction : 1.0 ≥ ScaleFac
Example (Syntax: HDevelop)
HALCON 8.0.2
1148 CHAPTER 15. TOOLS
Result
sim_caltab returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception handling is
raised.
Parallelization Information
sim_caltab is reentrant and processed without parallelization.
Possible Predecessors
camera_calibration, find_marks_and_pose, read_pose, read_cam_par,
hom_mat3d_to_pose
Possible Successors
find_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, create_pose,
hom_mat3d_to_pose, project_3d_point, gen_caltab
Module
Calibration
x = PX .
Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:
P = K(R|t) .
Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in
camera_calibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix) which can be described as
follows:
        ( af  sf  u )
    K = (  0   f  v )
        (  0   0  1 )
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:
x = KRX .
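The relation x = KRX can be illustrated with a small numerical sketch (plain Python; the camera values below are invented for illustration and K follows the convention given above, not any particular HALCON output):

```python
# Sketch of the projection x = K R X for a stationary (rotating) camera.
# All numbers are illustrative, not HALCON output.

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

f, a, s = 800.0, 1.0, 0.0        # focal length [pixels], aspect ratio, skew
u, v = 320.0, 240.0              # principal point [pixels]

# Calibration matrix in the convention of the text (x = column axis):
K = [[a * f, s * f, u],
     [0.0,   f,     v],
     [0.0,   0.0,   1.0]]

R = [[1.0, 0.0, 0.0],            # identity rotation: reference pose
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

X = [0.5, -0.25, 1.0]            # direction in 3D (scale is irrelevant)

x = matvec(matmul(K, R), X)
col, row = x[0] / x[2], x[1] / x[2]   # dehomogenize to image coordinates
print(col, row)                       # -> 720.0 40.0
```

Scaling X by any nonzero factor leaves (col, row) unchanged, which is exactly the "direction, determined only up to scale" property described above.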
If two images of the same point are taken with a stationary camera, the following equations hold:
x1 = K1 R1 X
x2 = K2 R2 X
and consequently

x2 = K2 R2 R1^-1 K1^-1 x1 = K2 R12 K1^-1 x1 = H12 x1 .
If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above, the
two images of the same 3D point are related by a projective 2D transformation. This transformation can be deter-
mined with proj_match_points_ransac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs
to be taken into account that proj_match_points_ransac uses a coordinate system in which the origin
of a pixel lies in the upper left corner of the pixel, whereas stationary_camera_self_calibration
uses a coordinate system that corresponds to the definition used in camera_calibration, in which the
origin of a pixel lies in the center of the pixel. For projective 2D transformations that are determined with
proj_match_points_ransac the rows and columns must be exchanged and a translation of (0.5, 0.5) must
be applied. Hence, instead of H12 = K2 R12 K1^-1 the following equations hold in HALCON:

            ( 0  1  0.5 )                ( 0  1  -0.5 )
      H12 = ( 1  0  0.5 ) K2 R12 K1^-1  ( 1  0  -0.5 )
            ( 0  0  1   )                ( 0  0   1   )

and

                      ( 0  1  -0.5 )       ( 0  1  0.5 )
      K2 R12 K1^-1 =  ( 1  0  -0.5 ) H12  ( 1  0  0.5 )
                      ( 0  0   1   )       ( 0  0  1   )
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let Hij be the projective transformation from image i to image j. Then,
Kj Kj^T = Hij Ki Ki^T Hij^T

Kj^-T Kj^-1 = Hij^-T Ki^-T Ki^-1 Hij^-1
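These constraints can be checked numerically: for an exact homography Hij = Kj Rij Ki^-1, the identity Kj Kj^T = Hij Ki Ki^T Hij^T holds because Rij Rij^T = I. A small sketch (plain Python; all camera values are invented for illustration):

```python
# Numerical check of  Kj Kj^T = Hij Ki Ki^T Hij^T  for Hij = Kj Rij Ki^-1.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def calib(f, a, s, u, v):
    return [[a * f, s * f, u], [0.0, f, v], [0.0, 0.0, 1.0]]

def calib_inv(f, a, s, u, v):
    # closed-form inverse of the upper-triangular calibration matrix
    return [[1.0 / (a * f), -s / (a * f), (s * v - u) / (a * f)],
            [0.0, 1.0 / f, -v / f],
            [0.0, 0.0, 1.0]]

Ki = calib(800.0, 1.0, 0.0, 320.0, 240.0)
Kj = calib(820.0, 1.0, 0.0, 325.0, 238.0)   # camera may change per image
phi = math.radians(10.0)                    # rotation about the z axis
Rij = [[math.cos(phi), -math.sin(phi), 0.0],
       [math.sin(phi),  math.cos(phi), 0.0],
       [0.0, 0.0, 1.0]]

Hij = matmul(matmul(Kj, Rij), calib_inv(800.0, 1.0, 0.0, 320.0, 240.0))

lhs = matmul(Kj, transpose(Kj))
rhs = matmul(matmul(Hij, matmul(Ki, transpose(Ki))), transpose(Hij))
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-4 for i in range(3) for j in range(3))
print(ok)   # -> True
```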
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
EstimationMethod = ’linear’. Here, all source images i given by MappingSource and all destination
images j given by MappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation Rij = Kj^-1 Hij Ki and by constructing a chain of transformations from the
reference image ReferenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:
E = Σ_{(i,j) ∈ {(s,d)}} ‖ Kj Kj^T − Hij Ki Ki^T Hij^T ‖_F²
Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by MappingSource
and MappingDest. This method is used for EstimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D
point lie close to each other in all images. Therefore, stationary_camera_self_calibration offers
a complete bundle adjustment as a third method (EstimationMethod = ’gold_standard’). Here, the camera
parameters and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization by minimizing the following error:
E = Σ_{i=1..n} ( Σ_{j=1..m} ‖ xij − Ki Ri Xj ‖² + (1/σ²)(ui² + vi²) )
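One way to evaluate the reprojection part of this error is sketched below (plain Python; the projections are dehomogenized to image coordinates before taking the norm, and all values are invented, so this is an illustration rather than the HALCON implementation):

```python
# Sketch of the bundle adjustment error: sum of squared distances between
# measured image points x_ij and projections K_i R_i X_j, plus an optional
# principal point penalty (u_i^2 + v_i^2) / sigma^2. Illustrative only.

def project(K, R, X):
    # x = K R X, dehomogenized to image coordinates
    v = [sum(K[i][k] * sum(R[k][l] * X[l] for l in range(3))
             for k in range(3)) for i in range(3)]
    return v[0] / v[2], v[1] / v[2]

def bundle_error(cams, rots, dirs, observations, sigma=None):
    # observations: dict (i, j) -> measured (col, row) of direction j in image i
    E = 0.0
    for (i, j), (mc, mr) in observations.items():
        pc, pr = project(cams[i], rots[i], dirs[j])
        E += (mc - pc) ** 2 + (mr - pr) ** 2
    if sigma is not None:
        for K in cams:
            u, v = K[0][2], K[1][2]   # principal point entries of K
            E += (u * u + v * v) / sigma ** 2
    return E

K = [[800.0, 0.0, 0.0], [0.0, 800.0, 0.0], [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
X = [0.5, -0.25, 1.0]
obs = {(0, 0): (401.0, -200.0)}   # measured point, 1 pixel off in column
print(bundle_error([K], [I3], [X], obs))   # -> 1.0
```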
In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because the bundle adjustment results in significantly better
results, it should be preferred.
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
CameraModel which contains a tuple of values. CameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If CameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (ImageWidth/2, ImageHeight/2).
If CameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If CameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
Additionally, it is possible to determine the parameter Kappa which models radial lens distortions, if
EstimationMethod = ’gold_standard’ has been selected and the camera parameters are assumed constant.
In this case, ’kappa’ can also be included in the parameter CameraModel.
When using EstimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimations far away from the image center. This can be done by appending a sigma to the parameter value, e.g.,
’principal_point:0.5’. If no sigma is given, the penalty term in the above equation for calculating the error is omitted.
The parameter FixedCameraParams determines whether the camera parameters can change in each im-
age or whether they should be assumed constant for all images. To calibrate a camera so that it can
later be used for measuring with the calibrated camera, only FixedCameraParams = ’true’ is use-
ful. The mode FixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with
gen_spherical_mosaic if the camera zoomed or if the focus changed significantly when the mosaic images
were taken. If a mosaic with constant camera parameters should be computed, of course FixedCameraParams
= ’true’ should be used. It should be noted that for FixedCameraParams = ’false’ the camera calibration
problem is determined very badly, especially for long focal lengths. In these cases, often only the focal length can
be determined. Therefore, it may be necessary to use CameraModel = ’focus’ or to constrain the position of the
principal point by using a small Sigma for the penalty term for the principal point.
The number of images that are used for the calibration is passed in NumImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to CameraModel. If FixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
CameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with proj_match_points_ransac. For example, for a 2×2 block of images in
the following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2,
1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are
given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
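For images that all overlap each other, the index tuples can be generated programmatically. The following sketch (plain Python, not HALCON code) reproduces the tuples of the 2×2 example above for an arbitrary number of images:

```python
# Generate MappingSource/MappingDest for n mutually overlapping images,
# matching the 1-based indexing convention described above.
from itertools import combinations

def mapping_pairs(num_images):
    pairs = list(combinations(range(1, num_images + 1), 2))
    source = [i for i, _ in pairs]
    dest = [j for _, j in pairs]
    return source, dest

src, dst = mapping_pairs(4)
print(src)   # -> [1, 1, 1, 2, 2, 3]
print(dst)   # -> [2, 3, 4, 3, 4, 4]
```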
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of
proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable
stationary_camera_self_calibration to determine which point pair belongs to which image pair,
NumCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in CameraMatrices as 3 × 3 matrices. For
FixedCameraParams = ’false’, NumImages matrices are returned. Since for FixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in RotationMatrices as 3 × 3 matrices. RotationMatrices always contains NumImages
matrices.
If EstimationMethod = ’gold_standard’ is used, (X, Y, Z) contains the reconstructed directions Xj . In ad-
dition, Error contains the average projection error of the reconstructed directions. This can be used to check
whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into the image i the respective
camera matrix should be multiplied with the corresponding rotation matrix (with hom_mat2d_compose).
Parameter
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, From[J], ImageF)
select_obj (Images, To[J], ImageT)
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, ’gauss’, ’true’,
RowsT, ColsT, _, _, _, _, _, _, _, _)
Result
If the parameters are valid, the operator stationary_camera_self_calibration returns the value
H_MSG_TRUE. If necessary an exception handling is raised.
Parallelization Information
stationary_camera_self_calibration is reentrant and processed without parallelization.
Possible Predecessors
proj_match_points_ransac
Possible Successors
gen_spherical_mosaic
See also
gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Module
Calibration
For area scan cameras, the projection of the point pc that is given in camera coordinates into a (sub-)pixel [r,c]
in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor
chip. If the underlying camera model is an area scan pinhole camera, i.e., if the focal length passed in CamParam
is greater than 0, the projection is described by the following equations:
pc = (x, y, z)^T

u = Focus · x/z   and   v = Focus · y/z
In contrast, if the focal length is passed as 0 in CamParam, the camera model of an area scan telecentric camera
is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the
corresponding equations are:
pc = (x, y, z)^T

u = x   and   v = y

In both cases, the radial lens distortion then transforms (u, v) into (ũ, ṽ):

ũ = 2u / (1 + √(1 − 4κ(u² + v²)))   and   ṽ = 2v / (1 + √(1 − 4κ(u² + v²)))
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e.,
the pixel coordinate system:
c = ũ/Sx + Cx   and   r = ṽ/Sy + Cy
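The complete chain of these steps for an area scan pinhole camera can be sketched as follows (plain Python; the parameter values are invented, and the code is an illustration of the equations above, not the HALCON implementation):

```python
# Sketch of the area scan pinhole projection described above, including
# the radial distortion with kappa and the transformation to pixel
# coordinates. Parameter values are made up for illustration.
import math

def project_pinhole(x, y, z, focus, kappa, sx, sy, cx, cy):
    # 1. project the point onto the image plane
    u = focus * x / z
    v = focus * y / z
    # 2. apply the radial lens distortion
    r2 = u * u + v * v
    factor = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * kappa * r2))
    u_t, v_t = factor * u, factor * v
    # 3. transform into the pixel coordinate system [r, c]
    c = u_t / sx + cx
    r = v_t / sy + cy
    return r, c

# distortion-free case (kappa = 0): the distortion factor is exactly 1
r, c = project_pinhole(0.01, 0.02, 0.5, 0.008, 0.0,
                       1.0e-5, 1.0e-5, 320.0, 240.0)
print(r, c)   # r ~ 272, c ~ 336
```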
For line scan cameras, also the relative motion between the camera and the object must be modeled. In HALCON,
the following assumptions for this motion are made:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/scanline] in the
camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact,
this is equivalent to the assumption of a fixed camera with the object travelling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the
center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z
coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector
has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a
right-handed coordinate system.
As the camera moves over the object during the image acquisition, also the camera coordinate system moves
relatively to the object, i.e., each image line has been imaged from a different position. This means, there would
be an individual pose for each image line. To make things easier, in HALCON, all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is returned by the operators find_marks_and_pose and camera_calibration.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into a
(sub-)pixel [r,c] in the image is defined as follows:
Assuming

pc = (x, y, z)^T ,

the projection is obtained by solving the following system of equations for m, ũ, and t:

m · D · ũ = x − t · Vx
−m · D · pv = y − t · Vy
m · Focus = z − t · Vz

with

D = 1 / (1 + κ(ũ² + (pv)²))
pv = Sy · Cy

The pixel coordinates then follow as

c = ũ/Sx + Cx   and   r = t
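For the distortion-free case κ = 0 (so D = 1), the last two equations are linear in m and t, and ũ then follows from the first one. A sketch of this special case (plain Python; all parameter values are invented for illustration):

```python
# Sketch: solving the line scan projection equations above for (r, c) in
# the distortion-free case kappa = 0 (D = 1). The two lower equations are
# linear in m and t; u~ then follows from the first one. Illustrative only.

def project_linescan(x, y, z, focus, sx, sy, cx, cy, vx, vy, vz):
    pv = sy * cy
    # solve  -m*pv + t*vy = y  and  m*focus + t*vz = z  (Cramer's rule)
    det = -pv * vz - vy * focus
    m = (y * vz - vy * z) / det
    t = (-pv * z - focus * y) / det
    u_t = (x - t * vx) / m      # from  m * u~ = x - t * vx
    c = u_t / sx + cx
    r = t                       # the row coordinate is the scanline index
    return r, c

r, c = project_linescan(0.025, 0.0015, 0.4,
                        focus=0.008, sx=1e-5, sy=1e-5, cx=256.0, cy=1.0,
                        vx=0.0, vy=2e-5, vz=0.0)
print(r, c)   # r ~ 100, c ~ 306
```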
The format of the text file CamParFile is a (HALCON-independent) generic parameter description. It allows
arbitrary sets of parameters to be grouped hierarchically. The description of a single parameter within a parameter
group consists of the following 3 lines:
Depending on the number of elements of CamParam, the parameter group Camera:Parameter or
LinescanCamera:Parameter, respectively, is written into the text file CamParFile (see read_cam_par for an
example). The parameter group Camera:Parameter consists of the 8 interior camera parameters of the area scan
camera. The parameter group LinescanCamera:Parameter consists of the 11 interior camera parameters of the line
scan camera.
Parameter
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Interior camera parameters.
Number of elements : (CamParam = 8) ∨ (CamParam = 11)
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; Htuple . const char *
File name of interior camera parameters.
Default Value : "campar.dat"
List of values : CamParFile ∈ {"campar.dat", "campar.initial", "campar.final"}
Example (Syntax: HDevelop)
read_image(Image3, ’calib-03’)
* find calibration pattern
find_caltab(Image1, Caltab1, ’caltab.descr’, 3, 112, 5)
find_caltab(Image2, Caltab2, ’caltab.descr’, 3, 112, 5)
find_caltab(Image3, Caltab3, ’caltab.descr’, 3, 112, 5)
* find calibration marks and start poses
StartCamPar := [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
find_marks_and_pose(Image1, Caltab1, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1,
StartPose1)
find_marks_and_pose(Image2, Caltab2, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2,
StartPose2)
find_marks_and_pose(Image3, Caltab3, ’caltab.descr’, StartCamPar,
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3,
StartPose3)
* read 3D positions of calibration marks
caltab_points(’caltab.descr’, NX, NY, NZ)
* camera calibration
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3],
[CCoord1, CCoord2, CCoord3], StartCamPar,
[StartPose1, StartPose2, StartPose3], ’all’,
CamParam, NFinalPose, Errors)
* write interior camera parameters to file
write_cam_par(CamParam, ’campar.dat’)
Result
write_cam_par returns H_MSG_TRUE if all parameter values are correct and the file has been written suc-
cessfully. If necessary an exception handling is raised.
Parallelization Information
write_cam_par is local and processed completely exclusively without parallelization.
Possible Predecessors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation
15.6 Datacode
clear_all_data_code_2d_models ( )
T_clear_all_data_code_2d_models ( )
Delete all 2D data code models and free the allocated memory
The operator clear_all_data_code_2d_models deletes all 2D data code models that were created by
create_data_code_2d_model or read_data_code_2d_model. All memory used by the models is
freed. After the operator call all 2D data code handles are invalid.
Attention
clear_all_data_code_2d_models exists solely for the purpose of implementing the “reset program”
functionality in HDevelop. clear_all_data_code_2d_models must not be used in any application.
Result
The operator clear_all_data_code_2d_models returns the value H_MSG_TRUE if all 2D data code
models were freed correctly. Otherwise, an exception will be raised.
Parallelization Information
clear_all_data_code_2d_models is processed completely exclusively without parallelization.
Alternatives
clear_data_code_2d_model
See also
create_data_code_2d_model, read_data_code_2d_model
Module
Data Code
stream are, in compliance with the standard, doubled (’\\’) for the output. This is necessary in order to distinguish
data backslashes from the ECI sequence ’\nnnnnn’.
The information whether the symbol contains ECI codes (and consequently doubled backslashes) or not is stored
in the Symbology Identifier number that can be obtained for every successfully decoded symbol with the help of
the operator get_data_code_2d_results passing the generic parameter ’symbology_ident’. How the code
number encodes additional information about the symbology and the data code reader, like the ECI support, is
defined in the different symbology specifications. For more information see the appropriate standards and the
operator get_data_code_2d_results.
The Symbology Identifier code is not prepended to the output data by the data code reader, even if the symbol
contains an ECI code. If this is needed, e.g., by a subsequent processing unit, the ’symbology_ident’ number
(obtained by the operator get_data_code_2d_results with parameter ’symbology_ident’) can be added to
the data stream manually together with the symbology flag: ’]d’, ’]Q’, or ’]L’ for Data Matrix
codes, QR codes, or PDF417 codes, respectively.
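As a sketch of this manual prepending (plain Python; the identifier value and the decoded string are invented, and the exact identifier format is defined by the respective symbology standard, so treat this as a hypothetical illustration):

```python
# Hypothetical sketch: prepending a Symbology Identifier to decoded data,
# as described above. The mapping and the sample values are illustrative.
flag = {'Data Matrix ECC 200': ']d', 'QR Code': ']Q', 'PDF417': ']L'}

symbology_ident = 1        # e.g. queried via get_data_code_2d_results
decoded = 'HALCON'         # decoded data string (invented sample)

tagged = flag['Data Matrix ECC 200'] + str(symbology_ident) + decoded
print(tagged)   # -> ]d1HALCON
```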
Standard default settings of the data code model
The default settings of the model were chosen to read a wide range of common symbols within a reasonable
amount of time. However, for run-time reasons some restrictions apply to the symbol (see the following table).
If the model was modified (as described later), it is at any time possible to reset it to these default settings by
passing the generic parameter ’default_parameters’ together with the value ’standard_recognition’ to the operator
set_data_code_2d_param.
It is possible to query the model parameters with the operator get_data_code_2d_param. The
names of all supported parameters for setting or querying the model are returned by the operator
query_data_code_2d_params.
Store the data code model
Furthermore, the operator write_data_code_2d_model allows writing the model into a file that can be
used later to create (e.g., in a different application) an identical copy of the model. Such a model copy is created
directly by read_data_code_2d_model (without calling create_data_code_2d_model).
Free the data code model
Since memory is allocated during create_data_code_2d_model and the following operations, the model
should be freed explicitly by the operator clear_data_code_2d_model if it is no longer used.
Parameter
. SymbolType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; (Htuple .) const char *
Type of the 2D data code.
Default Value : "Data Matrix ECC 200"
List of values : SymbolType ∈ {"Data Matrix ECC 200", "QR Code", "PDF417"}
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that can be adjusted for the 2D data code model.
Default Value : []
List of values : GenParamNames ∈ {"default_parameters", "strict_model", "persistence", "polarity",
"mirrored", "contrast_min", "model_type", "version", "version_min", "version_max", "symbol_size",
"symbol_size_min", "symbol_size_max", "symbol_cols", "symbol_cols_min", "symbol_cols_max",
"symbol_rows", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size",
"module_size_min", "module_size_max", "module_width", "module_width_min", "module_width_max",
"module_aspect", "module_aspect_min", "module_aspect_max", "module_gap", "module_gap_min",
"module_gap_max", "module_gap_col", "module_gap_col_min", "module_gap_col_max",
"module_gap_row", "module_gap_row_min", "module_gap_row_max", "slant_max", "module_grid",
"position_pattern_min"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) const char * / Hlong / double
Values of the generic parameters that can be adjusted for the 2D data code model.
Default Value : []
Suggested values : GenParamValues ∈ {"standard_recognition", "enhanced_recognition", "yes", "no",
"any", "dark_on_light", "light_on_dark", "square", "rectangle", "small", "big", "fixed", "variable", 0, 1, 2, 3, 4,
5, 6, 7, 8, 10, 30, 50, 70, 90, 12, 14, 16, 18, 20, 22, 24, 26, 32, 36, 40, 44, 48, 52, 64, 72, 80, 88, 96, 104, 120,
132, 144}
. DataCodeHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong *
Handle for using and accessing the 2D data code model.
Example (Syntax: HDevelop)
* (2) Create a model for reading a wide range of Data matrix ECC 200 codes
* (this model will also read light symbols on dark background)
create_data_code_2d_model (’Data Matrix ECC 200’, ’default_parameters’,
’enhanced_recognition’, DataCodeHandle)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator create_data_code_2d_model returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
create_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
set_data_code_2d_param, find_data_code_2d
Alternatives
read_data_code_2d_model
See also
clear_data_code_2d_model, clear_all_data_code_2d_models
Module
Data Code
Detect and read 2D data code symbols in an image or train the 2D data code model.
The operator find_data_code_2d detects 2D data code symbols in the input image (Image) and reads
the data that is coded in the symbol. Before calling find_data_code_2d, a model of a class of 2D data
codes that matches the symbols in the images must be created with create_data_code_2d_model or
read_data_code_2d_model. The handle returned by these operators is passed to find_data_code_2d
in DataCodeHandle. To look for more than one symbol in an image, the generic parameter
’stop_after_result_num’ can be passed to GenParamNames together with the number of requested symbols as
GenParamValues.
As a result the operator returns for every successfully decoded symbol the surrounding XLD contour
(SymbolXLDs), a result handle, which refers to a candidate structure that stores additional information about
the symbol as well as the search and decoding process (ResultHandles), and the string that is encoded in
the symbol (DecodedDataStrings). If the string is longer than 1024 characters, it is shortened to 1020
characters followed by ’...’. In this case, accessing the complete string is only possible with the operator
get_data_code_2d_results. Passing the candidate handle from ResultHandles together with the
generic parameter ’decoded_data’, get_data_code_2d_results returns a tuple with the ASCII codes of
all characters of the string.
Adjusting the model
If there is a symbol in the image that cannot be read, it should be verified, whether the properties of the symbol
fit the model parameters. Special attention should be paid to the correct polarity (’polarity’, light-on-dark or dark-
on-light), the symbol size (’symbol_size’ for ECC 200, ’version’ for QR Code, ’symbol_rows’ and ’symbol_cols’
for PDF417), the module size (’module_size’ for ECC 200 and QR Code, ’module_width’ and ’module_aspect’
for PDF417), the possibility of a mirroring of the symbol (’mirrored’), and the specified minimum contrast (’con-
trast_min’). Further relevant parameters are the gap between neighboring foreground modules and, for ECC 200,
the maximum slant of the L-shaped finder pattern (’slant_max’). The current settings for these parameters are
returned by the operator get_data_code_2d_param. If necessary, the appropriate model parameters can be
adjusted with set_data_code_2d_param.
Not least for run-time reasons, it is recommended to adjust the model as closely as possible to the symbols in the
images. In general, the run-time of find_data_code_2d is higher for a more general model than for a more
specific model. One should take into account that a general model leads to a high run-time especially if no valid
data code can be found.
Train the model
Besides setting the model parameters manually with set_data_code_2d_param, the model can also be
trained with find_data_code_2d based on one or several sample images. For this the generic parameter
’train’ must be passed in GenParamNames. The corresponding value passed in GenParamValues determines
the model parameters that should be learned. The following values are possible:
It is possible to train several of these parameters in one call of find_data_code_2d by passing the generic pa-
rameter ’train’ in a tuple more than once in conjunction with the appropriate parameters: e.g., GenParamNames
= [’train’,’train’] and GenParamValues = [’polarity’,’module_size’]. Furthermore, in conjunction with ’train’
= ’all’ it is possible to exclude single parameters from training explicitly again by passing ’train’ more than once.
The names of the parameters to exclude, however, must be prefixed by ’˜’: GenParamNames = [’train’,’train’]
and GenParamValues = [’all’,’˜contrast’], e.g., trains all parameters except the minimum contrast.
For training the model, the following aspects should be considered:
• To use several images for the training, the operator find_data_code_2d must be called with the param-
eter ’train’ once for every sample image.
• It is also possible to train the model with several symbols in one image. Here, the generic parameter
’stop_after_result_num’ must be passed as a tuple to GenParamNames together with ’train’. The num-
ber of symbols in the image is passed in GenParamValues together with the training parameters.
• If the training image contains more symbols than the one that shall be used for the training, the domain of the
image should be reduced to the symbol of interest with reduce_domain.
• In an application with very similar images, one image may be sufficient for training if the following assumptions
hold: the symbol size (in modules) is the same for all symbols used in the application; foreground
and background modules are of the same size and there is no gap between neighboring foreground modules;
the background has no distinct texture; and the contrast of all images is almost the same. Otherwise, several
images should be used for training.
• In applications where the symbol size (in modules) is not fixed, the smallest as well as the biggest symbols
should be used for the training. If this cannot be guaranteed, the limits for the symbol size should be adapted
manually after the training, or the symbol size should be excluded from the training entirely.
• During the first call of find_data_code_2d in the training mode, the trained model parameters are
restricted to the properties of the detected symbol. Any successive training call will, where necessary, extend
the parameter range to cover the already trained symbols as well as the new symbols. Resetting the model with
set_data_code_2d_param to one of its default settings (’default_parameters’ = ’standard_recognition’
or ’enhanced_recognition’) will also reset the training state of the model.
• If find_data_code_2d is not able to read the symbol in the training image, this produces
neither an error nor an exception. It can simply be detected in the program by checking the results of
find_data_code_2d: SymbolXLDs, ResultHandles, and DecodedDataStrings. These tuples
will be empty, and the model will not be modified.
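Taken together, the training aspects above might be combined as in the following sketch; the file names and the region SymbolRegion are placeholders:

```hdevelop
* Train the model with several sample images (file names are placeholders)
create_data_code_2d_model (’Data Matrix ECC 200’, [], [], DataCodeHandle)
for i := 1 to 3 by 1
    read_image (Image, ’sample_’ + i)
    * If the image contains further symbols, restrict the domain to the
    * symbol of interest (SymbolRegion is a placeholder)
    reduce_domain (Image, SymbolRegion, ImageReduced)
    find_data_code_2d (ImageReduced, SymbolXLDs, DataCodeHandle,
                       ’train’, ’all’, ResultHandles, DecodedDataStrings)
endfor
```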
Functionality of the symbol search
Depending on the current settings of the 2D data code model (see set_data_code_2d_param), the operator
find_data_code_2d performs several passes for searching the data code symbols. The search starts at the
highest pyramid level, where – according to the maximum module size defined in the data code model – the
modules can be separated. In addition, in every pyramid level the preprocessing can vary depending on the presets
for the module gap. If the data code model enables dark symbols on a light background as well as light symbols
on a dark background, within the current pyramid level the dark symbols are searched first. Then the passes for
searching light symbols follow. A pass consists of two phases: the search phase, in which the finder patterns are
looked for and a symbol candidate is generated for every detected finder pattern, and the evaluation phase, in which
all candidates are investigated on a lower pyramid level and – if possible – read.
The operator call is terminated after the pass in which the requested number of 2D data code symbols was suc-
cessfully decoded. The required number of symbols can be specified with the generic parameter GenParamNames
= ’stop_after_result_num’. The appropriate value is passed in GenParamValues; the default is 1.
When searching for more than one symbol in the image, it may happen that not all symbols are detected in the
same pass. In this case find_data_code_2d automatically continues the search until all symbols are found
or until the last pass has been performed. Conversely, if the input image contains several symbols but not all of
them have to be read, it is possible (especially if the symbols look similar) that more than the requested number of
symbols are returned as a result.
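For instance, a call that stops only after two symbols have been decoded could be sketched as:

```hdevelop
* Request (at least) two decoded symbols before the search terminates
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle,
                   ’stop_after_result_num’, 2, ResultHandles,
                   DecodedDataStrings)
```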
Query results of the symbol search
With the result handles and the operators get_data_code_2d_results and
get_data_code_2d_objects, additional data can be requested about the search process, e.g., the number
of internal search passes or the number of investigated candidates, and – together with the ResultHandles –
about the symbols, like the symbol and module size, the contrast, or the raw data coded in the symbol. In addition,
these operators provide information about all investigated candidates that could not be read. In particular, this
helps to determine if a candidate was actually generated at the symbol’s position during the preprocessing and – by
the value of a status variable – why the search or reading was aborted. Further information about the parameters
can be found with the operators get_data_code_2d_results and get_data_code_2d_objects.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .image ; Hobject : byte
Input image.
. SymbolXLDs (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; Hobject *
XLD contours that surround the successfully decoded data code symbols.
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of (optional) parameters for controlling the behavior of the operator.
Default Value : []
List of values : GenParamNames ∈ {"train", "stop_after_result_num"}
. GenParamValues (input_control) . . . . . . attribute.value(-array) ; (Htuple .) Hlong / double / const char *
Values of the optional generic parameters.
Default Value : []
Suggested values : GenParamValues ∈ {"all", "model_type", "symbol_size", "version", "module_size",
"module_shape", "polarity", "mirrored", "contrast", "module_grid", "image_proc", 1, 2, 3}
. ResultHandles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; (Htuple .) Hlong *
Handles of all successfully decoded 2D data code symbols.
. DecodedDataStrings (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) char *
Decoded data strings of all detected 2D data code symbols in the image.
Example (Syntax: HDevelop)
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Display all symbols, the strings encoded in them, and the module size
dev_set_color (’green’)
for i := 0 to |ResultHandles| - 1 by 1
select_obj (SymbolXLDs, SymbolXLD, i + 1)
dev_display (SymbolXLD)
get_contour_xld (SymbolXLD, Row, Col)
set_tposition (WindowHandle, max(Row), min(Col))
write_string (WindowHandle, DecodedDataStrings[i])
get_data_code_2d_results (DataCodeHandle, ResultHandles[i],
[’module_height’,’module_width’], ModuleSize)
new_line (WindowHandle)
write_string (WindowHandle, ’module size = ’ + ModuleSize[0] + ’x’ +
ModuleSize[1])
endfor
Result
The operator find_data_code_2d returns the value H_MSG_TRUE if the given parameters are correct.
Otherwise, an exception will be raised.
Parallelization Information
find_data_code_2d is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model, set_data_code_2d_param
Possible Successors
get_data_code_2d_results, get_data_code_2d_objects, write_data_code_2d_model
See also
create_data_code_2d_model, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code
get_data_code_2d_objects
Access iconic objects that were created during the search for 2D data code symbols.
The operator get_data_code_2d_objects provides access to iconic objects that were created during
the last call of find_data_code_2d while searching and reading the 2D data code symbols. Besides
the name of the object (ObjectName), the 2D data code model (DataCodeHandle) must be passed
to get_data_code_2d_objects. In addition, in CandidateHandle a handle of a result or candi-
date structure or a string identifying a group of candidates (see get_data_code_2d_results) must be
passed. These handles are returned by find_data_code_2d for all successfully decoded symbols and by
get_data_code_2d_results for a group of candidates. If these operators return several handles in a tuple,
the individual handles can be accessed by normal tuple operations.
Some objects are not accessible without setting the model parameter ’persistence’ to 1 (see
set_data_code_2d_param). The persistence must be set before calling find_data_code_2d, either
while creating the model with create_data_code_2d_model or with set_data_code_2d_param.
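A minimal sketch of enabling the persistence before the symbol search:

```hdevelop
* Keep the intermediate objects of the next search, e.g., so that the
* module ROIs can be queried afterwards
set_data_code_2d_param (DataCodeHandle, ’persistence’, 1)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
```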
Currently, the following iconic objects can be retrieved:
Regions of the modules
These region arrays correspond to the areas that were used for the classification. Because the returned object is a
region array, it cannot be requested for a group of candidates; instead, a single result handle must be passed in
CandidateHandle. The model persistence must be 1 for this object. In addition, requesting the module ROIs
makes sense only for symbols that were detected as valid symbols. For other candidates, whose processing was
aborted earlier, the module ROIs are not available.
XLD contour
This object can be requested for any group of results or for any single candidate or symbol handle. The persistence
setting is of no relevance.
Pyramid images
Example (Syntax: HDevelop)
* Example demonstrating how to access the iconic objects of the data code
* search.
* Get the handles of all candidates that were detected as a symbol but
* could not be read
get_data_code_2d_results (DataCodeHandle, ’all_undecoded’, ’handle’,
HandlesUndecoded)
* For every undecoded symbol, get the contour and the classified
* module regions
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
dev_set_color (’blue’)
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
                          ’candidate_xld’)
dev_display (SymbolXLD)
* Get the module regions of the foreground modules
dev_set_color (’green’)
get_data_code_2d_objects (ModuleFG, DataCodeHandle, HandlesUndecoded[i],
                          ’module_1_rois’)
dev_display (ModuleFG)
* Get the module regions of the background modules
dev_set_color (’red’)
get_data_code_2d_objects (ModuleBG, DataCodeHandle, HandlesUndecoded[i],
                          ’module_0_rois’)
dev_display (ModuleBG)
* Stop for inspecting the image
stop ()
endfor
Result
The operator get_data_code_2d_objects returns the value H_MSG_TRUE if the given parameters are
correct and the requested objects are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_objects is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
Possible Successors
get_data_code_2d_results
See also
query_data_code_2d_params, get_data_code_2d_results, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code
get_data_code_2d_param
Get one or several parameters that describe the 2D data code model.
The operator get_data_code_2d_param can be used to query the parameters that are used to describe the 2D
data code model. The names of the desired parameters are passed in the generic parameter GenParamNames,
the corresponding values are returned in GenParamValues. All these parameters can be set and changed at any
time with the operator set_data_code_2d_param. A list with the names of all parameters that are valid for
the used 2D data code type is returned by the operator query_data_code_2d_params.
The following parameters can be queried – ordered by different categories and data code types:
Size and shape of the symbol:
It is possible to query the values of several or all parameters with a single operator call by passing a tuple con-
taining the names of all desired parameters to GenParamNames. As a result a tuple of the same length with the
corresponding values is returned in GenParamValues.
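For example, two of the parameters listed below can be queried in one call (a sketch):

```hdevelop
* Query the module size limits of the model in a single call
get_data_code_2d_param (DataCodeHandle,
                        [’module_size_min’,’module_size_max’], Sizes)
```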
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that are to be queried for the 2D data code model.
Default Value : "contrast_min"
List of values : GenParamNames ∈ {"strict_model", "persistence", "polarity", "mirrored", "contrast_min",
"model_type", "version_min", "version_max", "symbol_size_min", "symbol_size_max", "symbol_cols_min",
"symbol_cols_max", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size_min",
"module_size_max", "module_width_min", "module_width_max", "module_aspect_min",
"module_aspect_max", "module_gap_col_min", "module_gap_col_max", "module_gap_row_min",
"module_gap_row_max", "slant_max", "module_grid", "position_pattern_min"}
. GenParamValues (output_control) . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
Values of the generic parameters.
Result
The operator get_data_code_2d_param returns the value H_MSG_TRUE if the given parameters are cor-
rect. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_param is reentrant and processed without parallelization.
Possible Predecessors
query_data_code_2d_params, set_data_code_2d_param, find_data_code_2d
Possible Successors
find_data_code_2d, write_data_code_2d_model
Alternatives
write_data_code_2d_model
See also
query_data_code_2d_params, set_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects, find_data_code_2d
Module
Data Code
get_data_code_2d_results
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
The operator get_data_code_2d_results provides access to several alphanumerical results that were calcu-
lated while searching and reading the 2D data code symbols. These results describe the search process in general
or one of the investigated candidates – independently of whether it could be read or not. The results are in most
cases not related to the symbol with the highest resolution but depend on the pyramid level that was investigated
when the reading process was aborted. To access a result, the name of the parameter (ResultNames) and the 2D
data code model (DataCodeHandle) must be passed. In addition, in CandidateHandle a handle of a result
or candidate structure or a string identifying a group of candidates must be passed. These handles are returned by
find_data_code_2d for all successfully decoded symbols and by get_data_code_2d_results for a
group of candidates. If these operators return several handles in a tuple, the individual handles can be accessed by
normal tuple operations.
Most results consist of one value. Several of these results can be queried for a specific candidate in a single call.
The values returned in ResultValues correspond to the appropriate parameter names in the ResultNames
tuple. As an alternative, these results can also be queried for a group of candidates (see below). In this case, only
one parameter can be requested per call, and ResultValues contains one value for every candidate.
Furthermore, there exists another group of results that consist of more than one value (e.g., ’bin_module_data’),
which are returned as a tuple. These parameters must always be queried exclusively: one result for one specific
candidate.
Apart from the candidate-specific results there are a number of results referring to the search process in general.
This is indicated by passing the string ’general’ in CandidateHandle instead of a candidate handle.
Candidate groups
The following candidate group names are predefined and can be passed as CandidateHandle instead of a
single handle:
’general’: This value is used for results that refer to the last find_data_code_2d call in general but not to a
specific candidate.
’all_candidates’: All candidates (including the successfully decoded symbols) that were investigated during the
last call of find_data_code_2d.
’all_results’: All symbols that were successfully decoded during the last call of find_data_code_2d.
’all_undecoded’: All candidates of the last call of find_data_code_2d that were detected as 2D data code
symbols, but could not be decoded. For these candidates the error correction detected too many errors, or
there was a failure while decoding the error-corrected data because of inconsistent data.
’all_aborted’: All candidates of the last call of find_data_code_2d that could not be identified as valid 2D
data code symbols and for which the processing was aborted.
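As a sketch, a general search statistic and a per-candidate result of one of these groups could be queried like this (the result names are taken from the list of supported results below):

```hdevelop
* Number of search passes performed by the last find_data_code_2d call
get_data_code_2d_results (DataCodeHandle, ’general’, ’pass_num’, PassNum)
* Status of all candidates whose processing was aborted
get_data_code_2d_results (DataCodeHandle, ’all_aborted’, ’status’, Stati)
```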
Supported results
Currently, the access to the following results, which are returned in ResultValues, is supported:
General results that do not depend on specific candidates (all data code types) – ’general’:
If required, this Symbology Identifier, composed of the prefix and the value m, has to be prepended to the
decoded string manually (normally only if m > 1). Symbols that contain ECI codes (and hence doubled
backslashes) can be recognized by the following identifier values: ECC 200: 4, 5, and 6; QR Code: 2, 4,
and 6; PDF417: 1.
• QR Codes:
’version’: version number that corresponds to the size of the symbol (version 1 = 21 × 21, version 2 = 25 ×
25, . . . , version 40 = 177 × 177).
’symbol_size’: detected size of the symbol in modules.
’model_type’: type of the QR Code model. HALCON supports both the older, original specification for QR
Codes (Model 1) and the newer, enhanced form (Model 2).
’mask_pattern_ref’, ’error_correction_level’: If a candidate is recognized as a QR Code, the first step is
to read the format information encoded in the symbol. This includes a code for the pattern that was
used for masking the data modules (0 ≤ ’mask_pattern_ref’ ≤ 7) and the level of the error correction
(’error_correction_level’ ∈ [’L’, ’M’, ’Q’, ’H’]).
• PDF417:
’module_aspect’: module aspect ratio; this corresponds to the ratio of ’module_height’ to ’module_width’.
’error_correction_level’: If a candidate is recognized as a PDF417, the first step is to read the format infor-
mation encoded in the symbol. This includes the error correction level, which was used during encoding
(’error_correction_level’ ∈ [0, 8]).
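The relation between QR Code version and symbol size quoted above is linear, which can be sketched as:

```hdevelop
* QR Code symbol size in modules: version 1 = 21 x 21, and every further
* version adds 4 modules per dimension
Version := 7
SymbolSize := 17 + 4 * Version
* a version-7 symbol has 45 x 45 modules
```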
Results that return a tuple of values and hence can be requested only separately and only for a single candidate:
For the 2D data codes ECC200 and QR Code, the print quality is described in a tuple with eight ele-
ments: (overall quality, contrast, modulation, fixed pattern damage, decode, axial nonuniformity, grid
nonuniformity, unused error correction).
The definition of the respective elements is as follows: The overall quality is the minimum of all indi-
vidual grades. The contrast is the range between the minimal and the maximal pixel intensity in the data
code domain, and a strong contrast results in a good grading. The modulation indicates how strong the
amplitudes of the data code modules are. Big amplitudes make the assignment of the modules to black
or white more certain, resulting in a high modulation grade. Note that the computation of the
modulation grade is influenced by the specific level of error correction capacity, meaning that the mod-
ulation degrades less for codes with higher error correction capacity. The fixed pattern of both ECC200
and QR Code is of high importance for detecting and decoding the codes. Degradation or damage of the
fixed pattern, or the respective quiet zones, is assessed with the fixed pattern damage quality. The decode
quality always takes the grade 4, meaning that the code could be decoded. Naturally, codes which can
not be decoded can not be assessed concerning print quality either. Originally, data codes have squared
modules, i.e. the width and height of the modules are the same. Due to a potentially oblique view
of the camera onto the data code or a defective fabrication of the data code itself, the width to height
ratio can be distorted. This deterioration results in a degraded axial nonuniformity. If apart from an
affine distortion the data code is subject to perspective or any other distortions too this degrades the grid
nonuniformity quality. As data codes are redundant codes, errors in the modules or codewords can be
corrected. The amount of error correcting capacities which is not already used by the present data code
symbol is expressed in the unused error correction quality. In a way, this grade reflects the reliability of
the decoding process. Note, that even codes with an unused error correction grading of 0, which could
possibly mean a false decoding result, can be decoded by the find_data_code_2d operator in a re-
liable way, because the implemented decoding functionality is more sophisticated and robust compared
to the reference decode algorithm proposed by the standard.
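The print quality tuple described above corresponds to the result name ’quality_isoiec15415’ (see the list of supported result names below); a sketch of querying it:

```hdevelop
* Query the eight-element print quality tuple of a decoded symbol
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          ’quality_isoiec15415’, Quality)
* Quality[0] holds the overall grade, i.e., the minimum of the
* individual grades
```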
For the 2D stacked code PDF417 the print quality is described in a tuple with seven elements: (overall
quality, start/stop pattern, codeword yield, unused error correction, modulation, decodability, defects).
The definition of the respective elements is as follows: The overall quality is the minimum of all individ-
ual grades. As the PDF417 data code is a stacked code, which can be read by line scan devices as well,
print quality assessment is basically based on techniques for linear bar codes: a set of scan reflectance
profiles is generated across the symbol followed by the evaluation of the respective print qualities within
each scan, which are finally subsumed as overall print qualities. For more details, the user is referred
to the standard for linear symbols, ISO/IEC 15416. In start/stop pattern, the start and stop patterns are
assessed concerning the quality of the reflectance profile and the correctness of the bar and space se-
quence. The grade codeword yield counts and evaluates the relative number of correctly decoded words
acquired by the set of scan profiles. For the grade unused error correction, the relative number of falsely
decoded words within the error correction blocks is counted. As for 2D data codes, the modulation
grade indicates how strong the amplitudes, i.e. the extremal intensities, of the bars and spaces are. The
grade decodability measures the deviation of the actual length of bars and spaces with respect to their
reference length. And finally, the grade defects refers to a measurement of how perfect the reflectance
profiles of bars and spaces are.
• PDF417:
’macro_exist’: symbols that are part of a group of symbols are called ’Macro PDF417’ symbols. These
symbols contain additional information within a control block. For macro symbols ’macro_exist’ returns
the value 1 while for conventional symbols 0 is returned.
’macro_segment_index’: returns the index of the symbol in the group. For macro symbols this information
is obligatory.
’macro_file_id’: returns the group identifier as a string. For macro symbols this information is obligatory.
’macro_segment_count’: returns the number of symbols that belong to the group. For macro symbols this
information is optional.
’macro_time_stamp’: returns the time stamp on the source file expressed as the elapsed time in seconds since
1970:01:01:00:00:00 GMT as a string. For macro symbols this information is optional.
’macro_checksum’: returns the CRC checksum computed over the entire source file using the CCITT-16
polynomial. For macro symbols this information is optional.
’macro_last_symbol’: returns 1 if the symbol is the last one within the group of symbols. Otherwise 0 is
returned. For macro symbols this information is optional.
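Since each of these macro results consists of a single value, several of them can be requested in one call; as a sketch:

```hdevelop
* Query the Macro PDF417 control block information of the first result
get_data_code_2d_results (DataCodeHandle, ResultHandles[0],
                          [’macro_exist’,’macro_segment_index’,
                          ’macro_file_id’], MacroInfo)
```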
Status message
The status parameter that can be queried for all candidates reveals why and where in the evaluation phase a candi-
date was discarded. The following list shows the most important status messages in the order of their generation
during the evaluation phase:
• QR Code:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted adjusting: finder patterns’ – It is not possible to determine the exact position of the finder pattern
in the processing image.
’aborted symbol: different number of rows and columns’ – It is not possible to determine a consistent
symbol size for both dimensions from the size and the position of the detected finder pattern. When reading
Model 2 symbols, this error may occur only with small symbols (< version 7 or 45 × 45 modules). For
bigger symbols the size is coded within the symbol in the version information region. The estimated size
is used only as a hint for finding the version information region.
’aborted symbol: invalid size’ – The size determined by the size and the position of the detected finder pat-
tern is too small or (only Model 1) too big.
’decoding of version information failed’ – While processing a Model 2 symbol, the symbol version as deter-
mined by the finder pattern is at least 7 (≥ 45 × 45 modules). However, reading the version from the
appropriate region in the symbol failed.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’decoding of format information failed’ – Reading the format information (mask pattern and error correction
level) from the appropriate region in the symbol failed.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.
• PDF417:
’aborted: too close to image border’ – The symbol candidate is too close to the border. Only symbols that
are completely within the image can be read.
’aborted symbol: size does not fit strict model definition’ – Although the deduced symbol size is valid, it is
not inside the range predefined by the model.
’error correction failed’ – The error correction failed because there are too many modules that couldn’t be
interpreted correctly. Normally, this indicates that the print and/or image quality is too bad, but it may
also be provoked by a wrong mirroring specification in the model.
’decoding failed: special decoding reader requested’ – The decoded data contains a message for program-
ming the data code reader. This feature is not supported.
’decoding failed: inconsistent data’ – The data coded in the symbol is not consistent and therefore cannot
be read.
While processing a candidate, it is possible that internally several iterations for reading the symbol are performed.
If all attempts fail, normally the last abort status is stored in the candidate structure. For example, if the QR Code
model enables symbols with Model 1 and Model 2 specification, find_data_code_2d first tries to inter-
pret the symbol as a Model 2 type. If this fails, Model 1 interpretation is performed. If this also fails, the sta-
tus variable is set to the latest failure state of the Model 1 interpretation. In order to get the error state of
the Model 2 branch, the ’model_type’ parameter of the data code model must be restricted accordingly (with
set_data_code_2d_param).
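Restricting the model in this way might be sketched as follows; the value 2 for ’model_type’ is an assumption based on the Model 1/Model 2 distinction described above:

```hdevelop
* Restrict the QR Code model to Model 2 so that the status of the
* Model 2 interpretation can be inspected
set_data_code_2d_param (DataCodeHandle, ’model_type’, 2)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
get_data_code_2d_results (DataCodeHandle, ’all_aborted’, ’status’, Stati)
```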
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. CandidateHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; (Htuple .) const char * / Hlong
Handle of the 2D data code candidate or name of a group of candidates for which the data is required.
Default Value : "all_candidates"
Suggested values : CandidateHandle ∈ {0, 1, 2, "general", "all_candidates", "all_results",
"all_undecoded", "all_aborted"}
. ResultNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the results of the 2D data code to return.
Default Value : "status"
Suggested values : ResultNames ∈ {"min_search_level", "max_search_level", "pass_num", "result_num",
"candidate_num", "undecoded_num", "aborted_num", "handle", "pass", "status", "search_level",
"process_level", "polarity", "module_gap", "mirrored", "model_type", "symbol_rows", "symbol_cols",
"symbol_size", "version", "module_height", "module_width", "module_aspect", "slant", "contrast",
"module_grid", "decoded_string", "decoding_error", "symbology_ident", "mask_pattern_ref",
"error_correction_level", "bin_module_data", "raw_coded_data", "corr_coded_data", "decoded_data",
"quality_isoiec15415", "structured_append", "macro_exist", "macro_segment_index", "macro_file_id",
"macro_segment_count", "macro_time_stamp", "macro_checksum", "macro_last_symbol"}
. ResultValues (output_control) . . . . . . . . . . attribute.value(-array) ; (Htuple .) char * / Hlong * / double *
List with the results.
Example (Syntax: HDevelop)
* Example demonstrating how to access the results of the data code search.
* For every undecoded symbol, get the contour, the symbol size, and
* the binary module data
dev_set_color (’red’)
for i := 0 to |HandlesUndecoded| - 1 by 1
* Get the contour of the symbol
get_data_code_2d_objects (SymbolXLD, DataCodeHandle, HandlesUndecoded[i],
’candidate_xld’)
* Get the symbol size
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
[’symbol_rows’,’symbol_cols’], SymbolSize)
* Get the binary module data (has to be queried exclusively)
get_data_code_2d_results (DataCodeHandle, HandlesUndecoded[i],
’bin_module_data’, BinModuleData)
* Stop for inspecting the data
stop ()
endfor
Result
The operator get_data_code_2d_results returns the value H_MSG_TRUE if the given parameters are
correct and the requested results are available for the last symbol search. Otherwise, an exception will be raised.
Parallelization Information
get_data_code_2d_results is reentrant and processed without parallelization.
Possible Predecessors
find_data_code_2d, query_data_code_2d_params
Possible Successors
get_data_code_2d_objects
See also
query_data_code_2d_params, get_data_code_2d_objects, get_data_code_2d_param,
set_data_code_2d_param
Module
Data Code
query_data_code_2d_params
Get for a given 2D data code model the names of the generic parameters or objects that can be used in the other
2D data code operators.
The operator query_data_code_2d_params returns the names of the generic parameters that are sup-
ported by the 2D data code operators set_data_code_2d_param, get_data_code_2d_param,
find_data_code_2d, get_data_code_2d_results, and get_data_code_2d_objects. The
parameter QueryName is used to select the desired parameter group:
The returned parameter list depends only on the type of the data code and not on the current state of the model or
its results.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; Htuple . Hlong
Handle of the 2D data code model.
. QueryName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; Htuple . const char *
Name of the parameter group.
Default Value : "get_result_params"
List of values : QueryName ∈ {"get_model_params", "set_model_params", "find_params",
"get_result_params", "get_result_objects"}
. GenParamNames (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; Htuple . char *
List containing the names of the supported generic parameters.
Example (Syntax: HDevelop)
* This example demonstrates how the names of all available model parameters
* can be queried. This is used to request first the settings of the
* untrained and then the settings of the trained model.
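A sketch matching these comments (variable names are placeholders):

```hdevelop
* Get the names of all model parameters of this data code type
query_data_code_2d_params (DataCodeHandle, ’get_model_params’, AllNames)
* Settings of the untrained model
get_data_code_2d_param (DataCodeHandle, AllNames, ParamsUntrained)
* Train the model, then query the settings again
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, ’train’, ’all’,
                   ResultHandles, DecodedDataStrings)
get_data_code_2d_param (DataCodeHandle, AllNames, ParamsTrained)
```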
Result
The operator query_data_code_2d_params returns the value H_MSG_TRUE if the given parameters are
correct. Otherwise, an exception will be raised.
Parallelization Information
query_data_code_2d_params is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model
Possible Successors
get_data_code_2d_param, get_data_code_2d_results, get_data_code_2d_objects
Module
Data Code
read_data_code_2d_model
Read a 2D data code model from a file and create a new model.
The operator read_data_code_2d_model reads the 2D data code model file FileName and creates a new
model that is an identical copy of the saved model. The parameter DataCodeHandle returns the handle of the
new model. The model file FileName must have been created by the operator write_data_code_2d_model.
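A typical sequence; the file name is a placeholder:

```hdevelop
* Load a previously written model and use it for the symbol search
read_data_code_2d_model (’my_datacode_model’, DataCodeHandle)
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
                   ResultHandles, DecodedDataStrings)
```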
Parameter
Result
The operator read_data_code_2d_model returns the value H_MSG_TRUE if the named 2D data code file
was found and correctly read. Otherwise, an exception will be raised.
Parallelization Information
read_data_code_2d_model is processed completely exclusively without parallelization.
Possible Successors
find_data_code_2d
Alternatives
create_data_code_2d_model
See also
write_data_code_2d_model, clear_data_code_2d_model,
clear_all_data_code_2d_models
Module
Data Code
’contrast_min’: minimum contrast between the foreground and the background of the symbol (this measure
corresponds to the minimum gradient between the symbol’s foreground and the background).
Values: [1 . . . 100]
Default: 30 (enhanced: 10)
• Data Matrix ECC 200 and QR Code:
’module_size_min’: minimum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 6 (enhanced: 2)
’module_size_max’: maximum size of the modules in the image in pixels.
Values: [2 . . . 100]
Default: 20 (enhanced: 100)
’module_size’: set ’module_size_min’ and ’module_size_max’ to the same value.
It is possible to specify whether neighboring foreground modules are connected or whether there is or may be
a gap between them. If the foreground modules are connected and fill the module space completely, the gap
parameter can be set to ’no’. The parameter is set to ’small’ if there is a very small gap between two modules;
it can be set to ’big’ if the gap is slightly bigger. The last two settings may also be useful if the foreground
modules – although being connected – appear thinner than their allotted space (e.g., as a result of blooming
caused by a bright illuminant). If the foreground modules appear only as very small dots (in relation to the
module size: < 50%), in general, an appropriate preprocessing of the image for detecting or enlarging the
modules will be necessary (e.g., by gray_erosion_shape or gray_dilation_shape):
’module_gap_col_min’: minimum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_col_max’: maximum gap in direction of the symbol columns.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_row_min’: minimum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’no’
’module_gap_row_max’: maximum gap in direction of the symbol rows.
Values: ’no’, ’small’, ’big’
Default: ’small’ (enhanced: ’big’)
’module_gap_col’: set ’module_gap_col_min’ and ’module_gap_col_max’ to the same value.
’module_gap_row’: set ’module_gap_row_min’ and ’module_gap_row_max’ to the same value.
’module_gap_min’: set ’module_gap_col_min’ and ’module_gap_row_min’ to the same value.
’module_gap_max’: set ’module_gap_col_max’ and ’module_gap_row_max’ to the same value.
’module_gap’: set ’module_gap_col_min’, ’module_gap_col_max’, ’module_gap_row_min’, and ’mod-
ule_gap_row_max’ to the same value.
• PDF417:
’module_width_min’: minimum module width in the image in pixels.
Values: [2 . . . 100]
Default: 3 (enhanced: 2)
’module_width_max’: maximum module width in the image in pixels.
Values: [2 . . . 100]
Default: 15 (enhanced: 100)
’module_width’: set ’module_width_min’ and ’module_width_max’ to the same value.
’module_aspect_min’: minimum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 1.0
’module_aspect_max’: maximum module aspect ratio (module height to module width).
Values: [0.5 . . . 20.0]
Default: 4.0 (enhanced: 10.0)
’module_aspect’: set ’module_aspect_min’ and ’module_aspect_max’ to the same value.
• Data matrix ECC 200:
’slant_max’: maximum deviation of the angle of the L-shaped finder pattern from the (ideal) right angle (the
angle is specified in radians and corresponds to the distortion that occurs when the symbol is printed or
during the image acquisition).
Value range: [0.0 . . . 0.5235]
Default: 0.1745 = 10° (enhanced: 0.5235 = 30°)
’module_grid’: describes whether the size of the modules may vary (in a specific range) or not. Depending
on this parameter, different algorithms are used for calculating the modules’ center positions. If it is set to
’fixed’, an equidistant grid is used. If a variable module size is allowed (’variable’), the grid is aligned only
to the alternating side of the finder pattern. With ’any’, both approaches are tried one after the other.
Values: ’fixed’, ’variable’, ’any’
Default: ’fixed’ (enhanced: ’any’)
• QR Code:
’position_pattern_min’: Number of position detection patterns that have to be visible for generating a new
symbol candidate.
Value range: [2, 3]
Default: 3 (enhanced: 2)
When setting the model parameters, particular attention should be paid to the following issues:
• Symbols whose size does not comply with the size restrictions made in the model (with the generic parameters
’symbol_rows*’, ’symbol_cols*’, ’symbol_size*’, or ’version*’) will not be read if ’strict_model’ is set to
’yes’, which is the default. This behavior is useful if symbols of a specific size have to be detected while
other symbols should be ignored. On the other hand, neglecting this parameter can lead to problems, e.g.,
if one symbol of an image sequence is used to adjust the model (including the symbol size), but later in the
application the symbol size varies, which is quite common in practice.
• The run-time of find_data_code_2d depends mostly on the following model parameters, namely in
cases where the requested number of symbols cannot be found in the image: ’polarity’, ’module_size_min’
(ECC 200 and QR Code) and ’module_size_min’ together with ’module_aspect_min’ (PDF417), and if the
minimum module size is very small also the parameters ’module_gap_*’ (ECC 200 and QR Code), for QR
Code also ’position_pattern_min’.
Parameter
. DataCodeHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . datacode_2d ; (Htuple .) Hlong
Handle of the 2D data code model.
. GenParamNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; (Htuple .) const char *
Names of the generic parameters that shall be adjusted for the 2D data code.
Default Value : "polarity"
List of values : GenParamNames ∈ {"default_parameters", "strict_model", "persistence", "polarity",
"mirrored", "contrast_min", "model_type", "version", "version_min", "version_max", "symbol_size",
"symbol_size_min", "symbol_size_max", "symbol_cols", "symbol_cols_min", "symbol_cols_max",
"symbol_rows", "symbol_rows_min", "symbol_rows_max", "symbol_shape", "module_size",
* Read an image
read_image (Image, ’datacode/ecc200/ecc200_cpu_010’)
* Read the symbol in the image
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [],
ResultHandles, DecodedDataStrings)
* Clear the model
clear_data_code_2d_model (DataCodeHandle)
Result
The operator set_data_code_2d_param returns the value H_MSG_TRUE if the given parameters are cor-
rect. Otherwise, an exception will be raised.
Parallelization Information
set_data_code_2d_param is reentrant and processed without parallelization.
Possible Predecessors
create_data_code_2d_model, read_data_code_2d_model
Possible Successors
get_data_code_2d_param, find_data_code_2d, write_data_code_2d_model
Alternatives
read_data_code_2d_model
See also
query_data_code_2d_params, get_data_code_2d_param, get_data_code_2d_results,
get_data_code_2d_objects
Module
Data Code
Result
The operator write_data_code_2d_model returns the value H_MSG_TRUE if the passed handle is valid
and if the model can be written into the named file. Otherwise, an exception will be raised.
Parallelization Information
write_data_code_2d_model is reentrant and processed without parallelization.
Possible Predecessors
set_data_code_2d_param, find_data_code_2d
Alternatives
get_data_code_2d_param
See also
create_data_code_2d_model, set_data_code_2d_param, find_data_code_2d
Module
Data Code
15.7 Fourier-Descriptor
Normalization of the Fourier coefficients with respect to displacements of the starting point.
The operator abs_invar_fourier_coeff normalizes the Fourier coefficients with regard to displacements
of the starting point, which occur when an object is rotated. The contour tracer get_region_contour
starts recording the contour in the upper left-hand corner of the region and follows the contour clockwise. If
the object is rotated, the starting point of the contour point chain is different, which leads to a phase shift in the
frequency space. The following two kinds of normalization are available:
abs_amount: The phase information is eliminated; the normalization does not retain the structure, i.e., if the
AZ-invariants are transformed back, no similarity with the pattern can be recognized anymore.
az_invar1: AZ-invariants of the 1st order perform the normalization with respect to the displacement of the
starting point so that the structure is retained; they are, however, more prone to local and global disturbances,
in particular to projective distortions.
Parameter
get_region_contour(single,&row,&col);
length_of_contour = length_tuple(row);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
abs_invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Possible Successors
fourier_1dim_inv, match_fourier_coeff
Module
Foundation
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
fourier_1dim is reentrant and processed without parallelization.
Possible Predecessors
prep_contour_fourier
Possible Successors
invar_fourier_coeff, disp_polygon
Module
Foundation
Inverse transformation of Fourier coefficients (Fourier descriptors). The number of values to be transformed
back should not exceed the length of the transformed contour.
Parameter
. RealCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Real parts.
. ImaginaryCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Imaginary parts.
. MaxCoef (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Number of steps for the inverse transformation.
Default Value : 100
Suggested values : MaxCoef ∈ {5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200, 400}
Restriction : MaxCoef ≥ 1
. Rows (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.y-array ; Htuple . double *
Row coordinates.
. Columns (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . contour.x-array ; Htuple . double *
Column coordinates.
Example (Syntax: C++)
get_region_contour(single,&row,&col);
length_of_contour = row.Num();
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
fourier_1dim_inv(absrow,abscol,length_of_contour,&fsynrow,&fsyncol);
Parallelization Information
fourier_1dim_inv is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff, fourier_1dim
Possible Successors
disp_polygon
Module
Foundation
Parameter
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,"az_invar1",&absrow,&abscol);
Parallelization Information
invar_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
fourier_1dim
Possible Successors
invar_fourier_coeff
Module
Foundation
none: No attenuation.
1/index: Absolute values of the Fourier coefficients will be divided by their index.
1/(index*index): Absolute values of the Fourier coefficients will be divided by their squared index.
The higher the result value, the greater the differences between the pattern and the test contour. If the number of
coefficients is not the same, only the first n coefficients will be compared. The parameter MaxCoef indicates the
number of the coefficients to be compared. If MaxCoef is set to zero, all coefficients will be used.
Parameter
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
invar_fourier_coeff(frow,fcol,1,"affine_invar",&invrow,&invcol);
abs_invar_fourier_coeff(invrow,invcol,1,2,
"az_invar1",&absrow,&abscol);
match_fourier_coeff(contur1_row, contur1_col,
contur2_row, contur2_col, 50,
"1/index", &Distance_wert);
Parallelization Information
match_fourier_coeff is reentrant and processed without parallelization.
Possible Predecessors
invar_fourier_coeff
Module
Foundation
Parallelization Information
move_contour_orig is processed completely exclusively without parallelization.
Possible Predecessors
get_region_contour
Possible Successors
prep_contour_fourier
Module
Foundation
Please note that, in contrast to the signed or unsigned area, the radian (arc length) is not transformed linearly
under an affine mapping.
Parameter
get_region_contour(single,&row,&col);
move_contour_orig(row,col,&trow,&tcol);
prep_contour_fourier(trow,tcol,"unsigned_area",&param_scale);
fourier_1dim(trow,tcol,param_scale,50,&frow,&fcol);
Parallelization Information
prep_contour_fourier is reentrant and processed without parallelization.
Possible Predecessors
move_contour_orig
Possible Successors
fourier_1dim
Module
Foundation
15.8 Function
T_abs_funct_1d ( const Htuple Function, Htuple *FunctionAbsolute )
ComposedFunction(x) = Function2(Function1(x)).
ComposedFunction has the same domain (x-range) as Function1. If the range (y-value range) of
Function1 is larger than the domain of Function2, the parameter Border determines the border treatment of
Function2. For Border=’zero’ values outside the domain of Function2 are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, and for
Border=’cyclic’ they are continued cyclically. To obtain y-values, Function2 is interpolated linearly.
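For illustration, the composition can be sketched in plain C for functions sampled at x = 0, 1, ..., n-1 (as created by create_funct_1d_array). Only the ’constant’ border mode is shown; the helper names are hypothetical and the code is a reimplementation of the described semantics, not HALCON’s:

```c
#include <stddef.h>

/* Evaluate a sampled function at x with linear interpolation; arguments
 * outside the domain [0, n-1] are clamped ('constant' border mode). */
static double eval_constant(const double *y, size_t n, double x)
{
    if (x <= 0.0) return y[0];
    if (x >= (double)(n - 1)) return y[n - 1];
    size_t i = (size_t)x;
    double frac = x - (double)i;
    return y[i] + frac * (y[i + 1] - y[i]);
}

/* Sketch of compose_funct_1d: out(x) = f2(f1(x)) for each sample of f1. */
void compose_sampled(const double *f1, size_t n1,
                     const double *f2, size_t n2, double *out)
{
    for (size_t i = 0; i < n1; i++)
        out[i] = eval_constant(f2, n2, f1[i]);
}
```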
Parameter
. Function1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 1.
. Function2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function 2.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for the input functions.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic"}
. ComposedFunction (output_control) . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Composed function.
Parallelization Information
compose_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation
create_funct_1d_array creates a one-dimensional function from a set of y-values YValues. The resulting
function can then be processed and analyzed with the operators for 1d functions. YValues is interpreted as
follows: the first value of YValues is the function value at zero, the second value is the function value at one, etc.
Thus, the values define a function at equidistant x values (with distance 1), starting at 0.
Alternatively, the operator create_funct_1d_pairs can be used to create a function.
create_funct_1d_pairs also allows defining a function with non-equidistant x values by specifying
them explicitly. Thus, to get the same definition as with create_funct_1d_array, one would pass a tuple
of x values to create_funct_1d_pairs that has the same length as YValues and contains values starting
at 0 and increasing by 1 in each position. Note, however, that create_funct_1d_pairs leads to a different
internal representation of the function, which needs more storage (because all (x,y) pairs are stored) and sometimes
cannot be processed as efficiently as functions created by create_funct_1d_array.
Parameter
Alternatives
create_funct_1d_array, read_funct_1d
See also
funct_1d_to_pairs
Module
Foundation
Parallelization Information
distance_funct_1d is reentrant and processed without parallelization.
Module
Foundation
get_y_value_funct_1d returns the y value of the function Function at the x coordinates specified by X. To
obtain the y values, the input function is interpolated linearly. The parameter Border determines the values of the
function Function outside of its domain. For Border=’zero’ these values are set to 0, for Border=’constant’
they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored at the border, for
Border=’cyclic’ they are continued cyclically, and for Border=’error’ an exception is raised.
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double
X coordinate at which the function should be evaluated.
. Border (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Border treatment for the input function.
Default Value : "constant"
List of values : Border ∈ {"zero", "constant", "mirror", "cyclic", "error"}
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double *
Y value at the given x value.
Parallelization Information
get_y_value_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array
Module
Foundation
invert_funct_1d calculates the inverse function of the input function Function and returns it in
InverseFunction. The function Function must be monotonic. If this is not the case, an error message
is returned.
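The monotonicity requirement can be made concrete with a plain-C sketch for a function stored as (x, y) pairs: inverting it amounts to swapping the roles of x and y, which only yields a function if no y value occurs twice. The function name is hypothetical and this is an illustration, not the HALCON implementation:

```c
#include <stddef.h>

/* Invert a function given as pairs (x[i], y[i]) with strictly increasing
 * y: the inverse is the pair sequence with x and y swapped.  Returns 0 if
 * y is not strictly increasing (function not invertible this way). */
int invert_pairs(const double *x, const double *y, size_t n,
                 double *xi, double *yi)
{
    for (size_t i = 1; i < n; i++)
        if (y[i] <= y[i - 1])
            return 0;               /* not monotonic */
    for (size_t i = 0; i < n; i++) {
        xi[i] = y[i];               /* swap the roles of x and y */
        yi[i] = x[i];
    }
    return 1;
}
```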
Parameter
y1(x) = a1 * y2(a3 * x + a4) + a2.
The transformation parameters are determined by a least-squares minimization of the following function:
sum_{i=0}^{n-1} ( y1(x_i) - (a1 * y2(a3 * x_i + a4) + a2) )^2 .
The values of the function y2 are obtained by linear interpolation. The parameter Border determines the val-
ues of the function Function2 outside of its domain. For Border=’zero’ these values are set to 0, for
Border=’constant’ they are set to the corresponding value at the border, for Border=’mirror’ they are mirrored
at the border, and for Border=’cyclic’ they are continued cyclically. The calculated transformation parameters
are returned as a 4-tuple in Params. If some of the parameter values are known, the respective parameters can
be excluded from the least-squares adjustment by setting the corresponding value in the tuple UseParams to the
value ’false’. In this case, the tuple ParamsConst must contain the known value of the respective parameter. If
a parameter is used for the adjustment (UseParams = ’true’), the corresponding parameter in ParamsConst is
ignored. On output, match_funct_1d_trans additionally returns the sum of the squared errors ChiSquare
of the resulting function, i.e., the function obtained by transforming the input function with the transformation pa-
rameters, as well as the covariance matrix Covar of the transformation parameters Params. These parameters
can be used to decide whether a successful matching of the functions was possible.
Parameter
Parallelization Information
match_funct_1d_trans is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_array, create_funct_1d_pairs
See also
gray_projections
Module
Foundation
Parameter
yt(x) = a1 * y(a3 * x + a4) + a2.
The output function TransformedFunction is obtained by transforming the x and y values of the input func-
tion separately with the above formula, i.e., the output function is not sampled again. Therefore, the parameter a3
is restricted to a3 ≠ 0.0. To resample a function, the operator sample_funct_1d can be used.
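For a function given as (x, y) pairs, the separate transformation of x and y values can be sketched in plain C: yt(x) = a1*y(a3*x + a4) + a2 means a sample at input x maps to output x' = (x - a4)/a3 with value y' = a1*y + a2, which is why a3 must not be 0. The function name is hypothetical; this illustrates the formula, not HALCON’s code:

```c
#include <stddef.h>

/* Transform the pairs (x[i], y[i]) with parameters a = {a1, a2, a3, a4}:
 * x' = (x - a4) / a3,  y' = a1 * y + a2.  Requires a3 != 0. */
void transform_pairs(const double *x, const double *y, size_t n,
                     const double a[4], double *xt, double *yt)
{
    for (size_t i = 0; i < n; i++) {
        xt[i] = (x[i] - a[3]) / a[2];
        yt[i] = a[0] * y[i] + a[1];
    }
}
```

Substituting x' back into yt confirms the identity: a3*x' + a4 = x, so yt(x') = a1*y(x) + a2.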
HALCON 8.0.2
1200 CHAPTER 15. TOOLS
Parameter
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d-array ; Htuple . double / Hlong
Input function.
. Params (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double
Transformation parameters between the functions.
Number of elements : 4
. TransformedFunction (output_control) . . . . . . . . . . . . function_1d-array ; Htuple . double * / Hlong *
Transformed function.
Parallelization Information
transform_funct_1d is reentrant and processed without parallelization.
Possible Predecessors
create_funct_1d_pairs, create_funct_1d_array, match_funct_1d_trans
Module
Foundation
15.9 Geometry
RowA1 := 255
ColumnA1 := 10
RowA2 := 255
ColumnA2 := 501
disp_line (WindowHandle, RowA1, ColumnA1, RowA2, ColumnA2)
RowB1 := 255
ColumnB1 := 255
for i := 1 to 360 by 1
RowB2 := 255 + sin(rad(i)) * 200
ColumnB2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, RowB1, ColumnB1, RowB2, ColumnB2)
angle_ll (RowA1, ColumnA1, RowA2, ColumnA2,
RowB1, ColumnB1, RowB2, ColumnB2, Angle)
endfor
Result
angle_ll returns H_MSG_TRUE.
Parallelization Information
angle_ll is reentrant and processed without parallelization.
Alternatives
angle_lx
Module
Foundation
T_angle_lx ( const Htuple Row1, const Htuple Column1, const Htuple Row2,
const Htuple Column2, Htuple *Angle )
Calculate the angle between one line and the horizontal axis.
The operator angle_lx calculates the angle between one line and the abscissa. As input, the coordinates of two
points on the line (Row1,Column1, Row2,Column2) are expected. The calculation is performed as follows: The
line is interpreted as a vector with starting point Row1,Column1 and end point Row2,Column2. Rotating this
vector counterclockwise onto the abscissa (the center of rotation is the intersection point with the abscissa) yields
the angle. The result depends on the order of the points on the line. The parameter Angle returns the angle in
radians, in the range −π ≤ Angle ≤ π.
Parameter
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. Angle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Angle between the line and the abscissa [rad].
Example (Syntax: HDevelop)
RowX1 := 255
ColumnX1 := 10
RowX2 := 255
ColumnX2 := 501
disp_line (WindowHandle, RowX1, ColumnX1, RowX2, ColumnX2)
Row1 := 255
Column1 := 255
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
angle_lx (Row1, Column1, Row2, Column2, Angle)
endfor
Result
angle_lx returns H_MSG_TRUE.
Parallelization Information
angle_lx is reentrant and processed without parallelization.
Alternatives
angle_ll
Module
Foundation
Result
distance_cc returns H_MSG_TRUE.
Parallelization Information
distance_cc is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc_min
See also
distance_sr, distance_pr
Module
Foundation
distance_cc_min calculates the minimum distance between two contours Contour1 and Contour2. The
minimum distance is returned in DistanceMin.
The parameter Mode selects how the distance is computed: ’point_to_point’ determines the distance of the
closest contour points, ’fast_point_to_segment’ calculates the distance of the line segments adjacent to these points,
and ’point_to_segment’ determines the actual minimum distance of the contour segments.
While ’point_to_point’ and ’fast_point_to_segment’ are efficient algorithms with a complexity of n*log(n),
’point_to_segment’ has quadratic complexity and thus takes a longer time to execute, especially for contours with
many line segments.
Parameter
Result
distance_cc_min returns H_MSG_TRUE.
Parallelization Information
distance_cc_min is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_pc, distance_cc
See also
distance_sr, distance_pr
Module
Foundation
Parameter
. Contour (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont ; Hobject
Input contour.
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the first point of the line.
. Column1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the first point of the line.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; (Htuple .) double / Hlong
Row coordinate of the second point of the line.
. Column2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; (Htuple .) double / Hlong
Column coordinate of the second point of the line.
. DistanceMin (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Minimum distance between the line and the contour.
. DistanceMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double *
Maximum distance between the line and the contour.
Result
distance_lc returns H_MSG_TRUE.
Parallelization Information
distance_lc is reentrant and processed without parallelization.
Alternatives
distance_pc, distance_sc, distance_cc, distance_cc_min
See also
distance_lr, distance_pr, distance_sr
Module
Foundation
dev_close_window ()
read_image (Image, ’fabrik’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
5000, 100000000)
dev_clear_window ()
dev_set_color (’black’)
dev_display (SelectedRegions)
dev_set_color (’red’)
Row1 := 100
Row2 := 400
for Col := 50 to 400 by 4
disp_line (WindowHandle, Row1, Col+100, Row2, Col)
distance_lr (SelectedRegions, Row1, Col+100, Row2, Col,
DistanceMin, DistanceMax)
endfor
Result
distance_lr returns H_MSG_TRUE.
Parallelization Information
distance_lr is reentrant and processed without parallelization.
Alternatives
distance_lc, distance_pr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
double row,column,row1,column1,row2,column2,distance;
draw_point(WindowHandle,&row,&column);
draw_line(WindowHandle,&row1,&column1,&row2,&column2);
distance_pl(row,column,row1,column1,row2,column2,&distance);
Result
distance_pl returns H_MSG_TRUE.
Parallelization Information
distance_pl is reentrant and processed without parallelization.
Alternatives
distance_ps
See also
distance_pp, distance_pr
Module
Foundation
double row1,column1,row2,column2,distance;
draw_point(WindowHandle,&row1,&column1);
draw_point(WindowHandle,&row2,&column2);
distance_pp(row1,column1,row2,column2,&distance);
Result
distance_pp returns H_MSG_TRUE.
Parallelization Information
distance_pp is reentrant and processed without parallelization.
Alternatives
distance_ps
See also
distance_pl, distance_pr
Module
Foundation
dev_close_window ()
read_image (Image, ’mreut’)
dev_open_window (0, 0, 512, 512, ’white’, WindowHandle)
dev_set_color (’black’)
threshold (Image, Region, 180, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, ’area’, ’and’,
10000, 100000000)
Row1 := 255
Column1 := 255
dev_clear_window ()
dev_display (SelectedRegions)
dev_set_color (’red’)
for i := 1 to 360 by 1
Row2 := 255 + sin(rad(i)) * 200
Column2 := 255 + cos(rad(i)) * 200
disp_line (WindowHandle, Row1, Column1, Row2, Column2)
distance_pr (SelectedRegions, Row2, Column2,
DistanceMin, DistanceMax)
endfor
Result
distance_pr returns H_MSG_TRUE.
Parallelization Information
distance_pr is reentrant and processed without parallelization.
Alternatives
distance_pc, distance_lr, distance_sr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
double row,column,row1,column1,row2,column2;
double distance_min,distance_max;
distance_ps(row,column,row1,column1,row2,column2,
&distance_min,&distance_max);
Result
distance_ps returns H_MSG_TRUE.
Parallelization Information
distance_ps is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_pp, distance_pr
Module
Foundation
NumberIterations * 2 - 1.
The mask ’h’ has the effect that precisely the maximum metric is calculated.
Attention
Both parameters must contain the same number of regions. The regions must not be empty.
Parameter
Parameter
create_tuple(&RowA1, 1);
set_i(RowA1, 8, 0);
create_tuple(&ColumnA1, 1);
set_i(ColumnA1, 7, 0);
create_tuple(&RowA2, 1);
set_i(RowA2, 15, 0);
create_tuple(&ColumnA2, 1);
set_i(ColumnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_distance_sl(RowA1,ColumnA1,RowA2,ColumnA2,RowB1,ColumnB1,RowB2,ColumnB2,
&distance_min,&distance_max);
aa_min = get_d(distance_min,0);
aa_max = get_d(distance_max,0);
Result
distance_sl returns H_MSG_TRUE.
Parallelization Information
distance_sl is reentrant and processed without parallelization.
Alternatives
distance_pl
See also
distance_ps, distance_pp
Module
Foundation
Attention
To speed up distance_sr, holes are ignored.
Parameter
Result
distance_sr returns H_MSG_TRUE.
Parallelization Information
distance_sr is reentrant and processed without parallelization.
Alternatives
distance_sc, distance_lr, distance_pr, diameter_region
See also
hamming_distance, select_region_point, test_region_point, smallest_rectangle2
Module
Foundation
Parameter
create_tuple(&RowA1, 1);
set_i(RowA1, 8, 0);
create_tuple(&ColumnA1, 1);
set_i(ColumnA1, 7, 0);
create_tuple(&RowA2, 1);
set_i(RowA2, 15, 0);
create_tuple(&ColumnA2, 1);
set_i(ColumnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_distance_ss(RowA1,ColumnA1,RowA2,ColumnA2,RowB1,ColumnB1,RowB2,ColumnB2,
&distance_min,&distance_max);
aa_min = get_d(distance_min,0);
aa_max = get_d(distance_max,0);
Result
distance_ss returns H_MSG_TRUE.
Parallelization Information
distance_ss is reentrant and processed without parallelization.
Alternatives
distance_pp
See also
distance_pl, distance_ps
Module
Foundation
draw_ellipse(WindowHandle,Row,Column,Phi,Radius1,Radius2)
get_points_ellipse([0,3.14],Row,Column,Phi,Radius1,Radius2,RowPoint,ColPoint)
Result
get_points_ellipse returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
is raised.
Parallelization Information
get_points_ellipse is reentrant and processed without parallelization.
Possible Predecessors
fit_ellipse_contour_xld, draw_ellipse, gen_ellipse_contour_xld
See also
gen_ellipse_contour_xld
Module
Foundation
/* Intersection of line A through (8,7) and (15,11) with
   line B through (2,4) and (6,10) */
Htuple rowA1, columnA1, rowA2, columnA2;
Htuple RowB1, ColumnB1, RowB2, ColumnB2;
Htuple row_i, column_i, parallel;
double row_val, col_val;

create_tuple(&rowA1, 1);
set_i(rowA1, 8, 0);
create_tuple(&columnA1, 1);
set_i(columnA1, 7, 0);
create_tuple(&rowA2, 1);
set_i(rowA2, 15, 0);
create_tuple(&columnA2, 1);
set_i(columnA2, 11, 0);
create_tuple(&RowB1, 1);
set_i(RowB1, 2, 0);
create_tuple(&ColumnB1, 1);
set_i(ColumnB1, 4, 0);
create_tuple(&RowB2, 1);
set_i(RowB2, 6, 0);
create_tuple(&ColumnB2, 1);
set_i(ColumnB2, 10, 0);
T_intersection_ll(rowA1, columnA1, rowA2, columnA2, RowB1, ColumnB1, RowB2, ColumnB2,
                  &row_i, &column_i, &parallel);
row_val = get_d(row_i, 0);      /* row coordinate of the intersection point */
col_val = get_d(column_i, 0);   /* column coordinate of the intersection point */
Result
intersection_ll returns H_MSG_TRUE.
Parallelization Information
intersection_ll is reentrant and processed without parallelization.
Module
Foundation
projection_pl(row,column,row1,column1,row2,column2,
&row_proj,&col_proj);
Result
projection_pl returns H_MSG_TRUE.
Parallelization Information
projection_pl is reentrant and processed without parallelization.
Module
Foundation
15.10 Grid-Rectification
T_connect_grid_points ( const Hobject Image, Hobject *ConnectingLines,
const Htuple Row, const Htuple Col, const Htuple Sigma,
const Htuple MaxDist )
saddle_points_sub_pix and connect_grid_points can be prevented from detecting false grid points
and connecting lines.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. GridRegion (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; Hobject *
Output region containing the rectification grid.
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; double / Hlong
Minimum contrast.
Default Value : 8.0
Suggested values : MinContrast ∈ {2.0, 4.0, 8.0, 16.0, 32.0}
Restriction : MinContrast ≥ 0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; double / Hlong
Radius of the circular structuring element.
Default Value : 7.5
Suggested values : Radius ∈ {1.5, 2.5, 3.5, 4.5, 5.5, 7.5, 9.5, 12.5, 15.5, 19.5, 25.5, 33.5, 45.5, 60.5, 110.5}
Restriction : Radius ≥ 0.5
Example (Syntax: HDevelop)
Result
find_rectification_grid returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
find_rectification_grid is reentrant and processed without parallelization.
Possible Successors
dilation_circle, reduce_domain
Module
Calibration
Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified
image.
gen_arbitrary_distortion_map computes the mapping Map between an arbitrarily distorted image and
the rectified image. Assuming that the points (Row,Col) form a regular grid in the rectified image, each grid cell,
which is defined by the coordinates (Row,Col) of its four corners in the distorted image, is projected onto a square
of GridSpacing×GridSpacing pixels. The coordinates of the grid points must be passed line by line in Row
and Col. GridWidth is the width of the point grid in grid points. To compute the mapping Map, additionally
the width ImageWidth and height ImageHeight of the images to be rectified must be passed.
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
In contrast to gen_grid_rectification_map, gen_arbitrary_distortion_map is used when the coordinates (Row,Col) of the grid points in the distorted image are already known, or when the relevant part of the image consists of regular grid structures from which the coordinates can be derived.
Parameter
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
gen_grid_rectification_map calculates the mapping between the grid points (Row,Col), which have
been actually detected in the distorted image Image (typically using saddle_points_sub_pix), and the
corresponding grid points of the ideal regular point grid. First, all paths that lead from their initial point via ex-
actly four different connecting lines back to the initial point are assembled from the grid points (Row,Col) and
the connecting lines ConnectingLines (detected by connect_grid_points). If the input grid points (Row,Col) and connecting lines ConnectingLines are meaningful, one such ’mesh’ corresponds to exactly one grid cell in the rectification grid. Afterwards, the meshes are combined into the point grid.
According to the value of Rotation, the point grid is rotated by 0, 90, 180 or 270 degrees. Note that the point
grid does not necessarily have the correct orientation. When passing ’auto’ in Rotation, the point grid is ro-
tated such that the black circular mark in the rectification grid is positioned to the left of the white one (see also
create_rectification_grid). Finally, the mapping Map between the distorted image and the rectified
image is calculated by interpolation between the grid points. Each grid cell, for which the coordinates (Row,Col)
of all four corner points are known, is projected onto a square of GridSpacing × GridSpacing pixels.
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image, the
linearized coordinates of the pixel in the input image that is in the upper left position relative to the transformed co-
ordinates are stored. The four other channels contain the weights of the four neighboring pixels of the transformed
coordinates, which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates.
gen_grid_rectification_map additionally returns the calculated meshes as XLD contours in Meshes.
In contrast to gen_arbitrary_distortion_map, gen_grid_rectification_map and its predecessors are used when the coordinates (Row,Col) of the grid points in the distorted image are neither known nor can be derived from the image contents.
Attention
Each input XLD contour in ConnectingLines must own the global attribute ’bright_dark’, as described with connect_grid_points!
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. ConnectingLines (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject
Input contours.
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; Hobject * : int4 / uint2
Image containing the mapping data.
. Meshes (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld-array ; Hobject *
Output contours.
. GridSpacing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Distance of the grid points in the rectified image.
Restriction : GridSpacing > 0
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char * / Hlong
Rotation to be applied to the point grid.
Default Value : "auto"
List of values : Rotation ∈ {"auto", 0, 90, 180, 270}
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double
Row coordinates of the grid points.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double
Column coordinates of the grid points.
Restriction : number(Col) = number(Row)
Result
gen_grid_rectification_map returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception is raised.
Parallelization Information
gen_grid_rectification_map is reentrant and processed without parallelization.
Possible Predecessors
connect_grid_points
Possible Successors
map_image
See also
gen_arbitrary_distortion_map
Module
Calibration
15.11 Hough
Parameter
Compute the Hough transform for lines using local gradient direction.
The operator hough_line_trans_dir calculates the Hough transform for lines in those regions passed in
the domain of ImageDir. To do so, the angles and the lengths of the lines’ normal vectors are registered in the
parameter space (the so-called Hough or accumulator space).
In contrast to hough_line_trans, additionally the edge direction in ImageDir (e.g., returned by
sobel_dir or edges_image) is taken into account. This results in a more efficient computation and in a
reduction of the noise in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizon-
tal line (i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and
+10 degrees. The higher DirectionUncertainty is chosen, the higher the computation time will
be. For DirectionUncertainty = 180 hough_line_trans_dir shows the same behavior as
hough_line_trans, i.e., the edge direction is ignored. DirectionUncertainty should be chosen at
least as high as the step width of the edge direction stored in ImageDir. The minimum step width is 2 degrees
(defined by the image type ’direction’).
The result is stored in a newly generated UINT2-Image (HoughImage), where the x-axis (i.e., columns) repre-
sents the angle between the normal vector and the x-axis of the original image, and the y-axis (i.e., rows) represents
the distance of the line from the origin.
The angle ranges from -90 to 180 degrees and will be stored with a resolution of 1/AngleResolution, which
means that one pixel in x-direction is equivalent to 1/AngleResolution degrees and that the HoughImage
has a width of 270∗AngleResolution+1 pixels. The height of the HoughImage corresponds to the distance
between the lower right corner of the surrounding rectangle of the input region and the origin.
The local maxima in the result image are equivalent to the parameter values of the lines in the original image.
Parameter
Detect lines in edge images with the help of the Hough transform and return them in HNF.
The operator hough_lines allows the selection of linelike structures in a region, whereby it is not necessary
that the individual points of a line are connected. This process is based on the Hough transform. The lines are
returned in HNF, that is by the direction and length of their normal vector.
The parameter AngleResolution defines the degree of exactness concerning the determination of the angles.
It amounts to 1/AngleResolution degrees. The parameter Threshold determines the minimum number of points of the original region that must support a line hypothesis in order for the line to be included in the output. The parameters AngleGap and DistGap define a neighborhood of the points in the Hough image in order to determine the local maxima. The lines are returned in HNF.
Parameter
Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in
normal form.
The operator hough_lines_dir selects line-like structures in a region based on the Hough transform. The
individual points of a line can be unconnected. The region is given by the domain of ImageDir. The lines are
returned in Hessian normal form (HNF), that is by the direction and length of their normal vector.
In contrast to hough_lines, additionally the edge direction in ImageDir (e.g., returned by sobel_dir or
edges_image) is taken into account. This results in a more efficient computation and in a reduction of the noise
in the Hough space.
The parameter DirectionUncertainty describes how much the edge direction of the individual points
within a line is allowed to vary. For example, with DirectionUncertainty = 10 a horizontal line
(i.e., edge direction = 0 degrees) may contain points with an edge direction between -10 and +10 degrees. The higher DirectionUncertainty is chosen, the higher the computation time will be. For DirectionUncertainty = 180, hough_lines_dir shows the same behavior as hough_lines, i.e., the edge direction is ignored.
Select those lines from a set of lines (in HNF) which fit best into a region.
Lines which fit best into a region can be selected from a set of lines available in HNF with the help of the operator select_matching_lines; the region itself is also passed as a parameter (RegionIn). The width of the lines can be specified by the parameter LineWidth. The selected lines are returned in HNF and as regions (RegionLines).
The lines are selected iteratively in a loop: At first, the line showing the greatest overlap with the input region is selected from the set of input lines. This line is then added to the output set, and all points belonging to it are excluded from the overlap computation in the subsequent steps. The loop terminates when the maximum overlap between the region and the remaining lines falls below a threshold value (Thresh). The selected lines are returned as regions as well as in HNF.
Parameter
15.12 Image-Comparison
clear_all_variation_models ( )
T_clear_all_variation_models ( )
This mode is identical to compare_variation_model. For Mode = ’light’, Region contains all points that are too bright:
c(x, y) > t_u(x, y)
For Mode = ’dark’, Region contains all points that are too dark:
c(x, y) < t_l(x, y)
Finally, for Mode = ’light_dark’ two regions are returned in Region. The first region contains the result of Mode = ’light’, while the second region contains the result of Mode = ’dark’. The respective regions can be selected with select_obj.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Image of the object to be trained.
. Region (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; Hobject *
Region containing the points that differ substantially from the model.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Method used for comparing the variation model.
Default Value : "absolute"
Suggested values : Mode ∈ {"absolute", "light", "dark", "light_dark"}
Example (Syntax: HDevelop)
’false’)
compare_ext_variation_model (ImageTrans, RegionDiff, ModelID,
’light’)
disp_obj (RegionDiff, WindowHandle)
endif
endfor
clear_shape_model (TemplateID)
clear_variation_model (ModelID)
close_framegrabber (FGHandle)
Result
compare_ext_variation_model returns H_MSG_TRUE if all parameters are correct and
if the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_ext_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
select_obj, connection
Alternatives
compare_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching
Result
compare_variation_model returns H_MSG_TRUE if all parameters are correct and if
the internal threshold images have been generated with prepare_variation_model or
prepare_direct_variation_model.
Parallelization Information
compare_variation_model is reentrant and automatically parallelized (on tuple level, domain level).
Possible Predecessors
prepare_variation_model, prepare_direct_variation_model
Possible Successors
connection
Alternatives
compare_ext_variation_model, dyn_threshold
See also
get_thresh_images_variation_model
Module
Matching
The variation model consists of an ideal image of the object to which the images of the objects to be tested are
compared later on with compare_variation_model or compare_ext_variation_model, and an
image that represents the amount of gray value variation at every point of the object. The size of the images with
which the object model is trained and with which the model is compared later on is passed in Width and Height,
respectively. The image type of the images used for training and comparison is passed in Type.
The variation model is trained using multiple images of good objects. Therefore, it is essential that the training
images show the objects in the same position and rotation. If this cannot be guaranteed by external means, the pose
of the object can, for example, be determined by using matching (see find_shape_model). The image can
then be transformed to a reference pose with affine_trans_image.
The parameter Mode is used to determine how the image of the ideal object and the corresponding variation
image are computed. For Mode=’standard’, the ideal image of the object is computed as the mean of all training
images at the respective image positions. The corresponding variation image is computed as the standard deviation
of the training images at the respective image positions. This mode has the advantage that the variation model
can be trained iteratively, i.e., as soon as an image of a good object becomes available, it can be trained with
train_variation_model. The disadvantage of this mode is that great care must be taken to ensure that only
images of good objects are trained, because the mean and standard deviation are not robust against outliers, i.e., if
an image of a bad object is trained inadvertently, the accuracy of the ideal object image and that of the variation
image might be degraded.
If it cannot be avoided that the variation model is trained with some images of objects that can contain errors, Mode
can be set to ’robust’. In this mode, the image of the ideal object is computed as the median of all training images
at the respective image positions. The corresponding variation image is computed as a suitably scaled median
absolute deviation of the training images and the median image at the respective image positions. This mode has
the advantage that it is robust against outliers. It has the disadvantage that it cannot be trained iteratively, i.e., all
training images must be accumulated using concat_obj and be trained with train_variation_model
in a single call.
In some cases, it is impossible to acquire multiple training images. In this case, a useful variation image cannot
be trained from the single training image. To solve this problem, variations of the training image can be created
synthetically, e.g., by shifting the training image by ±1 pixel in the row and column directions or by using gray
value morphology (e.g., gray_erosion_shape and gray_dilation_shape), and then training the synthetically modified images. A different possibility to create the variation model from a single image is to create
the model with Mode=’direct’. In this case, the variation model can only be trained by specifying the ideal image
and the variation image directly with prepare_direct_variation_model. Since the variation typically
is large at the edges of the object, edge operators like sobel_amp, edges_image, or gray_range_rect
should be used to create the variation image.
Parameter
14 ∗ Width ∗ Height are required. For Mode = ’direct’ and after the training data has been cleared with
clear_train_data_variation_model, 2 ∗ Width ∗ Height bytes are required for Type = ’byte’ and
4 ∗ Width ∗ Height for the other image types.
Result
create_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
create_variation_model is processed completely exclusively without parallelization.
Possible Successors
train_variation_model, prepare_direct_variation_model
See also
prepare_variation_model, clear_variation_model,
clear_train_data_variation_model, find_shape_model, affine_trans_image
Module
Matching
Return the threshold images used for image comparison by a variation model.
get_thresh_images_variation_model returns the threshold images of the variation
model ModelID in MaxImage and MinImage. The threshold images must be computed
with prepare_variation_model or prepare_direct_variation_model before
they can be read out. The formula used for calculating the threshold images is described with
prepare_variation_model or prepare_direct_variation_model. The threshold images
are used in compare_variation_model and compare_ext_variation_model to detect too large
deviations of an image with respect to the model. As described with compare_variation_model and
compare_ext_variation_model, gray values outside the interval given by MinImage and MaxImage
are regarded as errors.
Parameter
the current image and the ideal image. AbsThreshold and VarThreshold each can contain one or two values.
If two values are specified, different thresholds can be determined for too bright and too dark pixels. In this mode,
the first value refers to too bright pixels, while the second value refers to too dark pixels. If one value is specified,
this value refers to both the too bright and the too dark pixels. Let i(x, y) be the ideal image RefImage, v(x, y) the variation image VarImage, a_u = AbsThreshold[0], a_l = AbsThreshold[1], b_u = VarThreshold[0], and b_l = VarThreshold[1] (or a_u = a_l = AbsThreshold and b_u = b_l = VarThreshold, respectively, if only one value is passed). Then the two threshold images t_u and t_l are computed as follows:
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)}
If the current image c(x, y) is compared to the variation model using compare_variation_model, the output region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) < t_l(x, y) ∨ c(x, y) > t_u(x, y)
In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model.
It should be noted that RefImage and VarImage are not stored as the ideal and variation images in the model
to save memory in the model.
Parameter
. RefImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Reference image of the object.
. VarImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; Hobject : byte / int2 / uint2
Variation image of the object.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; (Htuple .) Hlong
ID of the variation model.
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. VarThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0
Example (Syntax: HDevelop)
Result
prepare_direct_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
prepare_direct_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
sobel_amp, edges_image, gray_range_rect
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, write_variation_model
Alternatives
prepare_variation_model
See also
create_variation_model
Module
Matching
t_u(x, y) = i(x, y) + max{a_u, b_u · v(x, y)}
t_l(x, y) = i(x, y) − max{a_l, b_l · v(x, y)}
If the current image c(x, y) is compared to the variation model using compare_variation_model, the output region contains all points that differ substantially from the model, i.e., that fulfill the following condition:
c(x, y) < t_l(x, y) ∨ c(x, y) > t_u(x, y)
In compare_ext_variation_model, extended comparison modes are available, which return only too
bright errors, only too dark errors, or bright and dark errors as separate regions.
After the threshold images have been created they can be read out with
get_thresh_images_variation_model. Furthermore, the training data can be deleted with
clear_train_data_variation_model to save memory.
Parameter
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; (Htuple .) Hlong
ID of the variation model.
. AbsThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Absolute minimum threshold for the differences between the image and the variation model.
Default Value : 10
Suggested values : AbsThreshold ∈ {0, 5, 10, 15, 20, 30, 40, 50}
Restriction : AbsThreshold ≥ 0
. VarThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; (Htuple .) double / Hlong
Threshold for the differences based on the variation of the variation model.
Default Value : 2
Suggested values : VarThreshold ∈ {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}
Restriction : VarThreshold ≥ 0
Result
prepare_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
prepare_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
train_variation_model
Possible Successors
compare_variation_model, compare_ext_variation_model,
get_thresh_images_variation_model, clear_train_data_variation_model,
write_variation_model
Alternatives
prepare_direct_variation_model
See also
create_variation_model
Module
Matching
train_variation_model. The ideal image of the object is computed as the mean of all previous training
images and the images that are passed in Images. The corresponding variation image is computed as the standard
deviation of the training images and the images that are passed in Images.
If the variation model has been created using the mode ’robust’, the model cannot be trained iteratively, i.e., all
training images must be accumulated using concat_obj and be trained with train_variation_model
in a single call. If any images have been trained previously, the training information of the previous call is dis-
carded. The image of the ideal object is computed as the median of all training images passed in Images. The
corresponding variation image is computed as a suitably scaled median absolute deviation of the training images
and the median image.
Attention
At most 65535 training images can be trained.
Parameter
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte / int2 / uint2
Images of the object to be trained.
. ModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . variation_model ; Hlong
ID of the variation model.
Example (Syntax: HDevelop)
Result
train_variation_model returns H_MSG_TRUE if all parameters are correct.
Parallelization Information
train_variation_model is processed completely exclusively without parallelization.
Possible Predecessors
create_variation_model, find_shape_model, affine_trans_image, concat_obj
Possible Successors
prepare_variation_model
See also
prepare_variation_model, compare_variation_model, compare_ext_variation_model,
clear_variation_model
Module
Matching
15.13 Kalman-Filter
T_filter_kalman ( const Htuple Dimension, const Htuple Model,
const Htuple Measurement, const Htuple PredictionIn,
Htuple *PredictionOut, Htuple *Estimate )
Estimate the current state of a system with the help of Kalman filtering.
The operator filter_kalman returns an estimate of the current state (or also a prediction of a future state)
of a discrete, stochastically disturbed, linear system. In practice, Kalman filters are used successfully in image
processing in the analysis of image sequences (background identification, lane tracking with the help of line tracing
or region analysis, etc.). A short introduction concerning the theory of the Kalman filters will be followed by a
detailed description of the routine filter_kalman itself.
KALMAN FILTER: A discrete, stochastically disturbed, linear system is characterized by the following quantities:
• State x(t): Describes the current state of the system (speeds, temperatures,...).
• Parameter u(t): Inputs from outside into the system.
• Measurement y(t): Measurements gained by observing the system. They indicate the state of the system (or
at least parts of it).
The output function and the transition function are linear. Their application can therefore be written as a multipli-
cation with a matrix.
The transition function is described with the help of the transition matrix A(t) and the parameter matrix G(t); the output function is described by the measurement matrix C(t). Hereby A(t) characterizes the dependency of the new state on the old one, and G(t) indicates the dependency on the parameters. In practice it is rarely possible (or at least too time consuming) to describe a real system and its behaviour in a complete and exact way. Normally only a relatively small number of variables is used to simulate the behaviour of the system. This leads to an error, the so-called system error (also called system disturbance) v(t).
The output function, too, is usually not exact. Each measurement is faulty. The measurement errors will be called
w(t). Therefore the following system equations arise:
x(t + 1) = A(t)x(t) + G(t)u(t) + v(t)
y(t) = C(t)x(t) + w(t)
The system error v(t) and the measurement error w(t) are not known. As far as systems are concerned which
are interpreted with the help of the Kalman filter, these two errors are considered as Gaussian distributed random
vectors (hence the expression "stochastically disturbed systems"). Therefore the system can be calculated if the corresponding expected values for v(t) and w(t) as well as their covariance matrices are known.
The estimation of the state of the system is carried out in the same way as in the Gauss-Markov estimation.
However, the Kalman filter is a recursive algorithm which is based only on the current measurement y(t) and the
latest state estimate, which implicitly also includes the knowledge about earlier measurements.
A suitable estimate x0, which is interpreted as the expected value of a random variable for x(0), must be
provided for the initial value x(0). This variable should have an expected error of 0 and the covariance
matrix P0, which also has to be provided. At any time t the expected values of both disturbances v(t) and
w(t) should be 0 and their covariances should be Q(t) and R(t). x(t), v(t) and w(t) are usually assumed to be
uncorrelated (any kind of noise process can be modeled, but developing the necessary matrices is then
considerably more demanding for the user). The following conditions must be met by the estimates x̃(t):
• The estimates x̃(t) are linearly dependent on the actual value x(t) and on the measurement sequence
y(0), y(1), · · · , y(t).
• x̃(t) is unbiased, i.e. E x̃(t) = E x(t).
• The quality criterion for x̃(t) is minimal variance, i.e. the variance of the estimation error x(t) − x̃(t) shall
be as small as possible.
(K-III)  K(t) = P̂(t) C′(t) [C(t) P̂(t) C′(t) + R(t)]^(-1)
(K-IV)   x̃(t) = x̂(t) + K(t) (y(t) − C(t) x̂(t))
(K-V)    P̃(t) = P̂(t) − K(t) C(t) P̂(t)
(K-I)    x̂(t+1) = A(t) x̃(t) + G(t) u(t)
(K-II)   P̂(t+1) = A(t) P̃(t) A′(t) + Q(t)
Hereby P̃(t) is the covariance matrix of the estimation error, x̂(t) is the extrapolation or prediction value of the
state, P̂(t) is the covariance matrix of the prediction error x̂(t) − x(t), K(t) is the amplification matrix (the so-called
Kalman gain), and X′ denotes the transpose of a matrix X.
Please note that equation (K-I) also allows the prediction of the future state. Sometimes this is very
useful in image processing in order to determine "regions of interest" in the next image.
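For the scalar case (n = m = 1, no control input, uncorrelated noise) the recursion (K-III) to (K-II) can be written out in a few lines of plain C. This is only an illustration of the equations, not the HALCON routine; all names in the sketch are made up.

```c
/* One iteration of (K-III) to (K-II) for the scalar case (n = m = 1),
   without control input and with uncorrelated noise. a, c, q, r are the
   (scalar) system matrices A, C, Q, R; y is the current measurement. */
static void kalman_step(double a, double c, double q, double r, double y,
                        double *x_pred,  /* in: x^(t),  out: x^(t+1) */
                        double *p_pred,  /* in: P^(t),  out: P^(t+1) */
                        double *x_est,   /* out: estimate x~(t)      */
                        double *p_est)   /* out: variance P~(t)      */
{
    double k = (*p_pred * c) / (c * *p_pred * c + r);  /* (K-III) */
    *x_est  = *x_pred + k * (y - c * *x_pred);         /* (K-IV)  */
    *p_est  = *p_pred - k * c * *p_pred;               /* (K-V)   */
    *x_pred = a * *x_est;                              /* (K-I), u(t) = 0 */
    *p_pred = a * *p_est * a + q;                      /* (K-II)  */
}
```

With a = c = 1 and q = 0 the state is a constant disturbed only by measurement noise; repeated calls then shrink the gain k and drive the estimate towards the running mean of the measurements.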
As mentioned above, it is much more demanding to model such noise processes. If, for example, the system
noise and the measurement noise are correlated with the corresponding covariance matrix L, the equations for the
Kalman gain and the estimation-error covariance matrix have to be modified:
(K-III)  K(t) = [P̂(t) C′(t) + L(t)] [C(t) P̂(t) C′(t) + C(t) L(t) + L′(t) C′(t) + R(t)]^(-1)
(K-V)    P̃(t) = P̂(t) − K(t) [C(t) P̂(t) + L′(t)]
This means that the user has to establish the linear system equations (K-I) to (K-V) for the actual problem,
i.e. develop a mathematical model upon which the solution to the problem can be based. Statistical characteristics
describing the inaccuracies of the system, as well as the measurement errors which are to be expected, thereby
have to be estimated if they cannot be calculated exactly. Therefore the
following individual steps are necessary:
As mentioned above, the initialization of the system (point 7) requires an estimate x0 of the
state of the system at time 0 and the corresponding covariance matrix P0. If the exact initial state is not known,
it is recommended to set the components of the vector x0 to the average values of the corresponding ranges, and
to set high values for P0 (about the size of the squares of the ranges). After a few iterations (when the total number
of accumulated measurement values has exceeded the number of system variables), the values determined in this
way become usable.
If on the other hand the initial state is known exactly, all entries for P0 have to be set to 0, because P0 describes
the covariances of the error between the estimated value x0 and the actual value x(0).
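As a concrete, made-up example of this rule of thumb, the following sketch fills a vector with P0 in row-major order and x0 appended (length n × n + n) for a state whose components are only known to lie within given ranges. The dimension and the helper name are chosen for this sketch only.

```c
enum { N_STATE = 2 };    /* example: n = 2 state variables */

/* Build the lined-up vector (P_0 row-major, then x_0) for an initial
   state that is only known to lie within [range_lo[i], range_hi[i]]:
   x_0 is set to the range centers, P_0 to a diagonal matrix holding
   the squares of the range sizes, as suggested above. */
static void init_prediction(const double range_lo[N_STATE],
                            const double range_hi[N_STATE],
                            double prediction[N_STATE * N_STATE + N_STATE])
{
    int i, j;
    for (i = 0; i < N_STATE; i++)
        for (j = 0; j < N_STATE; j++) {
            double range = range_hi[i] - range_lo[i];
            prediction[i * N_STATE + j] = (i == j) ? range * range : 0.0;
        }
    for (i = 0; i < N_STATE; i++)    /* x_0: range centers */
        prediction[N_STATE * N_STATE + i] = 0.5 * (range_lo[i] + range_hi[i]);
}
```

If the initial state were known exactly, the first n × n entries would instead all be set to 0, as described above.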
THE FILTER ROUTINE:
A Kalman filter is dependent on a range of data which can be organized in four groups:
Model parameter: transition matrix A, control matrix G including the parameter u and the measurement matrix
C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L, and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Many systems can work without input "from outside", i.e. without G and u. Further, system errors and
measurement errors are normally not correlated (L is dropped).
Actually the data necessary for the routine will be set by the following parameters:
Dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. Dimension is a vector [n,m,p], where n indicates the number of state variables,
m the number of measurement values and p the number of controller members. For a system without
determining control (i.e. without influence "from outside") [n,m,0] has to be passed.
Model: This parameter includes the lined up matrices (vectors) A,C,Q,G,u and (if necessary) L having been stored
in row-major order. Model therefore is a vector of the length n × n + n × m + n × n + n × p + p[+n × m].
The last summand is dropped if system errors and measurement errors are not correlated, i.e. no value for L
is given.
Measurement: This parameter includes the matrix R which has been stored in row-major order, and the mea-
surement vector y lined up. Measurement therefore is a vector of the dimension m × m + m.
PredictionIn / PredictionOut: These two parameters include the matrix P̂ (the extrapolation-error co-
variance matrix) which has been stored in row-major order and the extrapolation vector x̂ lined up. This
means, they are vectors of the length n × n + n. PredictionIn therefore is an input parameter, which
must contain P̂ (t) and x̂(t) at the current time t. With PredictionOut the routine returns the correspond-
ing predictions P̂ (t + 1) and x̂(t + 1).
Estimate: With this parameter the routine returns the matrix P̃ (the estimation-error covariance matrix) which
has been stored in row-major order and the estimated state x̃ lined up. Estimate therefore is a vector of
the length n × n + n.
Please note that the covariance matrices (Q, R, P̂ , P̃ ) must of course be symmetric.
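The lengths of the lined-up vectors follow mechanically from Dimension = [n,m,p]. The helper below is a sketch for sanity checks, not part of the HALCON API; it simply collects the formulas given above:

```c
/* Expected lengths of the lined-up parameter vectors for a system with
   Dimension = [n,m,p]; has_l is nonzero if the correlation matrix L
   is present. The formulas mirror the layout rules given above. */
typedef struct {
    int model;        /* A, C, Q [, G, u] [, L]         */
    int measurement;  /* R and y                        */
    int prediction;   /* P^ and x^ (also PredictionOut) */
    int estimate;     /* P~ and x~                      */
} VecLengths;

static VecLengths kalman_vector_lengths(int n, int m, int p, int has_l)
{
    VecLengths len;
    len.model       = n * n + n * m + n * n + n * p + p + (has_l ? n * m : 0);
    len.measurement = m * m + m;
    len.prediction  = n * n + n;
    len.estimate    = n * n + n;
    return len;
}
```

For the default Dimension [3,1,0] without L this yields 21 values for Model, 2 for Measurement and 12 for PredictionIn, which matches the default values listed below.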
Parameter
. Dimension (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
The dimensions of the state vector, the measurement and the controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ Dimension ≤ 30
. Model (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The lined up matrices A, C, Q, possibly G and u, and if necessary L which have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ Model ≤ 10000.0
. Measurement (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix R stored in row-major order and the measurement vector y lined up.
Default Value : [1.2,1.0]
Typical range of values : 0.0 ≤ Measurement ≤ 10000.0
. PredictionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
Default Value : [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0,0.0]
Typical range of values : 0.0 ≤ PredictionIn ≤ 10000.0
. PredictionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P̂ (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x̂
lined up.
. Estimate (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P̃ (the estimation-error covariances) stored in row-major order and the estimated state x̃ lined up.
Example
/* Typical procedure: */
/* 1. Initialize the variables which describe the model, e.g. with */
read_kalman("kalman.init",&Dim,&Mod,&Meas,&Pred);
extract_features(Image1,Meas,&Meas1);
/* first Kalman filtering: */
filter_kalman(Dim,Mod,Meas1,Pred,&Pred1,&Est1);
use_est(Est1);
extract_next_features(Image2,Meas1,&Meas2);
filter_kalman(Dim,Mod,Meas2,Pred1,&Pred2,&Est2);
use_est(Est2);
extract_next_features(Image3,Meas2,&Meas3);
/* etc. */
Result
If the parameter values are correct, the operator filter_kalman returns the value H_MSG_TRUE. Otherwise
an exception handling will be raised.
Parallelization Information
filter_kalman is reentrant and processed without parallelization.
Possible Predecessors
read_kalman, sensor_kalman
Possible Successors
update_kalman
See also
read_kalman, update_kalman, sensor_kalman
References
W. Hartinger: "Entwurf eines anwendungsunabhängigen Kalman-Filters mit Untersuchungen im Bereich der
Bildfolgenanalyse"; Diplomarbeit; Technische Universität München, Institut für Informatik, Lehrstuhl Prof.
Radig; 1991.
R.E. Kalman: "A New Approach to Linear Filtering and Prediction Problems"; Transactions ASME, Ser. D: Jour-
nal of Basic Engineering; Vol. 82, pp. 34-45; 1960.
R.E. Kalman, P.L. Falb, M.A. Arbib: "Topics in Mathematical System Theory"; McGraw-Hill Book Company, New
York; 1969.
K.-P. Karmann, A. von Brandt: "Moving Object Recognition Using an Adaptive Background Memory"; Time-
Varying Image Processing and Moving Object Recognition 2 (ed.: V. Cappellini), Proc. of the 3rd International
Workshop, Florence, Italy, May 29-31, 1989; Elsevier, Amsterdam; 1990.
Module
Foundation
Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Estimate of the initial state of the system: state x0 and corresponding covariance matrix P0
Many systems do not need input "from outside", and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). The characteristics mentioned above can be
stored in an ASCII file and then read with the help of the operator read_kalman. This ASCII file must
have the following structure:
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
[ + matrix G + vector u ]
[ + matrix L ]
+ matrix R
[ + matrix P0 ]
[ + vector x0 ]
Dimension: This parameter includes the dimensions of the state vector, the measurement vector and the con-
troller vector. Dimension is a vector [n,m,p], where n indicates the number of state variables,
m the number of measurement values and p the number of controller members. For a system without
determining control (i.e. without influence "from outside"), Dimension = [n,m,0].
Model: This parameter includes the lined up matrices (vectors) A, C, Q, G, u and (if necessary) L having been
stored in row-major order. Model therefore is a vector of the length n×n+n×m+n×n+n×p+p[+n×m].
The last summand is dropped if system errors and measurement errors are not correlated, i.e. no value for L
is given.
Measurement: This parameter includes the matrix R which has been stored in row-major order.
Measurement therefore is a vector of the length m × m.
Prediction: This parameter includes the matrix P0 (the error covariance matrix of the initial state estimate)
and the initial state estimate x0 lined up. This means, it is a vector of the length n × n + n.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Description file for a Kalman filter.
Default Value : "kalman.init"
. Dimension (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The dimensions of the state vector, the measurement vector and the controller vector.
. Model (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The lined up matrices A, C, Q, possibly G and u, and if necessary L stored in row-major order.
. Measurement (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix R stored in row-major order.
. Prediction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix P0 (error covariance matrix of the initial state estimate) stored in row-major order and the initial
state estimate x0 lined up.
Example
Result
If the description file is readable and correct, the operator read_kalman returns the value H_MSG_TRUE.
Otherwise an exception handling will be raised.
Parallelization Information
read_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
update_kalman, filter_kalman, sensor_kalman
Module
Foundation
Model parameter: transition matrix A, control matrix G including the controller output u and the measurement
matrix C
Model stochastic: system-error covariance matrix Q, system-error - measurement-error covariance matrix L and
measurement-error covariance matrix R
Measurement vector: y
History of the system: extrapolation vector x̂ and extrapolation-error covariance matrix P̂
Many systems do not need input "from outside" and therefore G and u can be dropped. Further, system errors
and measurement errors are normally not correlated (L is dropped). Some of the characteristics mentioned above
may change dynamically (from one iteration to the next). The operator update_kalman serves to modify parts
of the system according to an update file (ASCII) with the following structure (see also read_kalman):
Dimension row
+ content row
+ matrix A
+ matrix C
+ matrix Q
+ matrix G + vector u
+ matrix L
+ matrix R
DimensionIn / DimensionOut: These parameters include the dimensions of the state vector, measurement
vector and controller vector and therefore are vectors [n,m,p], where n indicates the number of the state
variables, m the number of the measurement values and p the number of the controller members. n and m are
invariant for a given system, i.e. they must not differ from the corresponding values in the update file. For
a system without influence "from outside", p = 0.
ModelIn / ModelOut: These parameters include the lined up matrices (vectors) A, C, Q, G, u and if necessary
L which have been stored in row-major order. ModelIn / ModelOut therefore are vectors of the length
n × n + n × m + n × n + n × p + p[+n × m]. The last summand is dropped if system errors and measurement
errors are not correlated, i.e. no value has been set for L.
MeasurementIn / MeasurementOut: These parameters include the matrix R stored in row-major order, and
therefore are vectors of the dimension m × m.
Parameter
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; Htuple . const char *
Update file for a Kalman filter.
Default Value : "kalman.updt"
. DimensionIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong
The dimensions of the state vector, measurement vector and controller vector.
Default Value : [3,1,0]
Typical range of values : 0 ≤ DimensionIn ≤ 30
. ModelIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
Default Value : [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
Typical range of values : 0.0 ≤ ModelIn ≤ 10000.0
. MeasurementIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
The matrix R stored in row-major order.
Default Value : [1.2]
Typical range of values : 0.0 ≤ MeasurementIn ≤ 10000.0
. DimensionOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; Htuple . Hlong *
The dimensions of the state vector, measurement vector and controller vector.
. ModelOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The lined up matrices A,C,Q, possibly G and u, and if necessary L which all have been stored in row-major
order.
. MeasurementOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
The matrix R stored in row-major order.
Example
Parallelization Information
update_kalman is reentrant and processed without parallelization.
Possible Successors
filter_kalman
See also
read_kalman, filter_kalman, sensor_kalman
Module
Foundation
15.14 Measure
close_all_measures ( )
T_close_all_measures ( )
See also
close_all_measures
Module
1D Metrology
Parallelization Information
fuzzy_measure_pairing is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology
It should be kept in mind that fuzzy_measure_pairs ignores the domain of Image for efficiency reasons.
If certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
Result
If the parameter values are correct the operator fuzzy_measure_pairs returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
fuzzy_measure_pairs is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairing, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology
to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pos ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.4
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum edge amplitude.
Default Value : 30.0
Suggested values : AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Typical range of values : 1 ≤ AmpThresh ≤ 255 (lin)
Minimum Increment : 2
Recommended Increment : 0.5
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Minimum fuzzy value.
Default Value : 0.5
Suggested values : FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9}
Typical range of values : 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended Increment : 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Select light/dark or dark/light edges.
Default Value : "all"
List of values : Transition ∈ {"all", "positive", "negative"}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinate of the edge point.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinate of the edge point.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Edge amplitude of the edge (with sign).
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Fuzzy evaluation of the edges.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive edges.
Result
If the parameter values are correct the operator fuzzy_measure_pos returns the value H_MSG_TRUE. Oth-
erwise an exception handling is raised.
Parallelization Information
fuzzy_measure_pos is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, measure_pos
See also
fuzzy_measure_pairing, fuzzy_measure_pairs, measure_pairs
Module
1D Metrology
Parameter
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; double / Hlong
Row coordinate of the center of the rectangle.
Default Value : 50.0
Suggested values : Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Row ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; double / Hlong
Column coordinate of the center of the rectangle.
Default Value : 100.0
Suggested values : Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Typical range of values : 0.0 ≤ Column ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; double / Hlong
Angle of longitudinal axis of the rectangle to horizontal (radians).
Default Value : 0.0
Suggested values : Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Typical range of values : -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum Increment : 0.001
Recommended Increment : 0.1
Restriction : (−pi < Phi) ∧ (Phi ≤ pi)
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; double / Hlong
Half width of the rectangle.
Default Value : 200.0
Suggested values : Length1 ∈ {3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0}
Typical range of values : 0.0 ≤ Length1 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; double / Hlong
Half height of the rectangle.
Default Value : 100.0
Suggested values : Length2 ∈ {1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0}
Typical range of values : 0.0 ≤ Length2 ≤ 511.0 (lin)
Minimum Increment : 1.0
Recommended Increment : 10.0
Restriction : Length2 ≤ Length1
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; Hlong
Width of the image to be processed subsequently.
Default Value : 512
Suggested values : Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Typical range of values : 0 ≤ Width ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; Hlong
Height of the image to be processed subsequently.
Default Value : 512
Suggested values : Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Typical range of values : 0 ≤ Height ≤ 1024 (lin)
Minimum Increment : 1
Recommended Increment : 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; const char *
Type of interpolation to be used.
Default Value : "nearest_neighbor"
List of values : Interpolation ∈ {"nearest_neighbor", "bilinear", "bicubic"}
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Hlong *
Measure object handle.
Result
If the parameter values are correct the operator gen_measure_rectangle2 returns the value H_MSG_TRUE.
Otherwise an exception handling is raised.
Parallelization Information
gen_measure_rectangle2 is reentrant and processed without parallelization.
Possible Predecessors
draw_rectangle2
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
edges_sub_pix
See also
gen_measure_arc
Module
1D Metrology
It should be kept in mind that measure_pairs ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameter
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, fuzzy_measure_pairing
See also
measure_pos, fuzzy_measure_pos
Module
1D Metrology
coordinate frame of the rectangle) from the center of the rectangle. Since this involves some calculations which
can be used repeatedly in several projections, the operator gen_measure_rectangle2 is used to perform
these calculations only once, thus increasing the speed of measure_projection significantly. Since there
is a trade-off between accuracy and speed in the subpixel calculations of the gray values, different interpolation
schemes can be selected in gen_measure_rectangle2 (the interpolation only influences rectangles not
aligned with the image axes). The measure object generated with gen_measure_rectangle2 is passed in
MeasureHandle.
Attention
It should be kept in mind that measure_projection ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameter
Extracting points with a particular gray value along a rectangle or an annular arc.
measure_thresh extracts points for which the gray value within an one-dimensional gray value profile is equal
to the specified threshold Threshold. The gray value profile is projected onto the major axis of the measure
rectangle which is passed with the parameter MeasureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator results in RowThresh and ColumnThresh.
If the gray value profile intersects the threshold line several times, the parameter Select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last) and ’all’. In the last two cases
Distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:
For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image Image is in general sampled at subpixel positions.
Since this involves some calculations which can be used repeatedly in several projections, the operator
gen_measure_rectangle2 is used to perform these calculations only once in advance. Here, the measure
object MeasureHandle is generated and different interpolation schemes can be selected.
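For the special case of an axis-aligned measure rectangle, the profile creation described above reduces to plain column averaging. The following sketch is illustrative only (the function name and interface are made up, and the subpixel sampling used for rotated rectangles is omitted):

```c
/* Average the gray values of each pixel column inside an axis-aligned
   rectangle [row0..row1] x [col0..col1] of a byte image; profile must
   provide col1 - col0 + 1 entries. For rotated rectangles the image
   would instead have to be sampled at subpixel positions along the
   projection line segments. */
static void column_profile(const unsigned char *image, int width,
                           int row0, int row1, int col0, int col1,
                           double *profile)
{
    int r, c;
    for (c = col0; c <= col1; c++) {
        double sum = 0.0;
        for (r = row0; r <= row1; r++)
            sum += image[r * width + c];
        profile[c - col0] = sum / (row1 - row0 + 1);
    }
}
```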
Attention
measure_thresh only returns meaningful results if the assumptions that the edges are straight and perpendicu-
lar to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_thresh ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
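The thresholding step itself can be sketched on an already extracted profile: one plausible way to obtain subpixel positions where the profile equals Threshold is linear interpolation between neighboring samples. This illustrates the idea only and is not the HALCON implementation; all names are chosen for the sketch.

```c
/* Find the (subpixel) positions where a 1-D gray value profile crosses
   the given threshold, by linear interpolation between neighboring
   samples. Returns the number of crossings stored in positions
   (at most max_pos). Plateaus lying exactly on the threshold are not
   handled by this sketch. */
static int profile_thresh(const double *profile, int len, double threshold,
                          double *positions, int max_pos)
{
    int i, num = 0;
    for (i = 0; i + 1 < len && num < max_pos; i++) {
        double a = profile[i] - threshold;
        double b = profile[i + 1] - threshold;
        if ((a <= 0.0 && b > 0.0) || (a >= 0.0 && b < 0.0))
            positions[num++] = i + a / (a - b);  /* crossing in [i, i+1) */
    }
    return num;
}
```

Selecting ’first’, ’last’ or ’first_last’ then simply amounts to picking positions[0] and/or positions[num-1] from the result.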
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure_id ; Htuple . Hlong
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Sigma of Gaussian smoothing.
Default Value : 1.0
Suggested values : Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Typical range of values : 0.4 ≤ Sigma ≤ 100 (lin)
Minimum Increment : 0.01
Recommended Increment : 0.1
Restriction : Sigma ≥ 0.0
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . double
Threshold.
Default Value : 128.0
Typical range of values : 0 ≤ Threshold ≤ 255 (lin)
Minimum Increment : 1
Recommended Increment : 0.5
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Selection of points.
Default Value : "all"
List of values : Select ∈ {"all", "first", "last", "first_last"}
. RowThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; Htuple . double *
Row coordinates of points with threshold value.
. ColumnThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; Htuple . double *
Column coordinates of points with threshold value.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
Distance between consecutive points.
Result
If the parameter values are correct the operator measure_thresh returns the value H_MSG_TRUE. Otherwise,
an exception handling is raised.
Parallelization Information
measure_thresh is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
measure_pos, edges_sub_pix, measure_pairs
HALCON 8.0.2
1272 CHAPTER 15. TOOLS
Module
1D Metrology
• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of the
measure object, generated by gen_measure_arc or gen_measure_rectangle2. The reference
point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference point to the
middle or the end of the one-dimensional gray value profile instead. If the fuzzy position evaluation depends
on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’ sets the reference
point at the position of the first/last extracted edge. When extracting edge pairs the position of a pair is
referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point of
the measure object. The position of a pair is defined by the center point between both edges. The ob-
ject’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’posi-
tion_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs/
fuzzy_measure_pairing.
• ’size’ denotes a fuzzy set that evaluates the normed distance of the two edges of a pair in pixels. This set
is only used by fuzzy_measure_pairs/ fuzzy_measure_pairing. Specifying an upper bound
for the size by terminating the member function with a corresponding fuzzy value of 0.0 will speed up
fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible pairs need to be con-
sidered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by fuzzy_measure_pairs / fuzzy_measure_pairing.
A fuzzy member function is defined as a piecewise linear function by at least two pairs of values, sorted in an
ascending order by their x value. The x values represent the edge feature and must lie within the parameter space
of the set type, i.e., in the case of the ’contrast’ and ’gray’ features and, e.g., byte images, within the range
0.0 ≤ x ≤ 255.0. In the case of ’size’, x has to satisfy 0.0 ≤ x, whereas in the case of ’position’, x can be any real number. The
y values of the fuzzy function represent the weight of the corresponding feature value and have to satisfy the
range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined by the smallest and the greatest x value, the
y values of the interval borders are continued constantly. Such fuzzy member functions can be generated by
create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameter
Parallelization Information
set_fuzzy_measure is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs,
transform_funct_1d
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
Alternatives
set_fuzzy_measure_norm_pair
See also
reset_fuzzy_measure
Module
1D Metrology
• ’size’ denotes a fuzzy set that evaluates the normalized distance of the two edges of a pair in pixels:
x = d/s (x ≥ 0) .
Specifying an upper bound x_max for the size by terminating the member function with a corresponding
fuzzy value of 0.0 will speed up fuzzy_measure_pairs / fuzzy_measure_pairing because not
all possible pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size
difference by ’size_diff’, x = (s − d)/s (x ≤ 1), and as an absolute normalized size difference by
’size_abs_diff’, x = |s − d|/s (0 ≤ x ≤ 1).
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by gen_measure_arc or gen_measure_rectangle2:
x = p/s .
The reference point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference
point to the middle or the end of the one-dimensional gray value profile, instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both edges. The ob-
ject’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’position_first_pair’, ’posi-
tion_last_pair’, respectively. Contrary to ’position’, this set is only used by fuzzy_measure_pairs/
fuzzy_measure_pairing.
A normalized fuzzy member function is defined as a piecewise linear function by at least two pairs of values,
sorted in an ascending order by their x value. The y values of the fuzzy function represent the weight of the
corresponding feature value and must satisfy the range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined
by the smallest and the greatest x value, the y values of the interval borders are continued constantly. Such fuzzy
member functions can be generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
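For illustration, the three size normalizations above can be written out in plain C. Here d is the measured distance of the two edges of a pair and s is the normalization size; this is a sketch of the formulas only, not a HALCON call:

```c
#include <math.h>

/* Normalized size features of an edge pair: d is the measured
 * distance of the two edges, s the normalization size. */
double feat_size(double d, double s)          { return d / s; }           /* 'size'          */
double feat_size_diff(double d, double s)     { return (s - d) / s; }     /* 'size_diff'     */
double feat_size_abs_diff(double d, double s) { return fabs(s - d) / s; } /* 'size_abs_diff' */
```

For a pair of width 25 px and a normalization size of 20 px, the ’size’ feature is 1.25, ’size_diff’ is −0.25, and ’size_abs_diff’ is 0.25.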
Parameter
Parallelization Information
set_fuzzy_measure_norm_pair is reentrant and processed without parallelization.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs
Possible Successors
fuzzy_measure_pairs, fuzzy_measure_pairing
Alternatives
transform_funct_1d, set_fuzzy_measure
See also
reset_fuzzy_measure
Module
1D Metrology
15.15 OCV
close_all_ocvs ( )
T_close_all_ocvs ( )
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
close_ocv returns H_MSG_TRUE, if the handle is valid. Otherwise, an exception handling is raised.
Parallelization Information
close_ocv is processed completely exclusively without parallelization.
Possible Predecessors
read_ocv, create_ocv_proj
See also
close_ocr
Module
OCR/OCV
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
create_ocv_proj returns H_MSG_TRUE, if the parameters are correct. Otherwise, an exception handling is
raised.
Parallelization Information
create_ocv_proj is processed completely exclusively without parallelization.
Possible Successors
traind_ocv_proj, write_ocv, close_ocv
Alternatives
read_ocv
See also
create_ocr_class_box
Module
OCR/OCV
Parallelization Information
do_ocv_simple is reentrant and processed without parallelization.
Possible Predecessors
traind_ocr_class_box, trainf_ocr_class_box, read_ocv, threshold, connection,
select_shape
Possible Successors
close_ocv
See also
create_ocv_proj
Module
OCR/OCV
read_ocv("ocv_file",&ocv_handle);
for (i=0; i<1000; i++)
{
grab_image_async(&Image,fg_handle,-1);
reduce_domain(Image,ROI,&Pattern);
do_ocv_simple(Pattern,ocv_handle,"A",
"true","true","false","true",10,
&Quality);
}
close_ocv(ocv_handle);
Result
read_ocv returns H_MSG_TRUE, if the file is correct. Otherwise, an exception handling is raised.
Parallelization Information
read_ocv is processed completely exclusively without parallelization.
Possible Predecessors
write_ocv
Possible Successors
do_ocv_simple, close_ocv
See also
read_ocr
Module
OCR/OCV
create_ocv_proj("A",&ocv_handle);
draw_region(&ROI,window_handle);
reduce_domain(Image,ROI,&Sample);
traind_ocv_proj(Sample,ocv_handle,"A","single");
Result
traind_ocv_proj returns H_MSG_TRUE, if the handle and the training pattern(s) are correct. Otherwise, an
exception handling is raised.
Parallelization Information
traind_ocv_proj is processed completely exclusively without parallelization.
Possible Predecessors
write_ocr_trainf, create_ocv_proj, read_ocv, threshold, connection,
select_shape
Possible Successors
close_ocv
See also
traind_ocr_class_box
Module
OCR/OCV
15.16 Shape-from
Parameter
. MultiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; Hobject : byte
Multichannel gray image consisting of multiple focus levels.
. Depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte
Depth image.
. Confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image(-array) ; Hobject * : byte
Confidence of depth estimation.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Filter used to find sharp pixels.
Default Value : "highpass"
List of values : Filter ∈ {"highpass", "bandpass"}
. Selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; (Htuple .) const char *
Method used to find sharp pixels.
Default Value : "next_maximum"
List of values : Selection ∈ {"next_maximum", "local"}
Example (Syntax: C++)
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,"highpass","next_maximum");
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
threshold(Confidence,HighConfidence,10,255);
reduce_domain(SharpImage,HighConfidence,ConfidentSharp);
Parallelization Information
depth_from_focus is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
compose2, compose3, compose4, add_channels, read_image, read_sequence
Possible Successors
select_grayvalues_from_channels, mean_image, binomial_filter, gauss_image,
threshold
See also
count_channels
Module
3D Metrology
Parameter
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; Hobject : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; (Htuple .) double *
Angle of the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; (Htuple .) double *
Amount of light reflected by the surface.
Result
estimate_sl_al_zc always returns the value H_MSG_TRUE.
Parallelization Information
estimate_sl_al_zc is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology
Result
estimate_tilt_zc always returns the value H_MSG_TRUE.
Parallelization Information
estimate_tilt_zc is reentrant and automatically parallelized (on tuple level).
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, phot_stereo, shade_height_field
Module
3D Metrology
T_select_grayvalues_from_channels (
const Hobject MultichannelImage, const Hobject IndexImage,
Hobject *Selected )
compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,"highpass","next_maximum");
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiChannel,Smooth,SharpImage);
Parallelization Information
select_grayvalues_from_channels is reentrant and automatically parallelized (on tuple level, domain
level).
Possible Predecessors
depth_from_focus, mean_image
Possible Successors
disp_image
See also
count_channels
Module
Foundation
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_mod_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied with the step width after the call to sfs_mod_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_mod_lr can only handle
byte-images.
Parameter
Parallelization Information
sfs_orig_lr is reentrant and automatically parallelized (on tuple level).
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc
Possible Successors
shade_height_field
Module
3D Metrology
15.17 Stereo
T_binocular_calibration ( const Htuple NX, const Htuple NY,
const Htuple NZ, const Htuple NRow1, const Htuple NCol1,
const Htuple NRow2, const Htuple NCol2, const Htuple StartCamParam1,
const Htuple StartCamParam2, const Htuple NStartPose1,
const Htuple NStartPose2, const Htuple EstimateParams,
Htuple *CamParam1, Htuple *CamParam2, Htuple *NFinalPose1,
Htuple *NFinalPose2, Htuple *RelPose, Htuple *Errors )
In general, binocular calibration means the exact determination of the parameters that model the 3D reconstruction
of a 3D point from the corresponding images of this point in a binocular stereo system. This reconstruction
is specified by the internal parameters CamParam1 of camera 1 and CamParam2 of camera 2 describing the
underlying projective camera model, and the external parameters RelPose describing the relative pose of camera
system 2 in relation to camera system 1.
Thus, known 3D model points (with coordinates NX, NY, NZ) are projected in the image planes of both cameras
(camera 1 and camera 2) and the sum of the squared distances between these projections and the corresponding
measured image points (with coordinates NRow1, NCol1 for camera 1 and NRow2, NCol2 for camera 2) is mini-
mized. It should be noted that all these model points must be visible in both images. The projection uses the initial
values StartCamParam1 and StartCamParam2 of the internal parameters of camera 1 and camera 2 which
can be obtained from the camera data sheets. In addition, the initial guesses NStartPose1 and NStartPose2
of the poses of the 3D calibration model in relation to the camera coordinate systems (CCS) of camera 1 and cam-
era 2 are needed as well. These 3D transformation poses can be determined by the find_marks_and_pose
operator. Since this calibration algorithm simultaneously handles correspondences between measured image and
known model points from different image pairs, poses (NStartPose1,NStartPose2), and measured points
(NRow1,NCol1,NRow2, NCol2) must be passed concatenated in a corresponding order.
The input parameter EstimateParams is used to select the parameters to be estimated. Usually this param-
eter is set to ’all’, i.e., all external camera parameters (translation and rotation) and all internal camera param-
eters are determined. Otherwise, EstimateParams contains a tuple of strings indicating the combination
of parameters to estimate. For instance, if the interior camera parameters already have been determined (e.g.,
by previous calls to camera_calibration) it is often desired to determine only the relative pose of the
two cameras to each other (RelPose). In this case, EstimateParams can be set to ’pose_rel’. This has
the same effect as EstimateParams = [’pose1’,’pose2’]. The internal parameters can be subsumed by the
parameter values ’cam_param1’ and ’cam_param2’, as well. In addition, parameters can be excluded from
estimation by using the prefix ~. For example, the values [’pose1’, ’~transx1’] have the same effect as
[’alpha1’,’beta1’,’gamma1’,’transy1’,’transz1’], whereas [’all’,’~focus1’] determines all internal and external
parameters except the focus of camera 1. The prefix ~ can be used with all parameter values except ’all’.
The underlying camera model is explained in the description of the camera_calibration operator. It is
specified by the parameters [focus1, kappa1, sx1, sy1, cx1, cy1, image_width1, image_height1] of camera 1
returned in CamParam1 and [focus2, kappa2, sx2, sy2, cx2, cy2, image_width2, image_height2] of camera 2
returned in CamParam2 (with focus > 0). The external parameters [alpha_rel, beta_rel, gamma_rel, transx_rel,
transy_rel, transz_rel] are returned in RelPose and specify the 3D transformation of points of CCS 2 into CCS
1. Note that according to the description of poses at create_pose one parameter is appended to the pose tuple
at the last position to define the representation type of this pose.
According to camera_calibration the 3D transformation poses of the calibration model to the respective
CCS are returned in NFinalPose1 and NFinalPose2. These transformations are related to RelPose accord-
ing to the following equation (neglecting differences due to the balancing effects of the multi image calibration):
HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1,
where HomMat3D_* denotes the homogeneous transformation matrix of the respective pose and INV() inverts a
homogeneous matrix.
The computed average errors returned in Errors give an impression of the accuracy of the calibration. Using
the determined camera parameters, they denote the average Euclidean distance between the projected calibration
mark centers of the model and their measured image points.
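The pose relation above can be checked numerically with plain 4×4 homogeneous matrices. The row-major layout is an assumption of this sketch; it does not use HALCON's pose or matrix types:

```c
/* 4x4 homogeneous transformation matrices, row-major. */
void hom_mul(const double a[16], const double b[16], double r[16])
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            double s = 0.0;
            for (k = 0; k < 4; k++)
                s += a[4 * i + k] * b[4 * k + j];
            r[4 * i + j] = s;
        }
}

/* Inverse of a rigid transform [R | t]: [R^T | -R^T t]. */
void hom_rigid_inv(const double m[16], double r[16])
{
    int i, j;
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++)
            r[4 * i + j] = m[4 * j + i];         /* R^T           */
        r[4 * i + 3] = -(r[4 * i + 0] * m[3]     /* -R^T t        */
                       + r[4 * i + 1] * m[7]
                       + r[4 * i + 2] * m[11]);
    }
    r[12] = r[13] = r[14] = 0.0;
    r[15] = 1.0;
}
```

With these helpers, the relation reads: HomMat3D_NFinalPose2 = hom_rigid_inv(HomMat3D_RelPose) · HomMat3D_NFinalPose1 (up to the balancing effects mentioned above).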
Parameter
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all X-coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all Y-coordinates of the calibration marks (in meters).
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all Z-coordinates of the calibration marks (in meters).
. NRow1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NCol1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NRow2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Result
binocular_calibration returns H_MSG_TRUE if all parameter values are correct and the desired param-
eters have been determined by the minimization algorithm. If necessary, an exception handling is raised.
Parallelization Information
binocular_calibration is reentrant and processed without parallelization.
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, write_cam_par, pose_to_hom_mat3d, disp_caltab,
gen_binocular_rectification_map
See also
find_caltab, sim_caltab, read_cam_par, create_pose, convert_pose_type,
read_pose, hom_mat3d_to_pose, create_caltab, binocular_disparity,
binocular_distance
Module
3D Metrology
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = (1/N) Σ_{r′=r−m..r+m} Σ_{c′=c−n..c+n} g(r′, c′): mean value within the correlation window of height 2m + 1 and width 2n + 1.
It should be noted that the quality of correlation decreases with rising S in the methods ’sad’ and ’ssd’ (the best
quality value is 0), but increases in the method ’ncc’ (the best quality value is 1.0).
The size of the correlation window, given by 2m + 1 and 2n + 1, has to be odd and is passed in MaskWidth
and MaskHeight. The search space is confined by the minimum and maximum disparity values MinDisparity
and MaxDisparity. Due to pixel values not being defined beyond the image border, the resulting
domain of Disparity and Score is not set along the image border within a margin of height (MaskHeight−1)/2
at the top and bottom border and of width (MaskWidth−1)/2 at the left and right border. For the same reason,
the maximum disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in TextureThresh. This threshold is applied
on both input images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and
defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting
Filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a
concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_disparity is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
TextureThresh and ScoreThresh are applied on every level and the returned domain of the Disparity
and Score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous, the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose
important texture information, which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter SubDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
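As an illustration of the matching scheme, a minimal single-pixel winner-takes-all SAD search over a disparity range might look as follows in plain C. This is a sketch of the principle only; the operator additionally uses pyramids, the texture threshold, left-right checking, and subpixel refinement:

```c
#include <float.h>
#include <stdlib.h>

/* SAD score of a (2m+1) x (2n+1) window (m: row half-size, n: column
 * half-size) between two rectified 8-bit images of width w (row-major),
 * comparing the window around (r, c) in image 1 with the window around
 * (r, c + d) in image 2. The caller must ensure that both windows lie
 * completely inside the images. */
double sad_score(const unsigned char *img1, const unsigned char *img2,
                 int w, int r, int c, int d, int m, int n)
{
    double s = 0.0;
    int dr, dc;
    for (dr = -m; dr <= m; dr++)
        for (dc = -n; dc <= n; dc++)
            s += abs((int)img1[(r + dr) * w + c + dc]
                   - (int)img2[(r + dr) * w + c + dc + d]);
    return s;
}

/* Winner-takes-all search over the disparity range [dmin, dmax].
 * A best score above score_thresh (the idea behind ScoreThresh for
 * 'sad') rejects the match; -1 then marks the pixel as undefined. */
int best_disparity(const unsigned char *img1, const unsigned char *img2,
                   int w, int r, int c, int dmin, int dmax,
                   int m, int n, double score_thresh)
{
    int d, best = -1;
    double best_s = DBL_MAX;
    for (d = dmin; d <= dmax; d++) {
        double s = sad_score(img1, img2, w, r, c, d, m, n);
        if (s < best_s) { best_s = s; best = d; }
    }
    return (best_s <= score_thresh) ? best : -1;
}
```

A real implementation would also apply TextureThresh, run the search for every pixel of the domain, and optionally verify matches by a reverse search, as Filter = ’left_right_check’ does.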
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par ('cam_left.dat', CamParam1)
read_cam_par ('cam_right.dat', CamParam2)
read_pose ('relpos.dat', RelPose)
Result
binocular_disparity returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
binocular_disparity is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance
Alternatives
binocular_distance
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = (1/N) Σ_{r′=r−m..r+m} Σ_{c′=c−n..c+n} g(r′, c′): mean value within the correlation window of height 2m + 1 and width 2n + 1.
It should be noted that the quality of correlation decreases with rising S in the methods ’sad’ and ’ssd’ (the best
quality value is 0), but increases in the method ’ncc’ (the best quality value is 1.0).
The size of the correlation window has to be odd and is passed in MaskWidth and MaskHeight. The
search space is confined by the minimum and maximum disparity values MinDisparity and MaxDisparity.
Due to pixel values not being defined beyond the image border, the resulting domain of Distance and Score is
generally not set along the image border within a margin of height MaskHeight/2 at the top and bottom border
and of width MaskWidth/2 at the left and right border. For the same reason, the maximum disparity range is
reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in TextureThresh. This threshold is applied on both input
images Image1 and Image2. In addition, ScoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting Filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_distance is determined by
NumLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. TextureThresh and
ScoreThresh are applied on every level and the returned domain of the Distance and Score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are the more
advantageous, the more the distance image can be segmented into regions of homogeneous distance values and the
bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose important texture
information, which can result in deficient distance values.
Finally, the value ’interpolation’ for parameter SubDistance increases the refinement and accuracy of the dis-
tance values. It is switched off by setting the parameter to ’none’.
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par ('cam_left.dat', CamParam1)
read_cam_par ('cam_right.dat', CamParam2)
read_pose ('relpose.dat', RelPose)
Result
binocular_distance returns H_MSG_TRUE if all parameter values are correct. If necessary, an exception
handling is raised.
Parallelization Information
binocular_distance is reentrant and automatically parallelized (on domain level).
Possible Predecessors
map_image
Possible Successors
threshold
Alternatives
binocular_disparity
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
distance_to_disparity, disparity_to_distance
Module
3D Metrology
Transform a disparity value into a distance value in a rectified binocular stereo system.
disparity_to_distance transforms a disparity value into a distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of the projective camera 1 and CamParamRect2 of the projective camera 2, and the external
parameters RelPoseRect. The latter specifies the relative pose of both cameras to each other by defining a point
transformation from rectified camera system 2 to rectified camera system 1. These parameters can be obtained from
the operators binocular_calibration and gen_binocular_rectification_map. The disparity
value Disparity is defined by the column difference of the image coordinates of two corresponding points
on an epipolar line according to the equation d = c2 − c1 (see also binocular_disparity). This value
characterizes a set of 3D object points at an equal distance from a plane parallel to the rectified image plane of
the stereo system. The distance to the plane z = 0, which is parallel to the rectified image plane and contains
the optical centers of both cameras, is returned in Distance.
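For a standard rectified pinhole geometry, this transformation reduces to z = f·b/d, with f the focal length in pixels, b the baseline, and d the disparity. The following plain-C sketch assumes this simplified model; HALCON's camera parameter tuples would first have to be converted to pixel units, and sign conventions may differ for a concrete setup:

```c
/* Rectified-stereo relations between the disparity d = c2 - c1 and
 * the distance z of a point from the plane through both optical
 * centers. f_pix: focal length in pixels, b: baseline. */
double disparity_to_z(double f_pix, double b, double d)
{
    return f_pix * b / d;
}

double z_to_disparity(double f_pix, double b, double z)
{
    return f_pix * b / z;
}
```

For example, with f = 800 px and b = 0.1 m, a disparity of 40 px corresponds to a distance of 2 m.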
Parameter
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 1.
Number of elements : 8
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Rectified internal camera parameters of the projective camera 2.
Number of elements : 8
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Point transformation from rectified camera 2 to rectified camera 1.
Number of elements : 7
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Disparity between the images of the world point.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; Htuple . double *
Distance of a world point to the rectified camera system.
Result
disparity_to_distance returns H_MSG_TRUE if all parameter values are correct. If necessary, an excep-
tion handling is raised.
Parallelization Information
disparity_to_distance is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map, map_image,
binocular_disparity
Alternatives
binocular_distance
See also
distance_to_disparity, disparity_to_point_3d
Module
3D Metrology
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (Row1,Col1), and its disparity in
a rectified binocular stereo system, disparity_to_point_3d computes the corresponding three dimensional
object point. Here, the disparity value Disparity defines the column difference of the image coordinates
of two corresponding features on an epipolar line according to the equation d = c2 − c1 . The rectified binocular
camera system is specified by its internal camera parameters CamParamRect1 of the projective camera 1 and
CamParamRect2 of the projective camera 2, and the external parameters RelPoseRect defining the pose of
the rectified camera 2 in relation to the rectified camera 1. These camera parameters can be obtained from the
operators binocular_calibration and gen_binocular_rectification_map. The 3D point is
returned in Cartesian coordinates (X,Y,Z) of the rectified camera system 1.
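Under a simplified rectified pinhole model — focal length f in pixels, principal point (cx, cy), baseline b, all assumptions of this sketch rather than HALCON's internal parameter tuple — the back-projection can be written as:

```c
/* Back-project a rectified image point (row, col) with disparity d
 * into the coordinate system of rectified camera 1. f_pix: focal
 * length in pixels, (cx, cy): principal point, b: baseline. */
void point_3d_from_disparity(double f_pix, double cx, double cy, double b,
                             double row, double col, double d,
                             double *X, double *Y, double *Z)
{
    *Z = f_pix * b / d;             /* distance, as in disparity_to_distance */
    *X = (col - cx) * (*Z) / f_pix; /* lateral offset  */
    *Y = (row - cy) * (*Z) / f_pix; /* vertical offset */
}
```

The first line is the same disparity-to-distance relation as above; the remaining two invert the pinhole projection for the rectified camera 1.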
Parameter
Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:
(col, row, 1)^T = CamMat · (X, Y, 1)^T .
Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices CamMat1, CamMat2 by the following formula:
FMatrix = CamMat2^(−T) · EMatrix · CamMat1^(−1) .
The transformation of the essential matrix to the fundamental matrix goes along with the propagation of the
covariance matrices CovEMat to CovFMat. If CovEMat is empty, CovFMat will be empty, too.
The conversion operator essential_to_fundamental_matrix is used especially for a subsequent visu-
alization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
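The standard relation F = CamMat2^(−T) · E · CamMat1^(−1) can be sketched with plain 3×3 matrix routines. Row-major layout is an assumption here; this illustrates the conversion, not HALCON's implementation:

```c
#include <math.h>

/* 3x3 row-major matrix helpers. */
void mat3_mul(const double a[9], const double b[9], double r[9])
{
    int i, j, k;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            double s = 0.0;
            for (k = 0; k < 3; k++)
                s += a[3 * i + k] * b[3 * k + j];
            r[3 * i + j] = s;
        }
}

void mat3_transpose(const double a[9], double r[9])
{
    int i, j;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            r[3 * i + j] = a[3 * j + i];
}

/* Inverse via the adjugate; returns 0 for a (near-)singular matrix. */
int mat3_inv(const double a[9], double r[9])
{
    double det = a[0] * (a[4] * a[8] - a[5] * a[7])
               - a[1] * (a[3] * a[8] - a[5] * a[6])
               + a[2] * (a[3] * a[7] - a[4] * a[6]);
    if (fabs(det) < 1e-12) return 0;
    r[0] = (a[4]*a[8] - a[5]*a[7]) / det;
    r[1] = (a[2]*a[7] - a[1]*a[8]) / det;
    r[2] = (a[1]*a[5] - a[2]*a[4]) / det;
    r[3] = (a[5]*a[6] - a[3]*a[8]) / det;
    r[4] = (a[0]*a[8] - a[2]*a[6]) / det;
    r[5] = (a[2]*a[3] - a[0]*a[5]) / det;
    r[6] = (a[3]*a[7] - a[4]*a[6]) / det;
    r[7] = (a[1]*a[6] - a[0]*a[7]) / det;
    r[8] = (a[0]*a[4] - a[1]*a[3]) / det;
    return 1;
}

/* F = K2^-T * E * K1^-1. */
int essential_to_fundamental(const double e[9], const double k1[9],
                             const double k2[9], double f[9])
{
    double k1i[9], k2i[9], k2it[9], t[9];
    if (!mat3_inv(k1, k1i) || !mat3_inv(k2, k2i)) return 0;
    mat3_transpose(k2i, k2it);
    mat3_mul(k2it, e, t);
    mat3_mul(t, k1i, f);
    return 1;
}
```

With identity camera matrices, F equals E, as expected from the formula.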
Parameter
. EMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Essential matrix.
. CovEMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
9 × 9 covariance matrix of the essential matrix.
Default Value : []
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the first camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double / Hlong
Camera matrix of the second camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
essential_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_essential_matrix
Alternatives
rel_pose_to_fundamental_matrix
Module
3D Metrology
problem, additional constraints are defined: the algorithm chooses the set of homographies that minimizes the
projective distortion induced by the homographies in both images. For the computation of this cost function the
dimensions of the images must be provided in Width1, Height1, Width2, Height2. After rectification the
fundamental matrix is always of the canonical form
    ( 0  0   0 )
    ( 0  0  -1 ) .
    ( 0  1   0 )
In the case of a known covariance matrix CovFMat of the fundamental matrix FMatrix, the covariance matrix
CovFMatRect of the above rectified fundamental matrix is calculated. This can help for an improved stereo
matching process because the covariance matrix defines in terms of probabilities the image domain where to find
a corresponding match.
Similar to the operator gen_binocular_rectification_map the output images Map1 and Map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter Mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter SubSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required Mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with map_image;
this will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images Map1 and Map2 are single channel
images if Mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
Mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-
transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.
2 3
4 5
The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
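The exact encoding of the weight channels is internal to HALCON; purely as an illustration, assuming the four uint2 weights are scaled to sum to 65536, evaluating one mapped pixel could look like this:

```c
#include <stdint.h>

/* Hypothetical sketch of evaluating one 'bilinear_map' pixel.  'ref' is
 * the linear coordinate of the upper-left reference pixel (number 2 in
 * the scheme above) from channel 1; w[0..3] are the uint2 weights of
 * channels 2-5, assumed here to sum to 65536. */
unsigned char map_bilinear_pixel(const unsigned char *image, int width,
                                 int32_t ref, const uint16_t w[4])
{
    uint32_t acc = (uint32_t)w[0] * image[ref]             /* pixel 2 */
                 + (uint32_t)w[1] * image[ref + 1]         /* pixel 3 */
                 + (uint32_t)w[2] * image[ref + width]     /* pixel 4 */
                 + (uint32_t)w[3] * image[ref + width + 1];/* pixel 5 */
    return (unsigned char)(acc >> 16);                     /* / 65536 */
}
```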
Based on the rectified images, the disparity can be computed using binocular_disparity. In contrast to stereo
with fully calibrated cameras, using the operator gen_binocular_rectification_map and its successors,
metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a
qualitative depth ordering of the scene.
Parameter
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that pairs of
conjugate epipolar lines become collinear and parallel to the horizontal image axes. The rectified epipolar images
can be thought of as acquired by a new stereo rig, obtained by rotating the original cameras. The camera centers of
this virtual rig are maintained whereas the image planes coincide, which means that the focal lengths are set equal
and the optical axes are parallel.
To achieve the transformation map for epipolar images gen_binocular_rectification_map requires the
internal camera parameters CamParam1 of the projective camera 1 and CamParam2 of the projective camera 2,
as well as the relative pose RelPose defining a point transformation from camera 2 to camera 1. These parameters
can be obtained, e.g., from the operator binocular_calibration.
The projection onto a common plane has many degrees of freedom which are implicitly restricted by selecting a
certain method in Method (currently only one method available):
• ’geometric’ specifies the orientation of the common image plane by the cross product of the base line and the
line of intersection of the original image planes. The new focal lengths are determined in such a way that the
old principal points have the same distance to the new common image plane.
In addition, gen_binocular_rectification_map returns the modified internal and external camera pa-
rameters of the rectified stereo rig. CamParamRect1 and CamParamRect2 contain the modified internal pa-
rameters of camera 1 and camera 2, respectively. The rotation of the rectified camera in relation to the original
camera is specified by CamPoseRect1 and CamPoseRect2, respectively. Finally, RelPoseRect returns
the modified relative pose of the rectified camera system 2 in relation to the rectified camera system 1 defining
a translation in x only. Generally, the transformations are defined in a way that the rectified camera 1 is left of
the rectified camera 2. This means that the optical center of camera 2 has a positive x coordinate of the rectified
coordinate system of camera 1.
Parameter
// ...
// read the internal and external stereo parameters
read_cam_par ('cam_left.dat', CamParam1)
read_cam_par ('cam_right.dat', CamParam2)
read_pose ('relpos.dat', RelPose)
while 1
  grab_image_async (Image1, FGHandle1, -1)
  map_image (Image1, Map1, ImageMapped1)
Result
gen_binocular_rectification_map returns H_MSG_TRUE if all parameter values are correct. If
necessary, an exception is raised.
Parallelization Information
gen_binocular_rectification_map is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration
Possible Successors
map_image
Alternatives
gen_image_to_world_plane_map
See also
map_image, gen_image_to_world_plane_map, contour_to_world_plane_xld,
image_points_to_world_plane
Module
3D Metrology
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (Row1,Col1) of camera 1 and
(Row2,Col2) of camera 2, intersect_lines_of_sight computes the 3D point of intersection of these
lines. The binocular camera system is specified by its internal camera parameters CamParam1 of the projective
camera 1 and CamParam2 of the projective camera 2, and the external parameters RelPose defining the pose
of the cameras by a point transformation from camera 2 to camera 1. These camera parameters can be obtained,
e.g., from the operator binocular_calibration, if the coordinates of the image points (Row1,Col1) and
(Row2,Col2) refer to the respective original image coordinate system. In case of rectified image coordinates
(e.g., obtained from epipolar images), the rectified camera parameters must be passed, as they are returned by the
operator gen_binocular_rectification_map. The ’point of intersection’ is defined by the point with
the shortest distance to both lines of sight. This point is returned in Cartesian coordinates (X,Y,Z) of camera system
1 and its distance to the lines of sight is passed in Dist.
Parameter
Result
intersect_lines_of_sight returns H_MSG_TRUE if all parameter values are correct. If necessary, an
exception handling is raised.
Parallelization Information
intersect_lines_of_sight is reentrant and processed without parallelization.
Possible Predecessors
binocular_calibration
See also
disparity_to_point_3d
Module
3D Metrology
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2 along with known internal camera parameters, specified by the camera matrices CamMat1
and CamMat2, match_essential_matrix_ransac automatically determines the geometry of the stereo
setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is repre-
sented by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator match_essential_matrix_ransac is designed to deal with a linear camera model. The
internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3×3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point and its (projective) 2D image coordinates (col,row,1) is:
    ( col )              ( X )                     ( f/sx   s    cx )
    ( row )  =  CamMat · ( Y )    where   CamMat = (  0    f/sy  cy ) .
    (  1  )              ( 1 )                     (  0     0     1 )
Note the column/row ordering in the point coordinates which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by f , sx , sy are scaling factors, s describes a skew factor
and (cx , cy ) indicates the principal point. Mainly, these are the elements known from the camera parameters as
used for example in camera_calibration. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. stationary_camera_self_calibration. Multiplied by the inverse of the
camera matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known
camera matrices the epipolar constraint is given by:
    ( X2 )T             ( X1 )
    ( Y2 )  · EMatrix · ( Y1 )  =  0 .
    (  1 )              (  1 )
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A thus found matching is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-
linear-transformation and gold-standard-algorithm respectively. Note that, in general, the found correspondences
differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_essential_matrix_ransac a special configuration of scene points and cameras
exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution
in the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that the output parameters EMatrix, CovEMat and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between
image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, match_fundamental_matrix_ransac automatically finds the correspondences
between the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and all corresponding points
have to fulfill the epipolar constraint, namely:
    ( Cols2 )T             ( Cols1 )
    ( Rows2 )  · FMatrix · ( Rows1 )  =  0 .
    (   1   )              (   1   )
Note the column/row ordering in the point coordinates: because the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation has to be compliant with the camera
coordinate system. So, (x,y) coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an initial
matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algo-
rithm is applied to find the fundamental matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A thus found matching is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the fun-
damental matrix FMatrix. It tries to find the matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. If left and right camera are identical and the relative orien-
tation between them is a pure translation then choose EstimationMethod equal to ’trans_normalized_dlt’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed camera
looking onto a moving conveyor belt. In order to get a unique solution in the correspondence problem the min-
imum required number of corresponding points is eight in the general case and three in the special, translational
case.
The fundamental matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as
well the covariance of the fundamental matrix CovFMat. Here, ’normalized_dlt’ and ’gold_standard’ stand for
direct-linear-transformation and gold-standard-algorithm respectively.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannel-image ; Hobject : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 1.
Restriction : (length(Rows1) ≥ 8) ∨ (length(Rows1) ≥ 3)
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 1.
Restriction : length(Cols1) = length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Row coordinates of characteristic points in image 2.
Restriction : (length(Rows2) ≥ 8) ∨ (length(Rows2) ≥ 3)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Column coordinates of characteristic points in image 2.
Restriction : length(Cols2) = length(Rows2)
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Gray value comparison metric.
Default Value : "ssd"
List of values : GrayMatchMethod ∈ {"ssd", "sad", "ncc"}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Size of gray value masks.
Default Value : 10
Typical range of values : 3 ≤ MaskSize ≤ 15
Restriction : MaskSize ≥ 1
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average row coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Average column coordinate shift of corresponding points.
Default Value : 0
Typical range of values : 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half height of matching search window.
Default Value : 200
Typical range of values : 50 ≤ RowTolerance ≤ 200
Restriction : RowTolerance ≥ 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; Htuple . Hlong
Half width of matching search window.
Default Value : 200
Typical range of values : 50 ≤ ColTolerance ≤ 200
Restriction : ColTolerance ≥ 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; Htuple . double / Hlong
Estimate of the relative orientation of the right image with respect to the left image.
Default Value : 0.0
Suggested values : Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; Htuple . Hlong / double
Threshold for gray value matching.
Default Value : 10
Suggested values : MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; Htuple . const char *
Algorithm for the computation of the fundamental matrix and for special camera orientations.
Default Value : "normalized_dlt"
List of values : EstimationMethod ∈ {"normalized_dlt", "gold_standard", "trans_normalized_dlt",
"trans_gold_standard"}
Compute the relative orientation between two cameras by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo
images Image1 and Image2 along with known internal camera parameters CamPar1 and CamPar2,
match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the corre-
spondences between the characteristic points. The geometry of the stereo setup is represented by the relative
pose RelPose and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the rel-
ative pose of camera 1 with respect to camera 2 (See create_pose for more information about poses and
their representations.). This is in accordance with the explicit calibration of a stereo setup using the operator
binocular_calibration. Now, let R, t be the rotation and translation of the relative pose. Then, the essen-
tial matrix E is defined as E = ([t]× R)T , where [t]× denotes the 3 × 3 skew-symmetric matrix realising the cross
product with the vector t. The pose can be determined from the epipolar constraint:
    ( X2 )T                 ( X1 )                        (  0   -tz   ty )
    ( Y2 )  · ([t]× R)T  ·  ( Y1 )  =  0    where  [t]× = (  tz   0   -tx ) .
    (  1 )                  (  1 )                        ( -ty   tx    0 )
Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can only be determined up to scale as well. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction
of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling
factor.
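The construction E = ([t]× R)^T from the rotation R and translation t of the relative pose can be sketched directly (row-major double[9]; illustrative function name):

```c
/* Build the essential matrix E = ([t]x * R)^T from a rotation matrix R
 * (row-major double[9]) and a translation vector t. */
void essential_from_rel_pose(const double r[9], const double t[3],
                             double e[9])
{
    /* skew-symmetric matrix realizing the cross product with t */
    double tx[9] = {
           0.0, -t[2],  t[1],
          t[2],   0.0, -t[0],
         -t[1],  t[0],   0.0
    };
    double m[9];
    for (int i = 0; i < 3; ++i)          /* m = [t]x * R */
        for (int j = 0; j < 3; ++j) {
            double s = 0.0;
            for (int k = 0; k < 3; ++k)
                s += tx[3*i+k] * r[3*k+j];
            m[3*i+j] = s;
        }
    for (int i = 0; i < 3; ++i)          /* e = m^T */
        for (int j = 0; j < 3; ++j)
            e[3*i+j] = m[3*j+i];
}
```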
The operator match_rel_pose_ransac is designed to deal with a camera model that includes lens dis-
tortions. This is in contrast to the operator match_essential_matrix_ransac, which handles
only straight-line-preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The
3D direction vectors (X1 , Y1 , 1) and (X2 , Y2 , 1) are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see camera_calibration).
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be selected.
If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’ means
the sum of absolute differences, and ’ncc’ is the normalized cross correlation. This metric is minimized (’ssd’,
’sad’) or maximized (’ncc’) over all possible point pairs. A thus found matching is only accepted if the value of
the metric is below the value of MatchThreshold (’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matchings can be limited. Only points within a
window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of the
search window in the second image with respect to the position of the current point in the first image is given by
RowMove and ColMove.
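The window test described above amounts to a simple bound check (illustrative sketch):

```c
#include <stdlib.h>

/* A candidate point (row2, col2) in image 2 is only considered for a
 * point (row1, col1) in image 1 if it lies within a window of
 * 2*RowTolerance x 2*ColTolerance centered at the position shifted by
 * (RowMove, ColMove). */
int in_search_window(int row1, int col1, int row2, int col2,
                     int row_move, int col_move,
                     int row_tolerance, int col_tolerance)
{
    return abs(row2 - (row1 + row_move)) <= row_tolerance &&
           abs(col2 - (col1 + col_move)) <= col_tolerance;
}
```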
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the rel-
ative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as well the
covariance of the relative pose CovRelPose. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-linear-
transformation and gold-standard-algorithm respectively. Note that, in general, the found correspondences differ
depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all
3D points lie in a single plane and are additionally all closer to one of the two cameras, the solution for the
essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by the
operator. This means that the output parameters RelPose, CovRelPose, and Error are of double length; the
values of the second solution are simply concatenated behind those of the first.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm and hence
to obtain reproducible results. If RandSeed is set to a positive number, the operator yields the same result on
every call with the same parameters, because the internally used random number generator is initialized with
RandSeed. If RandSeed = 0, the random number generator is initialized with the current time. In this case the
results may not be reproducible.
Parameter
Module
3D Metrology
rel_pose_to_fundamental_matrix
Compute the fundamental matrix from the relative orientation of two cameras.
Cameras including lens distortions can be modeled by the following set of parameters: the focal length f , two
scaling factors sx , sy , the coordinates of the principal point (cx , cy ) and the distortion coefficient κ. For a more
detailed description see the operator camera_calibration. Only cameras with a distortion coefficient equal
to zero project straight lines in the world onto straight lines in the image. Then, image projection is a linear
mapping and the camera, i.e. the set of internal parameters, can be described by the camera matrix CamMat:

             | f/sx    0    cx |
    CamMat = |   0   f/sy   cy |
             |   0     0     1 |
Going from a nonlinear model to a linear model is an approximation of the real underlying camera. For a variety of
camera lenses, especially lenses with long focal length, the error induced by this approximation can be neglected.
Following the formula E = ([t]× R)^T, the essential matrix E is derived from the translation t and the rotation
R of the relative pose RelPose (see also the operator vector_to_rel_pose). In the linearized framework the
fundamental matrix can be calculated from the relative pose and the camera matrices according to the formula
presented under essential_to_fundamental_matrix.
The transformation from a relative pose to a fundamental matrix goes along with the propagation of the covariance
matrices CovRelPose to CovFMat. If CovRelPose is empty CovFMat will be empty too.
The conversion operator rel_pose_to_fundamental_matrix is used in particular for a subsequent visualization
of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
Parameter
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; Htuple . double / Hlong
Relative orientation of the cameras (3D pose).
. CovRelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
6 × 6 covariance matrix of relative pose.
Default Value : []
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the first camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; Htuple . double / Hlong
Parameters of the second camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; Htuple . double *
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; Htuple . double *
9 × 9 covariance matrix of the fundamental matrix.
Parallelization Information
rel_pose_to_fundamental_matrix is reentrant and processed without parallelization.
Possible Predecessors
vector_to_rel_pose
Alternatives
essential_to_fundamental_matrix
See also
camera_calibration
Module
3D Metrology
vector_to_essential_matrix
Compute the essential matrix given image point correspondences and known camera matrices, and reconstruct 3D
points.
For a stereo configuration with known camera matrices the geometric relation between the two images is de-
fined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix
EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint:

    | X2 |T              | X1 |
    | Y2 |  · EMatrix ·  | Y1 | = 0
    | 1  |               | 1  |
The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is
in contrast to the operator vector_to_rel_pose, which encompasses lens distortions too. The internal camera
parameters are passed by the arguments CamMat1 and CamMat2, which are 3 × 3 upper triangular matrices
describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the camera
to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is:
    | col |             | X |                      | f/sx    s    cx |
    | row | = CamMat ·  | Y |    where    CamMat = |   0   f/sy   cy |
    |  1  |             | 1 |                      |   0     0     1 |
The focal length is denoted by f , sx , sy are scaling factors, s describes a skew factor and (cx , cy ) indicates
the principal point. Mainly, these are the elements known from the camera parameters as used for example in
camera_calibration. Alternatively, the elements of the camera matrix can be described in a different way,
see e.g. stationary_camera_self_calibration.
The point correspondences (Rows1,Cols1) and (Rows2,Cols2) are typically found by applying the operator
match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera matrices
results in the 3D direction vectors, which can then be inserted into the epipolar constraint.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here, ’normalized_dlt’
and ’gold_standard’ stand for the direct linear transformation and the gold standard algorithm, respectively. All
methods return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the
covariances of the 3D points in CovXYZ. Let n be the number of points; then the 3 × 3 covariance matrices are
concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the covariance of the
essential matrix in CovEMat.
If an optimal gold-standard algorithm is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated into the computation. They can be provided, for example, by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
passed. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists:
if all 3D points lie in a single plane and are additionally all closer to one of the two cameras, the solution for
the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by
the operator. This means that all output parameters are of double length; the values of the second solution are
simply concatenated behind those of the first.
Parameter