
DynamicStudio

User's Guide
This manual may not be copied, photocopied, translated, modified, or reduced to any electronic
medium or machine-readable form, in whole or in part, without the prior written consent of Dantec
Dynamics A/S.
Build no.: 4.15.0115. Publication no.: 9040U1859. Date: 24-04-2015. © by Dantec Dynamics A/S,
P.O. Box 121, Tonsbakken 18, DK–2740 Skovlunde, Denmark. All rights reserved.
All trademarks referred to in this document are registered by their owners.
Disposal of Electronic Equipment (WEEE)

Important
Electronic equipment should not be disposed of together with other waste. The user must dispose of electronic waste according to local rules and regulations.
Table of Contents
Table of Contents i
1 Introduction 13
1.1 The DynamicStudio Software 13
1.1.1 Assumptions 13
2 Related Documentation 14
3 Contacting Dantec Dynamics 15
4 Software License Agreement 17
5 Laser Safety 21
5.1 Danger of Laser Light 21
5.2 Precautions 21
5.3 Laser Safety Poster 22
6 Requirements 23
6.1 Minimum requirements 23
6.2 Recommended configuration 23
7 Note to Administrators 25
7.1 Restricted/Limited User 25
7.2 Windows Firewall 25
7.3 BCDEDIT.exe (Microsoft® Windows© 7) 25
8 Software Update Agreement (SUA) 27
9 Add-Ons and Options for DynamicStudio 29
10 Legacy Options 31
11 Getting Started 32
11.1 Warranties and Disclaimers 32
11.1.1 Camera Sensor Warranty 32
11.2 Installation 32
11.3 Using DynamicStudio 32
11.3.1 User Interface 32
11.3.2 First Time Use 40
11.3.3 Image Buffer resources 46
11.3.4 Normal Use 47
11.3.5 Database Access 53
11.3.6 Delete and Restore 56
11.3.7 Working with the Database 58
11.3.8 Calibration Images 62
11.3.9 AgentHost service 64
11.3.10 Known issues (Vista 64bit) 65
11.3.11 Database with Demo Data 67
12 Acquisition 72
12.1 Acquisition Manager 72
12.1.1 Opening the Acquisition Manager 72
12.1.2 Description of possible settings 72
12.1.3 Buttons in Acquisition Manager dialog 73
12.1.4 Working with rows in the Acquisition Manager 73
12.1.5 Naming convention 74

12.1.6 Generate Grid 75
12.1.7 Selecting Analysis sequence 75
12.1.8 Troubleshooting 77
12.2 Reporting acquisition settings 77
12.3 Storing and loading acquisition settings 77
12.4 Remote Controlled Acquisition 79
12.4.1 How to start Remote Controlled Acquisition 79
12.4.2 Information on the Remote Control Driver 81
12.4.3 Commands 82
12.5 Online Vectors 84
12.5.1 Adding Online Vectors to be performed during Acquisition 85
12.5.2 Parameters for Online Vectors 85
12.5.3 Adding the Focus Assist online analysis function to the setup 85
12.5.4 Understanding the resulting image from Online Focus Assist 86
12.5.5 Using Online Focus Assist to set up a stereo PIV system 89
12.5.6 Parameters for the device Online Focus Assist 93
12.6 Online Light-Field 93
12.6.1 Online data analysis 93
12.6.2 Acquiring data to disk 93
12.7 PIV Setup Assistant 93
13 Devices 96
13.1 Lens Mounts 96
13.1.1 Remote lens control 96
13.1.2 SpeedSense 90xx Canon lens mount 99
13.2 1-Axis Scheimpflug mounts 101
13.2.1 Instructions for mounting the FlowSense EO cameras 102
13.2.2 Instructions for mounting the SpeedSense 1040 cameras 103
13.3 Image Intensifiers 105
13.3.1 Custom Image Intensifier 106
13.3.2 Hamamatsu Image Intensifier 108
13.4 Analog Input 110
13.4.1 Analog Input 110
13.4.2 Installation 110
13.4.3 Connecting hardware for acquisition 110
13.4.4 Preparing software for acquisition 111
13.4.5 Acquiring data 115
13.5 Cameras 115
13.5.1 HiSense MkI 115
13.5.2 HiSense MkII Camera 116
13.5.3 HiSense NEO and Zyla Camera 116
13.5.4 FlowSense 2M Camera 120
13.5.5 FlowSense 4M Camera 122
13.5.6 FlowSense EO Camera series 124
13.5.7 HiSense 4M Camera 129
13.5.8 HiSense 11M Camera 129
13.5.9 HiSense 6XX cameras 130
13.5.10 Image buffer resources 131
13.5.11 Streaming to disk 132
13.5.12 NanoSense Camera Series 134

13.5.13 Photron Camera Series 137
13.5.14 SpeedSense 10XX series 146
13.5.15 SpeedSense 1040 camera 149
13.5.16 Parameters for the camera 150
13.5.17 SpeedSense 90XX and MXXX Cameras 153
13.5.18 VolumeSense 11M Camera 166
13.5.19 PCO Dimax cameras 168
13.6 Synchronizers 168
13.6.1 Scanning Light Sheet controller 168
13.6.2 Synchronization 172
13.6.3 USB Timing HUB 174
13.6.4 Start options 176
13.6.5 Mode options 177
13.6.6 Limitations 178
13.6.7 BNC 575 178
13.6.8 Timer Box 182
13.6.9 Installing the TimerBox 182
13.6.10 Connecting the timer box 183
13.6.11 Synchronization setup 184
13.6.12 Synchronizing two TimerBoxes 186
13.6.13 Additional settings 187
13.6.14 Cyclic Synchronizer and Linear Synchronizer 187
13.6.15 Using Two Synchronizers 202
13.6.16 Pulse Receiver 202
13.7 Illumination Systems 204
13.7.1 Pulsed Lasers 204
13.7.2 Dual Cavity pulsed lasers 205
13.7.3 DualPower Lasers 205
13.7.4 Time resolved lasers 207
13.7.5 Shutter Devices 208
13.7.6 Lee Lasers 208
13.7.7 New Wave Lasers 208
13.7.8 Microstrobe 208
13.7.9 Brilliant B laser 209
13.7.10 TDL+ laser 210
13.8 Traverse systems 212
13.8.1 Traverse Control 212
14 Analysis 226
14.1 Analysis Sequence 226
14.1.1 "Pickup" an Analysis Sequence 226
14.2 Analysis Sequence Library 226
14.3 Using Analysis Sequences 227
14.4 Predefined Analysis Sequences 227
14.4.1 Use of Analysis Sequences 228
14.5 Context menu 228
14.6 Distributed Analysis 228
14.6.1 Installing Analysis Agent software 228
14.6.2 Configuring Distributed Analysis. 228
14.6.3 Analyzing using Distributed Analysis 229

14.7 Distributed Database 231
14.7.1 Collecting remotely stored data 232
14.7.2 Troubleshooting Distributed Database 232
14.8 Custom Properties 232
14.8.1 Example 232
14.9 Timestamp 232
14.10 Selection (Input to Analysis) 233
14.10.1 Example 233
14.11 Fixed Selection (Input to Analysis) 233
15 Analysis methods 234
15.1 2D Least squares matching (LSM) 234
15.1.1 Background information 234
15.1.2 Usage 235
15.1.3 236
15.1.4 References 236
15.2 Adaptive Correlation 236
15.2.1 Interrogation areas 237
15.2.2 Window and Filter 238
15.2.3 Validation methods 238
15.2.4 Interrogation area offset 239
15.2.5 High Accuracy and Deforming Windows 240
15.3 Adaptive PIV 244
15.3.1 Interrogation areas 245
15.3.2 Windowing and Filtering 246
15.3.3 Validation 247
15.3.4 Adaptivity 247
15.3.5 Diagnostic 249
15.3.6 Reference Vector Map 251
15.4 Calibrate Analog inputs 251
15.4.1 How to set calibration values? 251
15.4.2 Calibration 251
15.5 When to use Average Correlation? 252
15.5.1 Using Average Correlation to look at average PIV signal conditions 252
15.5.2 Using an offset vector map to improve the results 253
15.5.3 Using preconditioning to minimize influence from out of focus particles 254
15.5.4 Schematic of Average Correlation 254
15.6 Average Filter 257
15.7 Bi-Orthogonal Decomposition (BOD Analysis) 258
15.7.1 Supported input 259
15.7.2 Handling the mean 259
15.7.3 Mode Count 259
15.7.4 Step by Step example 259
15.7.5 Input requirements 269
15.7.6 References 269
15.8 Calibration refinement 269
15.8.1 Required input 269
15.8.2 Refinement Area 270
15.8.3 Frame 271
15.8.4 Interrogation Area Size 271

15.8.5 Disparity Map 271
15.8.6 Dewarped Particle Images 272
15.8.7 Average Correlation Map 273
15.8.8 Interpreting correlation maps 274
15.8.9 Interpreting Disparity Vectors 274
15.8.10 Change of Coordinate System 276
15.9 Coherence Filter 277
15.10 Combustion LIF processing 279
15.10.1 Image analysis 279
15.10.2 Interpretation of the results 281
15.11 Cross-Correlation 281
15.11.1 Cross-correlating single images 281
15.12 Curve fit processing 281
15.12.1 Curve fitting procedure 282
15.12.2 Open data fit as numeric 283
15.12.3 Overview on non-linear fit models 284
15.13 Define Mask 284
15.13.1 Adding shapes to the mask 286
15.13.2 Deleting shapes 286
15.13.3 Selecting multiple shapes 287
15.13.4 Ordering shapes 287
15.13.5 Preview the final mask 287
15.13.6 Vector map Overlay 287
15.14 Diameter Statistics 287
15.15 Extract 290
15.15.1 Examples 290
15.16 Feature Tracking 292
15.17 FeaturePIV 296
15.18 FlexPIV processing 298
15.18.1 Defining grid points 300
15.18.2 Defining vector analysis 301
15.19 Grid Interpolation 301
15.20 Histogram 303
15.20.1 The Recipe dialog 304
15.21 Image Arithmetic 306
15.21.1 Image arithmetic with another image 307
15.21.2 Image arithmetic with a constant 307
15.21.3 Data clamping 308
15.22 Image Balancing 308
15.23 Image Dewarping 310
15.23.1 Dewarping image maps 310
15.23.2 Recipe dialog: Imaging Model Fit (camera calibration) 312
15.23.3 Recipe dialog: Re-sampling grid 312
15.23.4 Recipe dialog: Re-sampling scheme 313
15.23.5 Recipe dialog: Fill color outside image 313
15.23.6 Recipe dialog: Z-coordinate 313
15.24 Image Masking 313
15.25 Image & Volume Math 317
15.25.1 Inputs 318

15.25.2 Scalars 318
15.25.3 Functions 319
15.25.4 Operators 320
15.25.5 Error description 321
15.25.6 Output Selection 321
15.25.7 Example 322
15.25.8 Advanced processing 324
15.26 Image Mean 324
15.26.1 Application example 324
15.27 Image Min/Mean/Max 325
15.28 Image Processing Library (IPL) 329
15.28.1 Low-pass filters 329
15.28.2 High-pass filters 331
15.28.3 Morphology filters 333
15.28.4 Thresholding 335
15.28.5 Utility filters 335
15.28.6 Signal processing 340
15.28.7 Custom filter 340
15.29 Image Resampling 343
15.29.1 Re-sampling window 343
15.29.2 Re-sampled maps 344
15.30 Image Resolution 344
15.31 Image RMS 345
15.32 Image Stitching 346
15.33 Imaging model fit 347
15.33.1 Normal use and background 347
15.33.2 Target library & Custom targets 350
15.33.3 Acquiring calibration images 352
15.33.4 The recipe for Imaging Model Fit 354
15.33.5 Displaying imaging model parameters 357
15.33.6 Direct Linear Transform (DLT) 359
15.33.7 3'rd order XYZ polynomial imaging model fit 360
15.33.8 Pinhole camera model 361
15.33.9 Telecentric camera model 363
15.33.10 Adjusting parameters for finding the dot matrix target 367
15.33.11 Image Processing Parameters 368
15.33.12 Imaging model fit equals Camera calibration 371
15.34 Multi Camera Calibration 371
15.35 Imaging Model Fit Import 376
15.36 IPI Processing 377
15.36.1 Content 377
15.36.2 User Interface 377
15.36.3 Calibration 378
15.36.4 Recipe <IPI Processing> 380
15.36.5 General 380
15.36.6 Optical Setup 381
15.36.7 Velocity Setup 382
15.36.8 Advanced Settings 382
15.36.9 Region Of Interest (ROI)/ Validation 383

15.36.10 Window Setup 384
15.36.11 Filter 385
15.36.12 Laser setup 386
15.36.13 Processing and Presentation 387
15.36.14 Post processing 391
15.36.15 Example 392
15.36.16 Trouble shooting guide 393
15.37 IPI Spatial Histogram 400
15.37.1 Process 401
15.38 Least Squares Matching 401
15.38.1 Introduction 401
15.38.2 The Least Squares Matching Recipe 402
15.38.3 Results of the analysis 407
15.38.4 References 408
15.39 LIEF Processing 408
15.39.1 1. LIEF spray analysis 408
15.39.2 1.2 Spatial calibration of two cameras 410
15.39.3 1.3 Launching the LIEF Processing analysis method 410
15.39.4 1.4 Determining and correcting for the cross-talk 411
15.40 LIF Calibration 412
15.40.1 Custom properties of the calibration images 412
15.40.2 Performing the calibration 414
15.41 LIF Processing 417
15.42 1. Mie-LIF SMD - General 421
15.43 2. SMD Calibration 422
15.44 3. SMD Process 426
15.45 LII Calibration 429
15.46 LII Gas composition calibration 431
15.47 LII Processing 432
15.47.1 LII data processing by Region-of-Interest <ROI> methodology 433
15.47.2 LII data processing by Line-of-Sight (LoS) methodology 434
15.48 Light-Field Calibration 435
15.48.1 Image acquisition procedure for calibration 435
15.48.2 Calibration procedure 438
15.49 Light-Field Conversion 443
15.49.1 Procedure 443
15.49.2 Advanced Light-Field Settings 444
15.50 Light-Field LSM 445
15.50.1 Parameters: 446
15.51 Light-Field PTV 447
15.52 Line Integral Convolution (LIC) 448
15.52.1 References 450
15.53 Make Double Frame 450
15.54 Make Single Frame 451
15.55 Make Reverse Frame 451
15.56 MATLAB Link 452
15.56.1 Contents 452
15.56.2 Recipe for the MATLAB Link 452
15.56.3 Selecting data for transfer to MATLAB 455

15.56.4 DynamicStudio data in MATLAB's workspace 456
15.56.5 Parameter String 459
15.56.6 General 460
15.56.7 The Output variable 460
15.57 Troubleshooting, Tips & Tricks 462
15.58 Moving Average Validation 464
15.58.1 Using the <Moving-average validation > method 464
15.59 N-Sigma Validation 465
15.60 Octave Link 468
15.60.1 Contents 468
15.60.2 Recipe for the Octave Link 468
15.60.3 Selecting data for transfer to Octave 472
15.60.4 DynamicStudio data in Octave's workspace 472
15.60.5 The Output variable 472
15.61 Oscillating Pattern Decomposition 473
15.61.1 References 479
15.62 Particle Tracking Velocimetry (PTV) 479
15.63 Peak Validation 481
15.63.1 Interactive setting and finding good parameters 481
15.63.2 Example using the peak validation for phase separation 482
15.64 Probability Distribution 483
15.64.1 Define and apply a mask 483
15.64.2 Distribution processing 483
15.64.3 More about ROI data analysis 484
15.65 Profile plot 485
15.65.1 Detailed description 486
15.65.2 A handy shortcut 488
15.65.3 Obtaining the numerical values 489
15.65.4 Examples 489
15.66 Proper Orthogonal Decomposition (POD) 491
15.66.1 POD Snapshot 491
15.66.2 POD Projection 494
15.66.3 References 497
15.67 Range Validation 498
15.68 Rayleigh Thermometry 499
15.68.1 Rayleigh theory 499
15.68.2 Rayleigh Thermometry analysis in DynamicStudio 500
15.68.3 Species and Mixture Library 505
15.69 Reynolds Flux 507
15.69.1 Image re-sampling 507
15.69.2 Reynolds flux calculations 507
15.70 Region of Interest (ROI) Extract 507
15.70.1 Manipulating the ROI rectangle using the mouse. 508
15.70.2 Setting the ROI rectangle using the property dialog. 508
15.70.3 Using the image view. 509
15.71 Scalar Conversion 509
15.72 Scalar derivatives 510
15.72.1 Calculating the gradients of U, V and W in the x and y direction 511
15.72.2 Scalar derivatives that can be calculated 512

15.73 Scalar Map 515
15.73.1 Visualization methods… 516
15.74 Scalar statistics 518
15.75 Shadow Histogram 521
15.75.1 Variable to process 522
15.75.2 Process data from 523
15.75.3 Display 523
15.75.4 Scaling 523
15.75.5 Region 523
15.75.6 Histogram display properties 524
15.76 Shadow Sizer processing 525
15.76.1 Content 525
15.76.2 Field of view and calibration 526
15.76.3 Recipe <Shadow Sizing> 526
15.76.4 Data visualization 533
15.77 Shadow Spatial Histogram 534
15.77.1 Calculation 535
15.77.2 Number of cells 535
15.77.3 Process 535
15.78 Shadow Validation 535
15.79 Size-velocity correlation 536
15.80 Spectrum 538
15.80.1 Example: Spectrum in the wake of a cylinder 539
15.81 Spray Geometry 543
15.81.1 Introduction 543
15.81.2 Setup window 544
15.81.3 Spray nozzle properties 545
15.81.4 Region of interest 547
15.81.5 Cone Geometry - Plume geometry 547
15.81.6 Spray Pattern 549
15.81.7 Spray geometry processing - Temporal evolution 551
15.81.8 Trouble shooting 554
15.82 Stereo-PIV 555
15.82.1 Method and formulas 556
15.82.2 Input Required 558
15.82.3 Recipe for Stereo PIV processing 558
15.82.4 Displaying results 559
15.83 Streamlines 561
15.83.1 Example 562
15.84 Subpixel Analysis 563
15.85 PTV 564
15.85.1 Tomographic PTV 565
15.85.2 Time-resolved PTV 567
15.86 Universal Outlier Detection 568
15.87 UV Scatter plot Range Validation 571
15.88 Vector Arithmetic 573
15.89 Vector Dewarping 575
15.89.1 Setting the z-value and w-value 575
15.90 Vector Interpolation 577

15.91 Vector Masking 579
15.92 Vector Resampling 584
15.92.1 Automatic re-sampling 585
15.92.2 User-defined re-sampling 586
15.92.3 Edit data 587
15.93 Vector Rotation/Mirroring 588
15.94 Vector/Scalar subtraction 589
15.95 Vector Statistics 593
15.95.1 Visualization methods 595
15.95.2 Numeric data display 595
15.95.3 Formulas used 596
15.96 Vector Stitching 597
15.97 Volumetric Velocimetry 599
15.97.1 References 603
15.98 Voxel Reconstruction 604
15.98.1 Introduction 604
15.98.2 The Voxel Reconstruction Recipe 605
15.99 Waveform Calculation 610
15.99.1 Formulas 610
15.99.2 Built-in Functions 610
15.99.3 Built-in Operators 611
15.99.4 Naming Conventions 612
15.99.5 Syntax and Numerical Errors 612
15.99.6 Examples 612
15.100 Waveform Extract 613
15.100.1 Extracting Data 613
15.101 Waveform Statistics 614
15.101.1 Statistical Values 614
15.101.2 Example 615
15.102 Waveform Stitch 615
15.102.1 Stitching Data 615
15.102.2 Analog Stitching 617
15.103 Correlation option Window/Filter 617
15.103.1 Window functions 617
15.103.2 Filter functions 618
16 Data Exchange 620
16.1 Image Import 620
16.1.1 Formats 620
16.1.2 How to Import Images 620
16.2 Note 624
16.3 Image Export 624
16.3.1 Formats 624
16.3.2 File Format 625
16.3.3 Enhanced image quality 625
16.3.4 How to Export Data 625
16.4 Numeric Export 626
16.5 FlowManager Database Converter 629
16.5.1 Converting a FlowManager Database 629
16.5.2 Calibration images and scale factor 630

16.5.3 Scale factor 631
16.5.4 Example 1: database with calibration information 631
16.5.5 Example 2 : database with no calibration information. 632
17 Displays 633
17.1 General Display Interaction 633
17.2 Using the display from within an analysis method 633
17.2.1 Zoom 634
17.2.2 Pan 634
17.2.3 Magnifying glass 634
17.2.4 Color map 635
17.2.5 Adjusting the ellipse 635
17.2.6 Adjusting the polygon 636
17.2.7 Adjusting the rectangle 638
17.3 Correlation Map Display 639
17.3.1 Normalized Cross Correlation formula 640
17.4 Particle Density Probe 641
17.4.1 Working principle… 644
17.5 XY Display 644
17.5.1 Graphical user interface 645
17.5.2 Legend 645
17.5.3 Info Box 646
17.5.4 Zooming 647
17.5.5 Probe 647
17.5.6 Default setup 647
17.5.7 Display Options 648
17.5.8 Data Selection 648
17.5.9 Plot Setup 649
17.5.10 Axis Setup 650
17.5.11 Line Style 650
17.6 Vector Map Display 651
17.6.1 Vector Map display options 651
17.6.2 Examples of realizable displays 659
17.7 Scalar Map Display 661
17.7.1 More visualization methods… 661
17.8 3D Display 664
17.8.1 Voxels 664
17.8.2 Interacting with the voxel volume display 665
17.8.3 The display options for voxel volumes 666
17.8.4 Images 668
17.8.5 Vectors 669
17.8.6 Iso-surfaces 672
17.8.7 Contours 674
17.8.8 Stereo Rendering 675
17.8.9 Animation 675
17.8.10 Camera 675
17.8.11 Export 675
17.9 Color map and histogram 676
18 Dialogs 680
18.1 Field of View 680

18.1.1 Scaling of measurements to metric units. 680
18.2 Measure Scale Factor 681
18.2.1 Scaling of measurements to metric units. 681
18.3 Sort 683
18.3.1 Sorting by Timestamp 683
18.3.2 Sorting by Data Column Value 683
18.3.3 Sort by Property Value 684
18.4 Split 684
18.4.1 Split at Index 684
18.4.2 Split at Sort Property Value 684
18.4.3 Automatic 684
18.4.4 Custom 684
18.5 Merge 685

1 Introduction
1.1 The DynamicStudio Software
DynamicStudio is the main software package for image acquisition and analysis in areas such as PIV, LIF, LII, and Particle Sizing. It contains tools for configuration, acquisition, analysis, and post-processing of acquired data:

l The acquisition system includes auto-detection of devices, cable connection diagrams and supports distributed acquisition over networks.
l The innovative and secure ensemble database gives a simple intuitive display of large amounts of
data.
l The built-in presentation and analysis modules offer many possibilities and combinations for processing and displaying data.

Please read and accept our "Software License Agreement" (on page 17) before using this product!

We encourage you to sign up for a Software Update Agreement, so that you can download new versions
of the software.
Please do not hesitate to contact Dantec Dynamics if you run into issues with this product, and please
visit our homepage www.dantecdynamics.com for new information on products and events.

Contents of this Manual


This manual consists of sections presenting the system and software and how to use it.

l If you are new to DynamicStudio you might want to go to Normal Use to get an introduction to
DynamicStudio.
l If you are looking for help on a particular dialog, most dialogs in DynamicStudio will bring up help information for the dialog if you press F1.

On-line Help
The On-line Help is a context-sensitive Help system built into the DynamicStudio application software. It
provides you with a quick reference for procedures, keystroke sequences and commands that you will
need in order to run the Imaging System. The Help can be used within the Application Software.

1.1.1 Assumptions
It is assumed that you have a basic knowledge of measurement techniques in Fluid Mechanics and that you are familiar with the concepts of Imaging Analysis, PIV, LIF, LII, Particle Sizing etc.

It is also assumed that you are familiar with Windows Terminology.

2 Related Documentation
The following printed manuals, user guides and documents contain information that you might find helpful
as you use this online help file. Some of these manuals are only delivered for specific applications and systems.

l 2D PIV Reference Manual - 9040U1752 This manual describes the fundamental techniques of
PIV. Including seeding, light sources, cameras, the mathematics of PIV data processing and
using the software.
l Stereoscopic PIV Reference Manual - 9040U4115 This manual describes the fundamental techniques of Stereoscopic PIV. Including laser safety, principles in Stereoscopic PIV, calibrations,
system components, and using the software.
l Planar-LIF Software Installation & User's Guide - 9040U3652 This manual describes the fundamental techniques of liquid LIF. Including theory and practice with planar LIF, using the software for calibration, re-sampling and Reynolds fluxes, and the quality of planar LIF data.
l LIF Application Guide - 9040U3041 This manual describes the fundamental techniques of gas and
combustion LIF processes. Including how to handle gas LIF tracers, using the software, and the
quality of gas and combustion LIF data.
l Interferometric Particle Imaging (IPI) Reference Manual - 9040U1191 This manual describes the IPI particle sizing techniques. Including camera setup, particle light scattering and detection, calibration and post-processing using the software.
l A Practical Guide to Laser Safety and Risk Assessment (Safety Guide) - 9040U4031 This safety
guide includes safety instructions for working with lasers. Including risk assessment, personal
safety, creating safe environments, and a check list.
l Handling Fluorescence Dyes (Safety Guide) - 9040U3671 This safety guide includes instructions
for working with fluorescence dyes for planar LIF. Including interpretation of MSDS standards,
risks and safety phrases.
l OEM Documentation - A number of OEM-supplied manuals and guides are delivered together with the devices in your system. This includes cameras, lasers, frame grabbers, AD
boards, etc.

3 Contacting Dantec Dynamics
Please contact Dantec Dynamics' sales staff for information about these products. Feel free to comment
on the application and send ideas and suggestions to your local Dantec Dynamics representative, so we
can help improve your work with the Dantec Dynamics products. Also, visit our web site at http://www.dantecdynamics.com for support and information regarding hardware and software products. As always,
we thank you for your continued interest in our products. If you ever have any questions or comments,
please feel free to contact us.

Address Information
Dantec Dynamics A/S
Tonsbakken 16-18
P.O. Box 121
DK-2740 Skovlunde
Denmark

Telephone: +45 44 57 80 00
Fax: +45 44 57 80 01

For international contacts and sales representatives please visit: Contact Us.

4 Software License Agreement
This software end user license agreement ("License Agreement") is concluded between you (either an
individual or a corporate entity) and Dantec Dynamics A/S ("Dantec Dynamics"). Please read all terms
and conditions of this License Agreement before installing the Software. When you install the Software,
you agree to be bound by the terms of this License Agreement. If you cannot agree to the terms of this
License Agreement you may not install or use the software in any manner.

Grant of License
This License Agreement together with the Software package including any media, user's guide and
documentation and the invoice constitutes your proof of license and the right to exercise the rights herein
and must be retained by you.
One license permits you to use one installation at a time of the Dantec Dynamics software product supplied to you (the "Software") including documentation in written or electronic form and solely for your own
internal business purposes.
You may not rent, lease, sublicense or otherwise distribute, assign, transfer or make the Software available to any third party without the express consent of Dantec Dynamics except to the extent specifically
permitted by mandatory applicable law.

Updates
Updates, new releases, bug fixes, etc. of the Software which are supplied to you (if any), may be used
only in conjunction with versions of the Software legally acquired and licensed by you that you have
already installed, unless such update etc. replaces that former version in its entirety and such former version is destroyed.

Copyright
The Software (including text, illustrations and images incorporated into the Software) and all proprietary
rights therein are owned by Dantec Dynamics or Dantec Dynamics' suppliers, and are protected by the
Danish Copyright Act and applicable international law. You may not reverse assemble, decompile, or
otherwise modify the Software except to the extent specifically permitted by mandatory applicable law.
You are not entitled to copy the Software or any part thereof except as otherwise expressly set out above.
However you may make a copy of the Software solely for backup or archival purposes. You may not copy
the user's guide accompanying the Software, nor distribute copies of any user documentation provided in
“on-line” or electronic form, without Dantec Dynamics' prior written permission.

License and Maintenance Fees


You must pay any and all license and maintenance fees in accordance with the then-current payment
terms established by Dantec Dynamics.

Limited Warranty
You are obliged to examine and test the Software immediately upon your receipt thereof. Until 30 days
after delivery of the Software, Dantec Dynamics will deliver a new copy of the Software if the medium on
which the Software was supplied is not legible.
A defect in the Software shall be regarded as material only if it has a material effect on the proper functioning of the Software as a whole, or if it prevents operation of the Software in its entirety. If, within 90 days
after the delivery of the Software, it is established that there is a material defect in the Software, Dantec
Dynamics shall, at Dantec Dynamics' discretion, either deliver a new version of the Software without the
material defect, or remedy the defect free of charge or terminate this License Agreement and repay the
license fee received against the return of the Software. In any of these events the parties shall have no further claims against each other. Dantec Dynamics shall be entitled to remedy any defect by indicating procedures, methods or uses ("work-arounds") which result in the defect not having a significant effect on the
use of the Software.
Software is inherently complex and the possibility remains that the Software contains bugs, defects and
inexpediencies which are not covered by the warranty set out immediately above. Such bugs, defects and
inexpediencies etc. shall not constitute due ground for termination and shall not entitle you to any remedial
action including refund of fees or payment of damages or costs. Dantec Dynamics will endeavour to correct bugs, defects etc. in subsequent releases of the Software.
The Software is licensed "as is" and without any warranty, obligation to take remedial action or the like
thereof in the event of breach other than as stipulated above. It is therefore not warranted that the operation of the Software will be without interruptions, free of bugs or defects, or that bugs or defects can or
will be remedied.

Indemnification
Dantec Dynamics will indemnify you against any claim by an unaffiliated third party that the Software
infringes such unaffiliated third party's intellectual property rights and shall pay to you the amount awarded
to such unaffiliated third party in a final judgment (or settlement to which Dantec Dynamics has consented) always subject however to the limitations and exclusions set out in this paragraph.
You must notify Dantec Dynamics promptly in writing of any such claim and allow Dantec Dynamics to
take sole control over its defense. You must provide Dantec Dynamics with all reasonable assistance in
defending the claim.
Dantec Dynamics' obligation to indemnify you shall not apply to the extent that any claim comprised by
this paragraph is based in whole or in part on (i) any materials provided directly or indirectly by you; (ii) your
exploitation of the Software for other purposes than those expressly contemplated in this License Agreement; and/or (iii) combining of the Software with third party products. You shall reimburse Dantec Dynamics for any costs or damages that result from such actions.
If Dantec Dynamics receives information of an alleged infringement of third party intellectual property
rights or a final adverse judgment is passed by a competent court or a final settlement consented to by
Dantec Dynamics is reached regarding an intellectual property right infringement claim related to the Software, Dantec Dynamics may (but shall not be under any obligation to do so), either (i) procure for you the right to continue to use the Software as contemplated in this License Agreement; or (ii) modify the Software to make the Software non-infringing; (iii) replace the relevant portion of the Software with a non-infringing functional equivalent; or (iv) terminate with immediate effect your right to install and use the Software against a refund of the license fees paid by you prior to termination. You acknowledge and agree that Dantec Dynamics is entitled to exercise any of the aforesaid options in Dantec Dynamics' sole discretion and that this constitutes your sole and exclusive remedy in the event of any infringement of third party intellectual property rights.

Limitation of Liability
Neither Dantec Dynamics nor its distributors shall be liable for any indirect damages including without limitation loss of profits and loss of data or restoration hereof, or any other incidental, special or other consequential damages, even if Dantec Dynamics has been informed of their possibility. Further, Dantec Dynamics disclaims and excludes any and all liability based on Dantec Dynamics' simple or gross negligent acts or omissions. In addition to any other limitations and exclusions of liability, Dantec Dynamics' total aggregate liability to pay any damages and costs to you shall in all events be limited to a total aggregated amount equal to the license fee paid by you for the Software.

Product Liability
Dantec Dynamics shall be liable for injury to persons or damage to tangible items caused by the Software
in accordance with those rules of the Danish Product Liability Act, which cannot be contractually waived.
Dantec Dynamics disclaims and excludes any liability in excess thereof.

Assignment
Neither party shall be entitled to assign this License Agreement or any of its rights or obligations pursuant
to this Agreement to any third party without the prior written consent of the other party. Notwithstanding the
aforesaid, Dantec Dynamics shall be entitled to assign this License Agreement in whole or in part without
your consent to (i) a company affiliated with Dantec Dynamics or (ii) an unaffiliated third party to the extent
that such assignment takes place in connection with a restructuring, divestiture, merger, acquisition or the
like.

Term and Termination
Subject to and conditional upon your compliance with the terms and conditions of this License Agreement, you may install and use the Software as contemplated herein.
We may terminate this License Agreement for breach at any time with immediate effect by serving notice
in writing to you, if you commit any material breach of any terms and conditions set out in this License
Agreement. Without limiting the generality of the aforesaid, any failure to pay fees and amounts due to
Dantec Dynamics and/or any infringement of Dantec Dynamics' intellectual property rights shall be
regarded as a material breach that entitles Dantec Dynamics to terminate this License Agreement for breach.

Governing Law and Proper Forum


This License Agreement shall be governed by and construed in accordance with Danish law. The sole and
proper forum for the settlement of disputes hereunder shall be that of the venue of Dantec Dynamics. Notwithstanding the aforesaid, Dantec Dynamics shall forthwith be entitled to file any action to enforce Dantec Dynamics' rights including intellectual property rights, in any applicable jurisdiction using any
applicable legal remedies.

Questions
Should you have any questions concerning this License Agreement, or should you have any questions
relating to the installation or operation of the Software, please contact the authorized Dantec Dynamics
distributor serving your country. You can find a list of current Dantec Dynamics distributors on our web
site: www.dantecdynamics.com.

Dantec Dynamics A/S


Tonsbakken 16-18 - DK-2740 Skovlunde, Denmark
Tel: +45 4457 8000 - Fax: + 45 4457 8001
www.dantecdynamics.com

5 Laser Safety
All equipment using lasers must be labeled with safety information. Dantec systems use Class III and Class IV lasers whose beams are safety hazards. Please read the laser safety sections of the documentation for the lasers and optics carefully. Furthermore, you must implement appropriate laser safety measures and abide by local laser safety legislation. Use protective eye wear when the laser is running.
Appropriate laser safety measures must be implemented when aligning and using lasers and illumination systems. You are therefore urged to follow the precautions below, which are general safety precautions to be observed by anyone working with illumination systems to be used with a laser. Again, before starting, it is recommended that you read the laser safety notices in all documents provided by the laser manufacturer and illumination system supplier and follow these as well as your local safety procedures.


5.1 Danger of Laser Light


Laser light is a safety hazard and may harm eyes and skin. Do not aim a laser beam at anybody. Do not
stare into a laser beam. Do not wear watches, jewellery or other shiny objects during alignment and use of
the laser. Avoid reflections into eyes. Avoid accidental exposure to specular beam reflections. Avoid all
reflections when using a high-power laser. Screen laser beams and reflections whenever possible. Follow
your local safety regulations. You must wear appropriate laser safety goggles during laser alignment and
operation.
During alignment, run the laser at low power whenever possible.
Since this document is concerned with the DynamicStudio software, there are no direct instructions in this document regarding laser use. Therefore, before connecting any laser to the system, you must consult the laser safety section of the laser instruction manual.

5.2 Precautions
As general precautions to avoid eye damage, carefully shield any reflections so that they do not exit from
the measurement area.

5.3 Laser Safety Poster

Displays the essential precautions for ensuring a safe laboratory environment when using lasers in your
experiments. (Wall poster 70 x 100 cm). Available from http://www.dantecdynamics.com, or through Dantec Dynamics' sales offices and representatives.

6 Requirements

6.1 Minimum requirements


• PC with a modern multi-core Intel processor.
• Microsoft® Windows© 7™ x64 with latest service packs.
• Microsoft® Windows© Installer v3.5 or later.
• Microsoft® Internet Explorer© 6 or later with latest security updates.
• Microsoft® .NET 4.5
• 4 GB of RAM.
• SXGA (1280x1024) or higher-resolution monitor with millions of colors.
• Mouse or compatible pointing device.

6.2 Recommended configuration


• 4+ GB of RAM
• 3D accelerated graphics card (at least 256 MB onboard RAM)
• 500 GB RAID0 configured hard-disk space.
• GigaBit Base-T Ethernet or faster adapter with RJ-45 connector.

Microsoft® Windows© 95, 98, 98SE, Me, NT, 2000, XP and Vista operating systems are not supported.

Notice: Installation must always be done as an administrator; if necessary, contact your network administrator.

Notice: Some basic Windows system files can be missing during the installation on a clean Windows PC.
Therefore we always recommend having all the latest Windows service packs and updates installed along
with the latest Internet Explorer. Utility files and useful links can also be found on the DVD in the “…\software\util\” folder.

Warning: All USB dongles must be removed during the installation!

During the installation, additional third-party products are installed:


Rainbow Sentinel System Driver: This driver can be updated or removed using the add/remove pane in the
Windows Control Panel. For more information or driver updates please visit www.rainbow.com.

DynamicStudio and remote agents require Microsoft .NET Framework Version 4.5 or later. The Microsoft
.NET Framework Version 4.5 can be found on the DVD in the “…\software\util\” folder.

If your Imaging system includes timer boards, frame grabbers or other computer boards for insertion in the
PC, the appropriate drivers must be installed before installing DynamicStudio. Please follow the instructions provided in the Unpacking and Installation Guide on the DynamicStudio DVD, or in the download section at http://www.dantecdynamics.com/download-login.

7 Note to Administrators
7.1 Restricted/Limited User
Installation: All installation scripts from Dantec Dynamics A/S contain a digital signature (code signing certificate) issued by the CA VeriSign, Inc. (http://www.verisign.com), ensuring correct identity.
Installation requires administrative privileges, but the software can also be run under restricted/limited (non-admin) user accounts. When installing under a restricted/limited user account the user is requested to
run (Run As...) the installation as an administrative user.
Remote Agent must be installed under administrator privileges. The automatic update of the remote agent
requires access to Windows system and drivers folders.
Installation under Windows Vista requires disabling of User Account Control (UAC). Go to the Windows Control Panel and search for UAC to turn it off before installing (requires reboot). It is however highly recommended to turn it on again after the installation.
To accomplish restricted/limited user account support, the following permissions are altered during installation:

l All users (“Everyone”) are permitted full file access to the “All Users\Shared Documents\Dantec
Dynamics” folder including all sub folders.
l All XML configuration and library files are installed in the “All Users\Shared Documents\Dantec
Dynamics\Dynamic Studio” folder, allowing shared access for all users.
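
Should these permissions ever need to be restored manually (for example after the folder has been recreated), a command along the following lines can be used from an elevated Command Prompt. This is only a sketch: the path assumes a standard Windows 7 installation where the shared documents folder resolves to C:\Users\Public\Documents; adjust it to match your system.

rem Grant all users full access to the shared Dantec Dynamics folder and everything below it
icacls "C:\Users\Public\Documents\Dantec Dynamics" /grant Everyone:(OI)(CI)F /T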

7.2 Windows Firewall


The following ports are opened during installation in the Windows Firewall.

l 5013 Dantec Dynamics - Acquisition Agent


l 5008 Dantec Dynamics - Agent Configuration Multicast
l 5012 Dantec Dynamics - Agent Host
l 5011 Dantec Dynamics - Agent Master
l 5007 Dantec Dynamics - Multicast Agent Hosts
l 5014 Dantec Dynamics - Traverse Agent
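
If one of these firewall rules is missing, for example after a firewall reset, it can be re-created manually from an elevated Command Prompt. The sketch below re-creates the Acquisition Agent rule; the rule name and port follow the list above, but the protocol is an assumption and should be checked against an existing installation (the multicast entries in particular may use UDP).

rem Re-create an inbound rule for the Acquisition Agent on port 5013 (protocol is an assumption)
netsh advfirewall firewall add rule name="Dantec Dynamics - Acquisition Agent" dir=in action=allow protocol=TCP localport=5013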

7.3 BCDEDIT.exe (Microsoft® Windows© 7)


The following boot parameter is added to the Windows boot configuration information, specifying physical memory that Windows can't use, when the image buffer is changed for frame grabber cameras: "truncatememory".
The parameter can be removed (and thereby the allocated memory freed) by entering the following command in a Command Prompt: "bcdedit.exe /deletevalue truncatememory".
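
For reference, the boot configuration can be inspected and the parameter removed from an elevated Command Prompt as sketched below; the truncatememory element only appears if an image buffer has been reserved on your system, and the change takes effect after a reboot.

rem List the current boot entry and look for a truncatememory element
bcdedit /enum {current}

rem Remove the element again, releasing the reserved memory at the next boot
bcdedit /deletevalue truncatememory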

8 Software Update Agreement (SUA)
We encourage you to sign up for a Software Update Agreement (SUA)

Please consider registering your product and signing up for a Software Update Agreement with us, giving you access to free software updates, manuals and tools. For more information see: Software Registration.

9 Add-Ons and Options for DynamicStudio
80S57 Main Package: DynamicStudio.
80S38 Particle Characterization: Interferometric Particle Imaging (IPI) add-on for detecting spherical droplets or bubbles in an image. This package includes the following modules: IPI processing, diameter histogram, spatial histogram, together with mean and RMS statistics.
80S43 FlexPIV: FlexPIV is a unique add-on to DynamicStudio. It allows you to define exactly what part of an image map you want to use for vector calculation, and how you want to do it. It introduces advanced grid generation (FlexGRID) and variable processing schemes.
80S45 Stereoscopic PIV (requires 2D PIV): Allows computation of the third (out-of-plane) velocity component by using data from two cameras.
80S46 Optical Flow and Motion Tracking: Particle tracking add-on for detecting particle trajectories in double images.
80S48 Shadow Sizing: Shadow sizing analysis add-on for detecting arbitrary particle types in captured images. This package includes the following modules: shadow processing, shadow histogram, shadow spatial histogram/statistics, and shadow validation.
80S55 Combustion LIF: Laser Induced Fluorescence (LIF) add-on for combustion processes. This package includes the following modules: analog time-stamp calibration and rescale, LIF species processing, resampling (grid, vector map), distribution processing, scalar map conversion, curve fit analysis, together with mean and RMS statistics.
80S89 Rayleigh Thermometry: Temperature measurements in flames using Rayleigh scattering.
80S58 2D PIV: Includes standard 2D PIV auto- and cross-correlation calculations, together with advanced adaptive and average correlations. Validation routines for PIV covering peak, range, filter and moving-average validations. The add-on also includes basic vector displays and statistics as well as vector subtractions.
80S59 LII: Laser Induced Incandescence (LII) add-on for soot emissions analysis. This package includes the following modules: analog time-stamp calibration and rescale, LII calibration, LII gas-composition calibration, LII processing, resampling (grid, vector map), together with mean and RMS statistics.
80S69 Data Visualization: Data Visualization in DynamicStudio provides users with a powerful graphical environment from which results can be viewed, manipulated and published without resorting to exporting and/or copying data into other applications.
80S76 Traverse Option: Enables DynamicStudio to control a traverse system.
80S79 Tecplot® Data Loader: An add-on data loader for Tecplot to simplify loading of DynamicStudio data.
80S83 Volumetric Velocimetry: Volumetric Velocimetry add-on for DynamicStudio contains calibration routines, Volumetric Particle Tracking, Tomographic Particle Tracking and Least Squares Matching.
80S84 Distributed Analysis: Distributed Analysis uses redundant network computer capacity to speed up massive image data analysis.
80S85 Liquid/Gas LIF: Laser Induced Fluorescence (LIF) add-on for measuring instant whole-field concentration, temperature or pH maps in both liquid and gas flows. The LIF package contains the following modules: LIF tracer calibration, LIF processing (concentration, temperature, pH), resampling (grid, vector map), together with mean, RMS, instant Reynolds flux and vector statistics.
80S87 Spray Geometry module: Spray Geometry determines the spatial geometry of a spray nozzle with one or more nozzle exits. The analysis can characterize the geometry of a spray seen from the side as a plume or seen from above/below as a pattern. The temporal development of the spray can be evaluated at different time delays, or time averaged.
80S29 Volumetric Particle Tracking: Determination of 3D particle histories from time-resolved images acquired from two cameras.
80S99 Light-field Velocimetry: ...
80S13 Oscillating Pattern Decomposition: Oscillating Pattern Decomposition identifies spatial structures (modes) with corresponding frequencies and growth/decay rates.
80S56 LIF-Mie Sizing: ...

10 Legacy Options
The following Add-Ons and Options have been discontinued, merged with other options or made part of
the main package. For technical reasons they may however still appear in DynamicStudio's 'About'-box
as enabled in the dongle:

80S33 Advanced PIV (requires 2D PIV Add-On): The Advanced PIV add-on includes the possibility for high accuracy processing and definition of deforming windows. Merged with 80S58, 2D PIV.
80S39 Image Processing Library: Basic image manipulations and transforms including: rotation, shifting, scaling, high and low pass filtering and more. Also including more advanced routines like morphing and Fourier transforms. Included in 80S57, DynamicStudio main package.
80S74 Proper Orthogonal Decomposition: Proper Orthogonal Decomposition (POD): Using the method of snapshots on a time-series of scalar or vector maps. Included in 80S57, DynamicStudio main package.
80S75 Advanced Graphics: The Advanced Graphics add-on opens up for advanced vector data processing and displays. The add-on includes vector map resampling, rotation and mirroring. It also includes streamline, LIC, scalar map, spectrum and vorticity displays. (Formerly known as Vector Processing Library.) Included in 80S57, DynamicStudio main package.
80S77 MATLAB® Link: Transfer of setup information and data from DynamicStudio to MATLAB's workspace and automatic execution of a user-supplied script. Included in 80S57, DynamicStudio main package.
80S80 1 Camera Plug-in, 80S81 2 Camera Plug-in, or 80S82 3+ Camera Plug-in: The simple and flexible device handling in DynamicStudio opens up for an infinite number of camera combinations. Cameras are auto-detected and enabled locally or remotely in the system. Different add-ons open up for standard 1 or 2 camera combinations, and for advanced use 3 or more cameras can be connected anywhere in the system. Included in 80S57, DynamicStudio main package.
80S90 Dynamic Mode Decomposition: Dynamic Mode Decomposition extracts modal structures based on temporal or spatial linear evolution dynamics. Discontinued.

11 Getting Started
The DynamicStudio Image Acquisition System comprises the DynamicStudio software including on-line image acquisition. The acquisition system can be configured with the desired parameter settings, and data can be acquired, previewed and saved to disk as the operator needs.
A comprehensive number of data processing methods are available for analyzing the data after the acquisition has been made and the data are saved to disk.
Normally the acquisition hardware used to control the camera and illumination system (typically a laser), hardware synchronization and data acquisition/transfer are installed in the same PC as the DynamicStudio software.
The system supports a multitude of different cameras and can be configured for one or more cameras (typically using multiple cameras of the same type).
A synchronization unit is used to generate trigger pulses for all cameras, light sources and/or shutter devices.
For more information about supported devices please see "Cameras" on page 115, "Synchronizers" on page 168, and "Illumination Systems" on page 204.

11.1 Warranties and Disclaimers


11.1.1 Camera Sensor Warranty
Direct or reflected radiation from Argon or Nd:YAG lasers can damage the sensor of the camera. This may
happen with or without power to the camera and with or without the lens mounted. Therefore when setting
up and aligning for measurements, take extreme care to prevent this from happening.
Laser damage may turn up as white pixels in the vertical direction, or as isolated white pixels anywhere in
the image. This shows up clearly when acquiring an image with the lens cap on.
The manufacturer has classified all sensor defects into classes. Often the character and location of all defects are on record. Additional defects arising from laser-induced damage may void the sensor warranty.

Precautions

1. Cap the lens whenever the camera is not in use.


2. Cap the lens during set-up and alignment of the light sheet. Before removing the cap, make sure
that reflections off the surface of any objects inside the light sheet do not hit the lens by observing
where reflections go.
3. As general precautions to avoid eye damage, carefully shield any reflections so that they do not
exit from the measurement area. You must wear appropriate laser safety goggles during laser
alignment and operation.

11.2 Installation
Please follow the instruction found in "Unpacking and installation guide" on the installation DVD.

11.3 Using DynamicStudio

11.3.1 User Interface


This help contains information about "Layout, tool bar and file menu" on page 33 and provides a step-by-step getting started procedure for "Normal Use" on page 34 (data analysis) and for "Acquisition Mode" on page 35 (acquiring images).

Layout, tool bar and file menu
Layout and windows
The layout can be customized differently for acquisition mode and normal mode. Different windows can be
displayed from the View menu:

l Devices: to add devices and to see which ones (cameras, frame grabbers, agents, ...) are in use. For more information see "Automatic device detection" on page 36 or "Adding devices manually" on page 37.
l Device Properties: to read and change the properties of the devices displayed in the device window.
l System Control: to acquire data and to set acquisition parameters. For more information see "Acquiring Images" on page 38.
l Acquired Data: to display the database tree.
l Synchronization Cables: to tell DynamicStudio how devices are connected. For more information see "Device connection" on page 37.
l Log: to display error and warning messages. It is recommended to keep this window open.
l Database: display and handle the database.
l Record Properties: view and edit properties of individual database records.
l Agents: state of acquisition agents.

Tool bar
Below the main menu there is a series of icon buttons forming a toolbar. These buttons provide shortcuts
to a number of the functions in the system as described below.
Delete record or set-up.

Delete all records under a set up. Make sure you want to push this button before using it.

DynamicStudio can be used in two modes: the normal mode for data analysis and the acquisition mode. Swapping from one mode to the other is done by clicking on the green diamond in the tool bar.
Show Recipe.

Analysis.

Perform analysis again.

Open as numeric. When using this feature on a data set the underlying data are displayed in a spreadsheet format. The columns may be formatted and copied onto the Windows clipboard, from where they may be pasted into other programs like Microsoft Excel.
Open current record.

Open database: Pressing this button will close the current database, and prompt you to
select a new one from a list of existing databases. You may of course re-open the original database by selecting it from the list.
Opens a data set as XY-Plot.

Unselects all records in the database.

Open the timing diagram

Menu
The menu represents the main menu of the DynamicStudio software and includes the following items:
File: selecting database, new/open/save/import and general system components.
Edit: rename/delete and select/unselect and clear log.
View: selecting device, record/device properties, system control, acquired data, synchronization cables,
agents, log windows and UI (user-interface) level. For more information see "Normal Use" on page 34 and "Acquisition Mode" on page 35. From this menu, the layout can be saved, loaded or reset to
default. The layout can be customized differently for acquisition mode and normal mode.
Run: leaving or entering the acquisition mode, resetting the acquisition system and controlling the traverse
system.
Analysis: performing a calibration or data analysis.
Tools: launching the configuration wizard (see "First Time Use" on page 40), the FlowManager database converter (see "FlowManager Database Converter" on page 629) and the light source wizard (see "First Time Use" on page 40).
Window: standard control of the appearance of windows and jumping to another window.
Help: online help contents, version number of the software, and access to the dongle key code.

It is possible to add your own entries to the Tools menu, e.g. to launch other software from within DynamicStudio. Perform the following steps to add an entry:

1. Right click onto the program that you want to create a shortcut for and select "Create Shortcut".
2. Let the name of the shortcut start with "Tool.". Example: Tool.BEI Device Interface
3. Move the shortcut to the DynamicStudio installation path
(typically C:\Program Files (x86)\Dantec Dynamics\DynamicStudio).
4. Start or restart DynamicStudio, and the shortcut will appear in the list of tools for DynamicStudio.

All entries that you add to the Tools menu will still be present after installation of a new DynamicStudio version. In general, all files added manually to the DynamicStudio folder will not be deleted by installing or re-installing DynamicStudio.

Normal Use
When you start DynamicStudio you will see the default database view:

You can change the screen layout, but this is the default and will be used throughout this manual.
The top of the screen features a normal menu and toolbar. Everything in the right-hand side is related to the acquisition of images, while the left-hand side is intended for managing both images that have already been acquired and other data derived from them. The gray area in the middle is the working area where you can display and examine acquired images as well as data derived from them.

Creating a Database
From the File menu you can open existing DynamicStudio databases or create a new one to hold the
images that you acquire. To create a new one click File/New Database... and specify a name and location for the new database. DynamicStudio databases have the file extension *.dynamix and contain information about the relation between the different datasets as well as about how raw images were acquired. The actual data is not in the database itself, but in a folder with the same name, where sub folders are used to organize the data.
Note If at some point you wish to move or back up a database, please remember to also take the folder and all its sub folders, since otherwise you will not get the actual data.
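
As an illustration only, assuming a database named MyExperiment.dynamix stored in C:\Data (the names and paths are hypothetical), a complete backup to another drive could be made from a Command Prompt as follows:

rem Copy the database file itself
copy "C:\Data\MyExperiment.dynamix" "E:\Backup\"

rem Copy the data folder of the same name with all its sub folders (/E includes empty sub folders)
robocopy "C:\Data\MyExperiment" "E:\Backup\MyExperiment" /E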

Acquisition Mode

To enter Acquisition Mode press Run/Enter Acquisition Mode or click the little green button on the toolbar.

When you enter acquisition mode DynamicStudio will search for Acquisition Agents on the local PC
and/or other PCs on the local network. The agents found will be prompted for information about various
hardware devices connected to the PC in question.

Automatic device detection


Cameras and synchronization devices are auto-detected by DynamicStudio and appear automatically in
the 'Devices' list in the middle right-hand part of the screen. The auto-detection will also detect if connection is lost to either of them and indicate this by changing the text color to red.

The auto-detection may take a few seconds, but you need not do anything to tell the system e.g. what
kind of camera(s) are connected.

Adding devices manually
Only 'intelligent' devices can be auto-detected, while e.g. lasers or other light sources typically cannot.
These have to be added manually, by right clicking in the device list below the last device shown. In the
context menu select Add New Device... and select your laser from the list:

Having selected a laser, the device list should look like this:

Device connection
When all devices in your system are listed it is time to tell DynamicStudio how they are connected. Some
synchronization units are very flexible with respect to where e.g. the camera trigger cable should be connected, while other synchronization units allow only one specific way of connecting the other devices.
From the View menu select Synchronization Cables to display a diagram of connected devices.

At first you should just see the various devices without any connections between them. If you right click
e.g. the arrow at camera trig you can Restore Default Connections..., which will suggest a possible connection to the synchronization unit.
Provided of course devices really are connected as shown in the diagram, you are now ready to start
acquiring images.

Acquiring Images
In the default screen layout of DynamicStudio the main system control is in the upper right corner of the
screen. If Acquired Data is shown click the tab labeled System Control.

You can specify how many images you wish to acquire (limited by the size of the image buffer).
You can also specify the time between pulses to use if double-frames (or double-exposed single frames)
are to be acquired.
You can specify the desired trigger rate, limited by the capabilities of the camera.
Finally you specify whether to acquire single- or double-frame images.
There are three different ways of acquiring images with DynamicStudio, corresponding to the three top-
most buttons in the right hand side of the system control window:

Free Run
In free run the camera is running freely, i.e. not synchronized with other hardware devices. This means
e.g. that in this mode the laser (or other light source) is not flashing, and the frame rate of the camera does
not necessarily correspond to the nominal value from the system control. If ambient light is strong enough
compared to exposure time you should be able to see images and perhaps focus the camera.
The camera will continue acquiring data until you press Stop, continually overwriting the oldest images in
the buffer. If you set the system up to acquire say 20 images, the last 20 acquired will remain in the buffer
after you press Stop so you can browse through them and store them if you wish.

Preview
In preview mode all devices are synchronized, the laser is flashing and the camera is triggered to acquire
images at the rate specified in the System Control panel. It will not stop acquiring images when the
requested number of images has been acquired, but simply overwrite the oldest image with the most
recent one. It will continue acquiring images until you press Stop and again the last images acquired
remain in the buffer for evaluation and possibly storage.

Acquire
Acquire does exactly the same as Preview with the one exception that it stops when the requested
number of images have been acquired. You can of course stop it earlier by pressing the Stop button, in
which case the allocated buffer will not be full, but hold the images acquired until you stopped the acqui-
sition.
Note: Common to all run modes is that acquired data can be saved to the database.

Storing and Evaluating Acquired Images


Whether acquisition stops automatically or because you click Stop, the last images acquired will remain in
the buffer for evaluation and storage.
To evaluate and/or store the images in the buffer click the tab labeled Acquired Data:

Evaluating images
When acquisition stops the very last image acquired will typically be shown on the screen and with con-
trols in the Acquired Data window you can browse through the images in the buffer while the display win-
dow updates accordingly. You can even play back as a movie specifying the time between each new
frame in the dialog Time interval.

You need not play back all images, but can start part way into the buffer and stop before reaching the end.
To do this, click and drag the pink dots at either end of the green line in the window above. The blue arrow
indicates which image is presently shown on the screen and can also be moved around by clicking and
dragging with the mouse.

Storing images
While in the buffer, images are still only stored temporarily in RAM. In the window with Acquired Data you
can choose to store the images in the database, meaning also that they are transferred to the hard disk for
permanent storage.
As with the playback described above you need not store all of the images, but may opt to store just some
of them using the same controls as used for the movie playback.
There are two buttons for storing images, Save in Database and Save for Calibration.
The first one is for normal data, while the second, as the name implies, is intended for calibration images
of various kinds. If for example you've taken a picture of a ruler to determine scaling, you would store the
image for calibration.
Specifically identifying calibration images to the system is an advantage for later processing, since such
images are inherently different from other images and are generally processed and used very differently
from 'normal' images.
When images are saved in the database they will appear in the database view in the left hand side of the
screen, from where you can access them the same as you would access old data from an existing data-
base.

11.3.2 First Time Use

Content
This help contains information about:
Agents
Traverse Installation
Light Source Wizard (to add, remove or edit a light source)

Agents
DynamicStudio relies on so-called 'Agents' to handle external hardware such as frame grabbers,
cameras, timer boards and so on. An 'Agent' is a program, which may run on the same PC as Dynam-
icStudio itself or on another PC communicating with DynamicStudio via a normal network. Dynam-
icStudio can handle multiple agents running on several PC's, but the most common setup handles
everything from a single PC.
A configuration wizard is supplied with DynamicStudio to help you set up the agent(s); either click the
shortcut which the installation created on the desktop or click Start/All Programs/Dantec Dynam-
ics/DynamicStudio.
When DynamicStudio is running click Tools/Configuration Wizard and go through the wizard.

Traverse Installation
Click on Tools and select the Configuration Wizard. Select Networked system/traversing system.

Check the traverse agent to be installed:

The traverse agent is successfully installed

Light Source Wizard
To add, remove or configure a new light source such as a laser, flash lamp or stroboscope, select the
Light Source Wizard from the Tools menu. If you use a standard light source you may skip this part.

Follow the instructions from the wizard.

Press Next and check the operation you want to perform.

Adding a light source


After giving a name to the light source, it has to be configured. Select the number of cavities and the light
source type from:

Depending on the selected light source, you need to enter different properties.

The following window appears when the light source is created.


Your new light source is now available from the device list (right click on the agent icon and select Add New
Device):

11.3.3 Image Buffer resources


Unless your camera has onboard RAM for temporary image storage you will need to allocate RAM in the
computer to use as a buffer when acquiring images. DynamicStudio will warn you as shown below if this
is the case:

You will now have to choose which type of memory you will use during acquisition:

1. Right click in the Devices view and set a check mark at "Advanced view"
2. Select "Image buffer resources" in the Device tree view.
3. Select the Memory mode to use.
4. In the Device Properties view set the property "Buffer size" to e.g. 1024 MBytes
5. If you have chosen "Reserved memory" you will have to reboot the PC, otherwise the system is
ready for acquisition.

See "Image buffer recourses" on page 131

11.3.4 Normal Use


When you start DynamicStudio you will see the default database view:

You can change the screen layout, but this is the default and will be used throughout this manual.
The top of the screen features a normal menu and toolbar. Everything in the right hand side is related to the
acquisition of images, while the left hand side is intended for managing both images that have already been
acquired and other data derived from them. The gray area in the middle is the working area where you can
display and examine acquired images as well as data derived from them.

Creating a Database
From the File menu you can open existing DynamicStudio databases or create a new one to hold the
images that you acquire. To create a new one click File/New Database... and specify name and location
for a new database. DynamicStudio databases have the file extension *.dynamix and contain information about
the relation between the different datasets as well as about how raw images were acquired. The actual
data are not in the database itself, but in a folder with the same name, where sub folders are used to organ-
ize the data.

Note: If at some point you wish to move or back up a database, please remember to take the folder and all
its sub folders as well, since otherwise you will not get the actual data.

Acquisition Mode

To enter Acquisition Mode press Run/Enter Acquisition Mode or click the little green button on the tool-
bar.

When you enter acquisition mode DynamicStudio will search for Acquisition Agents on the local PC
and/or other PC's on the local network. The agents found will be prompted for information about various
hardware devices connected to the PC in question.

Automatic device detection


Cameras and synchronization devices are auto-detected by DynamicStudio and appear automatically in
the 'Devices' list in the middle right-hand part of the screen: The auto-detection will also detect if con-
nection is lost to either of them and indicate this by changing the text color to red.

The auto-detection may take a few seconds, but you need not do anything to tell the system e.g. what
kind of camera(s) are connected.

Adding devices manually


Only 'intelligent' devices can be auto-detected, while e.g. lasers or other light sources typically cannot.
These have to be added manually, by right clicking in the device list below the last device shown. In the
context menu select Add New Device... and select your laser from the list:

Having selected a laser, the device list should look like this:

Device connection
When all devices in your system are listed it is time to tell DynamicStudio how they are connected. Some
synchronization units are very flexible with respect to where e.g. the camera trigger cable should be con-
nected, while other synchronization units allow only one specific way of connecting the other devices.
From the View menu select Synchronization Cables to display a diagram of connected devices.

At first you should just see the various devices without any connections between them. If you right click
e.g. the arrow at camera trig you can Restore Default Connections..., which will suggest a possible con-
nection to the synchronization unit.
Provided of course devices really are connected as shown in the diagram, you are now ready to start
acquiring images.

Acquiring Images
In the default screen layout of DynamicStudio the main system control is in the upper right corner of the
screen. If Acquired Data is shown click the tab labeled System Control.

The system control is used to define some of the acquisition parameters:

l Time between pulses: the time between the two laser pulses, i.e. the time difference between the
two particle images
l Trigger rate: the sampling frequency of the PIV setup
l Number of images: specifies how many images are required in the acquisition
l Single or Double frame PIV mode: in order to reach the minimum Time between pulses, the
camera has to run in double frame mode.

Once those parameters are changed, they must be validated in order to be used by DynamicStudio. This
can be done by pressing either TAB or Enter.
If the validation is not done and the user tries to click on one of the acquisition buttons, the software will not
start the acquisition. A second click on the button will be necessary to validate the parameters and start
the process.
There are three different ways of acquiring images with DynamicStudio, corresponding to the three top-
most buttons in the right hand side of the system control window:

Free Run
In free run the camera is running freely, i.e. not synchronized with other hardware devices. This means
e.g. that in this mode the laser (or other light source) is not flashing, and the frame rate of the camera does
not necessarily correspond to the nominal value from the system control. If ambient light is strong enough
compared to exposure time you may be able to see images and perhaps focus the camera.
The camera will continue acquiring data until you press Stop, continually overwriting the oldest images in
the buffer. If you set the system up to acquire say 20 images, the last 20 acquired will remain in the buffer
after you press Stop so you can browse through them and store them if you wish.

Preview
In preview mode all devices are synchronized, the laser is flashing and the camera is triggered to acquire
images at the rate specified in the System Control panel. It will not stop acquiring images when the
requested number of images have been acquired, but simply overwrite the oldest image(s) with the most
recent one(s). It will continue acquiring images until you press Stop and the latest images acquired remain
in the buffer for evaluation and possibly storage.

Note: For some cameras - especially High-Speed cameras - preview mode will result in bursts of meas-
urements. While each burst will have the correct frame rate, the overall data rate may not be the same as
when using the "acquire" mode.

Acquire
Acquire does exactly the same as Preview with the one exception that it stops when the requested
number of images have been acquired. You can of course stop it earlier by pressing the Stop button, in

which case the allocated buffer will not be full, but hold the images acquired until you stopped the acqui-
sition.

Note: In all run modes acquired data can be saved to the database.

Storing and Evaluating Acquired Images


Whether acquisition stops automatically or because you click Stop, the last images acquired will remain in
the buffer for evaluation and storage.
To evaluate and/or store the images in the buffer click the tab labeled Acquired Data:

Evaluating images
When acquisition stops the very last image acquired will typically be shown on the screen and with con-
trols in the Acquired Data window you can browse through the images in the buffer while the display win-
dow updates accordingly. You can even play back as a movie specifying the time between each new
frame in the dialog Time interval.
You need not play back all images, but can start part way into the buffer and stop before reaching the end.
To do this, click and drag the pink dots at either end of the green line in the window above. The blue arrow
indicates which image is presently shown on the screen and can also be moved around by clicking and
dragging with the mouse.

Storing images
While in the buffer, images are still only stored temporarily in RAM. In the window with Acquired Data you
can choose to store the images in the database, meaning also that they are transferred to the hard disk for
permanent storage.
As with the playback described above you need not store all of the images, but may opt to store just some
of them using the same controls as used for the movie playback.
There are two buttons for storing images, Save in Database and Save for Calibration.
The first one is for normal data, while the second, as the name implies, is intended for calibration images
of various kinds. If for example you've taken a picture of a ruler to determine scaling, you would store the
image for calibration.
Specifically identifying calibration images to the system is an advantage for later processing, since such
images are inherently different from other images and are generally processed and used very differently
from 'normal' images.

When images are saved in the database they will appear in the database view in the left hand side of the
screen, from where you can access them the same as you would access old data from an existing data-
base.

11.3.5 Database Access


This help provides information about
Database Structure
Copying and Moving Ensembles

Database structure
DynamicStudio stores acquired data in a nested tree structure. At the root of the tree is the database,
branching out into one or more project folders. Each project folder can hold both calibration images and nor-
mal images stored in so-called 'Runs'. Every time you transfer data from the image buffer to the database
by saving, a new Run is added to the latest project and time stamped for later reference. All project, run
and ensemble labels can be later changed. You can create new projects e.g. to separate different exper-
iment configurations.

By right clicking on the different levels of the database, you can access different operations and properties.

Ensembles
A core feature of DynamicStudio is the concept of ensembles, which represent a whole series of data as a
single icon in the database. Typically an ensemble corresponds to a Run and can hold anywhere from a
single to thousands of images or derived datasets. This greatly helps you navigate among the very large
amounts of data that can easily be acquired with modern imaging systems.
The ensemble is thus the main data type in a DynamicStudio database and in connection with data anal-
ysis we default to process entire ensembles producing entire ensembles of derived data. If you wish to
access and/or process individual datasets you can of course do so.
In the figure in the previous section the topmost ensemble has been renamed to 'Gain 0' and the text right
next to the name indicates that this ensemble contains 50 datasets. From the icon we can see that the
ensemble contains single frames and if we just double-click on the icon the 1st image in the ensemble will
be shown on the screen. If we wish to have a closer look at individual images in an ensemble, we can right
click it and select Contents from the resulting context menu:

Browsing inside the contents list the display window will update accordingly to show individual images in
the ensemble.

Project
The database can be divided into projects. For example a project can correspond to a specific flow
setup configuration.

Calibration
If calibration images have been acquired and stored, a calibration folder will be created including cal-
ibration images. Only one calibration folder per project can be created. From the ensemble under the cal-
ibration, the specific operations are:

l To Measure scale factor


l To perform a calibration
l To pre-process images (Image Processing Library and Image arithmetic)

Run
Each time new images are stored, a new run will be created. From the ensemble inside a run, the different
analysis tools are available (mouse right click)

Copying and Moving ensembles and Deleting images


Warning: Moving and copying ensembles may lead to loss of image information and may corrupt the data-
base. Moving and copying ensembles is the user's responsibility. Such operations should only be carried
out when strictly necessary.
By right clicking on an ensemble, you can copy or move it. The ensemble will be copied into the current
folder and named "copy of ensemble name". It is then possible to move it into another run. To move
an ensemble, drag and drop it with the mouse.
The run settings (such as dt between frames, cameras and so on) MUST be the same to move an ensemble
from one run to another.
After expanding an ensemble, it is possible to delete images from it. Right click the image you want to
delete and select "Delete". It is only possible to delete raw images from ensembles which have no child
ensembles (i.e. no results of a data analysis).

11.3.6 Delete and Restore
When enabled, the database feature 'Delete and Restore' will enable a 'Waste Bin', where items deleted
from the database are temporarily stored, providing the option to restore the data to their original location in
the database.
By default 'Delete and Restore' is enabled, but it can be disabled via DynamicStudio Options.
If 'Delete and Restore' is enabled, data deleted from DynamicStudio databases are not immediately
deleted, but moved to a Waste Bin named 'Deleted Items'. If the deletion was a mistake it is possible to
restore the deleted items to their original location in the database.

Deleted Items record


When 'Delete and Restore' is enabled a record named 'Deleted Items' will appear in the database tree just
below the root database icon.

When the feature is first enabled the waste bin will be empty as shown by the icon in the left of the two fig-
ures above. Once data have been deleted (and thus moved to the waste bin) the icon will change to the
one shown on the right, indicating that the waste bin contains items, which can potentially be restored.
The properties of the Deleted Items record will indicate how much data is in the waste bin:

Indicate parents to deleted items


When data is deleted the parent record is marked in the database tree with a small indicator. This tells the
user that deleted child items are in the waste bin, from where they can be restored.

The indicators can be toggled on and off by pressing Ctrl-I, or by right-clicking the waste bin and selecting
"Indicate parents to deleted items" or "Do not Indicate parents to deleted items". By default the indicators
are on.

Restoring deleted items


To restore deleted items right-click the parent and select 'Restore deleted items (recursive)'. This will
restore child datasets and possibly entire database branches descending from the record you right-
clicked. The record from where you start need not itself have an indicator that deleted child data are in the
waste bin. The system will scan through all descendant datasets restoring all deleted items recursively.
This means also that in the extreme case you may right-click the database icon itself, select 'Restore
deleted items (recursive)', and the system will restore all deleted items in the waste bin back to their orig-
inal location in the database.

Emptying the waste bin
To empty Deleted Items right click the Deleted Items record and select 'Empty Deleted items'. All data in
the waste bin will now be permanently deleted.
To limit the amount of data in the waste bin you are also prompted to clean it up when a database is
closed:

This prompt will not appear if 'Delete and Restore' is disabled or if the waste bin is empty.
The first 3 buttons are self-explanatory: you may delete all items, leave all items, or delete old items, keep-
ing only items deleted within the last week.
To prevent this message from appearing again click 'Show alternatives...', which will open the 'Database'
tab of the 'Options' dialog:

With respect to the handling of deleted items you have three possibilities here. The last one is to be
prompted as shown above every time a database is closed, the other two will permanently delete all or
delete old items without prompting the user. If the prompt has been disabled and you wish to enable it
again you can access the Options dialog via the 'Tools' menu in DynamicStudio.

Note: Even if you set up the system to delete items older than 7 days the waste bin may in fact contain
items that are older. This is because the clean-up takes place when the database is closed, not when it is
opened. Imagine that you close a database and decide to keep deleted items that are less than 7 days old.
If two weeks pass before the database is opened again it may then contain deleted items that are 3 weeks
old.

11.3.7 Working with the Database


DynamicStudio databases are based on ensembles. An ensemble is a collection of data, which logically
behaves like one single dataset. If - for example - the user has acquired 100 PIV images from 1 camera,
these images will appear in the database as one ensemble. If two cameras had been used, the images
would be represented by two ensembles - one for each camera.

Working on ensembles
Most data processing is performed directly on the ensembles. Let's say that you want to perform a PIV
analysis of the two above-mentioned ensembles. By selecting "Analyse" from the context menu of an
ensemble and then "PIV Signal" -> "Adaptive Correlation", the PIV analysis is carried out on all 37
image pairs in the ensemble. The result is another ensemble containing 37 vector maps.

Methods that require a long range of successive input are easily handled via the ensembles. For
instance, calculating the average of the above-mentioned 37 vector maps: from the context menu of the
vector map ensemble select "Analysis" -> "Statistics" -> "Vector Statistics" and the average of all 37 vector
maps will be calculated automatically.

Expanding ensembles
It is also possible to work on individual elements of an ensemble. From the context menu of the ensemble
select "Contents"; the ensemble will open and you can see the individual data. Double clicking any data in
this list will open up this particular dataset of the ensemble.

Analyzing a part of an ensemble


Selecting "Analyze.." from the context menu of a dataset will make it possible to perform an analysis on
this dataset only. The result will be a new ensemble containing one dataset only. Before this ensemble is
created you will be asked if you want to carry out the calculation on the remaining datasets in the parent
ensemble first

User Selected Data
Some analysis methods require data from more than one ensemble to be specified. An example is the
masking function. Here it is necessary to select the mask before the masking is carried out. The selection
method is used for this. Data selected by this method is called "User Selected data". When trying to per-
form an analysis which requires a user selection without having performed the selection, the method's
info box will contain a statement in red that the selection is missing.

Another example of how to use the select method is when performing an average correlation. Here it is not
mandatory to have a selected dataset. However, if a vector map has been selected before the analysis is
called, this vector map will be used to offset the interrogation areas.

Pairing ensembles
It is possible to pair multiple datasets from two ensembles. This could for instance be necessary when
adding two sets of image maps. If the analysis method "Image Arithmetic" - "add" is carried out on an
ensemble containing 37 image maps, while another ensemble containing another 37 image maps has
been "selected", the add function will add image 1 from the input ensemble to image 1 from the selected
ensemble, image 2 from the input ensemble to image 2 from the selected ensemble and so on.

Multi selection
It is possible to select multiple ensembles and perform the same action on all the selected ensembles.
There are three ways to select multiple ensembles:

l Hold down the Ctrl key and click on the ensembles that are to be selected
l Select the first ensemble and while holding down the Shift key select the last ensemble of a
series.
l Use the arrow keys to move to the first ensemble, hold down the Shift key and move to the last
ensemble to be selected

To perform the same analysis on the selected ensembles, select the analysis from the context menu of the
selected ensembles. The analysis will then be carried out on all the selected ensembles.
Multi selection also works for other database operations such as delete, open etc.

11.3.8 Calibration Images

Acquiring and saving calibration images


The acquisition of calibration images is no different from any other image acquisition, but when you store
the images you need to tell the system that they are intended for calibration rather than measurements.
This is done by selecting 'Save for Calibration' rather than 'Save in Database' in the 'Acquired Data' win-
dow:

This will open a dialog prompting you to select one or more custom properties that need to be stored
along with the images.
If for example the images are to be used for an Imaging Model Fit, the relevant custom property is 'Z',
describing the position of the calibration target relative to the lightsheet.

Press 'OK' and you're finally requested to specify a value for the custom property. Continuing the example
of Imaging Model Fits, Z=0 is typically in the center of the lightsheet, while Z=-2 in the example below may
represent a position behind the lightsheet as seen from the camera's point of view.

In other contexts such as the calibration of a concentration-LIF experiment, you will of course choose the
custom property 'Concentration' and then type in the known concentration that applies to the calibration
images you are about to save in the database.
Press OK and images will be stored and appear in the database in a group named 'Calibration':

If you right-click an ensemble of calibration images and select 'Contents' you will be able to browse
through the images in the ensemble and for each the custom property you entered will be shown in the

record properties. You can also assign custom properties to the ensemble as a whole and whatever value
you enter here will apply for any member of the ensemble that does not have a specific property value of
its own. Custom properties can also be assigned to ordinary measurements, but are rarely used for any-
thing and thus optional. For all but the simplest calibrations custom properties are mandatory, since they
are a vital part of any calibration.

Tip:
It is possible to move images from another ensemble in the database to the calibration record by simply
dragging the ensemble onto the calibration record. If the project does not contain a calibration record it can
be created by right clicking the project and selecting the menu option "Create Calibration Record". It is
however not possible to move images and ensembles out of the calibration folder and into a normal acqui-
sition "Run".

11.3.9 AgentHost service


After having installed any part of the DynamicStudio installation, AgentHost service will be running. Agen-
tHost is the key component in the distributed environments that are supported by DynamicStudio.

When AgentHost is running it is possible to use the PC as a Database-, Analysis- or Acquisition-agent. If
AgentHost is not running the PC cannot be used by other DynamicStudio installations.

AgentHost service is ready as soon as Windows has started up. You do not need to log in on the PC to
have AgentHost started.

A "DynamicStudio Icon" is shown in the notification area when AgentHost serive is running.

Stopping and Starting AgentHost service


If you want to start/stop or prevent AgentHost service from running when the PC has booted do the fol-
lowing:

1. Login as an administrator
2. Click Start -> Control Panel -> Administrative Tools -> Computer Management -> Services and
Applications -> Services
3. Right click the service named "DynamicStudio Agent Host Service" and select Properties
4. Select the tab named "General"

5. Change "Startup type:" to
"Automatic" to have AgnetHost started when the PC is booted.
"Manual" : to prevent AgentHost service from starting when the PC is booted.
6. At Service Status you can click
"Start" to start AgentHost Service
"Stop" to stop AgentHost Service
7. Click OK to close the dialog.

Stopping and Starting AgentHost service remotely


It is possible to remotely start and stop the AgentHost service; you must be an administrator on the
remote computer in order to do so:

1. Login as an administrator
2. Click Start -> Control Panel -> Administrative Tools-> Computer Management
3. Right click "Computer Management " and select Connect to another computer.
4. When connected click Services and Applications -> Services.
5. Follow steps 3 through 7 in order to control the AgentHost service on the remote computer.

Manually Starting and Stopping AgentHost


When the AgentHost service is not running you can manually start AgentHost:
Click Start->All programs->Dantec Dynamics -> Remote Agent for DynamicStudio
To stop AgentHost right click the Icon in the notification area and select "exit".

11.3.10 Known issues (Vista 64bit)


When using Distributed acquisition it is not possible to modify the amount of memory to use as reserved
memory on the Agent PC's.
When using Distributed acquisition it is not possible to use the already reserved memory for acquisition on
the Agent PC's

Reason:
On Windows Vista 64bit, new restrictions have been added to the OS that make it impossible to access
level 0 device drivers such as DXMapMem.

Two ways exist to solve this issue:

Solution 1:
Starting up Agent host service with a specific login name and password that has the necessary rights to
access Level 0 device drivers such as DXMapMem:
On each agent host do the following:
1. From Control Panel\Administrative Tools select and open Services
2. Find the service named "DynamicStudio Agent Host Service" and double click the service:

3. Go to the tab "Log On":

4. Instead of "Local System account" select "This account:" and enter the login name and password.
5. Go to the tab "General" and click first the button "Stop" and then the button "Start".

Now when the service is started, the login that you entered in step 4 will be used. Since this login has the
necessary rights to access the hardware, the issue is solved.

Solution 2:
Instead of letting AgentHostService start up Agent.AgentHost.exe do this manually.
On each agent host do the following:
1. From Control Panel\Administrative Tools select and open Services
2. Find the service named "DynamicStudio Agent Host Service" and double click the service
3. Set Startup type to "Disabled".
4. Click the button "Stop"
5. Click OK. Now the service is stopped, and will not start up again on the next boot.
6. Click Start->All programs->Dantec Dynamics->Remote Agent for DynamicStudio
Now the Agent.Host Icon will appear in the notification tray, and the Agent PC is now ready for use.
You will have to manually start AgentHost each time the PC has been rebooted.

11.3.11 Database with Demo Data


On the DVD that you received when your imaging system was delivered and installed, you will find a zip-
file that contains a database with data from many different experiments. These data reflect a wide range of
applications and introduce the analysis methods in DynamicStudio that are most widely used to process
the raw data with the goal to extract the physical information.
To fit the database onto the DVD some intermediate data, which is not strictly necessary to understand
the processing and work with the examples, was deleted from the database. This is indicated by icons
that are grayed out,

and attempts to open the data display consequently result in the following error message:

Zipping of the database was necessary due to its size. If you do not already have unzip software
installed on your PC, we recommend using 7-Zip.
The demo database contains several projects organized in folders:

Stereoscopic PIV and calibration refinement
This project shows how to process particle images acquired with two cameras arranged in a stereoscopic
setup to obtain all three velocity components in an illuminated plane.

Interferometric Particle Imaging


This project shows how to process particle images acquired with two cameras (one focused, the other
camera defocused) to obtain both velocity and droplet size simultaneously.

PIV/PLIF measurement
This project shows how to process images acquired with two cameras (one observing particles through a
bandpass filter, the other observing fluorescence through a cut-off filter) to obtain velocity and scalar
(here: concentration) information simultaneously.

FeatureTracking/PIV
This project shows how time-resolved particle images can be processed to obtain both velocity infor-
mation and Lagrangian particle history.

FlexPIV
This project demonstrates how FlexPIV can be used to process PIV images that contain different geom-
etries, flow regimes or multiple fluid phases in an optimized way adapted to the geometry and flow sit-
uation.

Calibration refinement
This project once more shows stereoscopic PIV and calibration refinement, this time using multi camera
calibration instead of individual calibration of the cameras.

Shadow Sizing
This project uses back-lit particles to simultaneously determine size and velocity of irregularly shaped par-
ticles or droplets.

Combustion LIF
This project shows how images of fluorescence (here OH) captured in a flame can be used to study the
physics of combustion, for example by observing the flame front.

POD, FeaturePIV (von Karman vortex street)


This project is a combined example on how to use POD to extract dominant modes from PIV data and
determine Lagrangian particle history.

2D Least Squares Matching
This project compares Adaptive Cross-Correlation processing with 2D Least Squares Matching
(2D LSM). It shows the benefits of LSM in accuracy and direct determination of velocity gradients.

12 Acquisition
This section describes the following operation:

12.1 Acquisition Manager


This control makes it possible to create a grid definition managing a series of acquisitions which can be
performed automatically.
In the database the grid definition is stored in the current project. For every acquisition a new run will be
created under the project to store acquisition settings and images.

For troubleshooting click here.

12.1.1 Opening the Acquisition Manager


To open the Acquisition Manager either click on an acquisition grid saved in the database, or click Run-
>Acquisition Manager.

In order to make the Acquisition Manager work with the Acquisition system, DynamicStudio must be in
Acquisition Mode, and all devices must be ready for Acquisition.

Note:
When entering the Acquisition Manager a dialog will appear with a warning that the Laser warning dialog will
be disabled. This dialog normally appears just before starting an acquisition and pulsing the laser.
When running the steps in the Acquisition Manager no Laser warning will take place!
You cannot enter the Acquisition Manager without agreeing to disable the Laser warning.

When opened, the Acquisition Manager dialog looks like this:

12.1.2 Description of possible settings

X, Y, Z
This is the position of the traverse in millimeters.

Time between pulses


This is the time between pulses of the laser in microseconds.

Trigger rate
This is the rate at which the events are happening.

For single frame mode: the rate of each laser pulse.
For double frame mode: the rate of each double laser pulse.
This is specified in Hz.

Trigger delay
This is used when Trigger mode is set to External in the device properties of the synchronizer. It is the
delay introduced after every trigger signal, in microseconds.

Number of images
This is the number of images to acquire at the specific traverse position.

Analysis sequence
It is possible to have the acquired data analyzed. The way this is performed is that a selected Analysis
sequence is applied to the acquired data. To select an analysis sequence please See "Selecting
Analysis sequence" on page 75

Start delay 
This is the delay introduced when an acquisition is started.
It will disable the cameras during the delay but everything else will run.

Prompt
If checked, a dialog will appear in which you will have to click "OK" in order to continue to the next row.

12.1.3 Buttons in Acquisition Manager dialog

Save
Clicking Save will save the work done in Acquisition Manager to a Grid Definition record in the database in
the currently selected Project.

NOTE: You can open the Grid Definition record in the Acquisition Manager by double clicking the Grid definition
record in the database.

Run all
Clicking Run All will start the acquisition specified in Acquisition Manager.

Stop
Clicking Stop while Acquisition Manager has started an Acquisition will stop the acquisition.

Postpone Analysis until all acquisition done


If "Postpone Analysis until all acquisition done" is checked, no analysis will take place before all acqui-
sitions specified in Acquisition Manger has been performed. Otherwise analysis will be done right after
one acquisition has been performed.

12.1.4 Working with rows in the Acquisition Manager


Most actions are done via the context menu for a Row (right clicking a row index number). The context
menu looks like this:

Generate Grid
See "Generate Grid" on page 75.

Insert new
When selecting Insert new a new row will be entered.

Delete
Will delete the currently selected row

Select All
Will select all rows

Move to position
Will move the Traverse to the position given by x,y and z.

Run on position
Make an acquisition based on the settings in the selected row.

Run From position


Start acquisition at the selected row, and when done, continue with the remaining rows.

Run naming convention


See "Naming convention" on page 74

12.1.5 Naming convention


Right clicking the grid and selecting naming convention enables the user to select between three different
naming conventions on how each run should be named:

l Creation time (time and date of creation of the run)


l Position (the x, y, z position entered in the grid will be used for naming the run)
l Index (the index will be used for naming the run)

12.1.6 Generate Grid
It is possible to generate rows automatically. This is done using the Grid Generator.
Right click a row in the Acquisition Manager dialog and select "Generate Grid". This will bring up the fol-
lowing dialog:

Start by choosing the right Grid type. Here it is possible to select between 1D, 2D, or 3D systems mean-
ing 1, 2, or 3 axis traverse positions.
In the dialog above, a 1D system has been selected.
Now select the start position, number of steps, and the size of each step (all in mm).
When clicking OK, five new rows will be created.
Settings for the new rows created (apart from the settings for x, y, and z) will be based on existing rows in the
Acquisition Manager.
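
To illustrate how the generated rows relate to these inputs, the following sketch (hypothetical code, not part of DynamicStudio) computes the traverse positions of a 1D grid from a start position, a number of steps and a step size, all in mm:

#include <cstdio>
#include <vector>

// Hypothetical sketch: traverse positions for a 1D grid,
// mirroring the Grid Generator inputs (start, number of steps, step size).
std::vector<double> Generate1DGrid(double startMm, int numberOfSteps, double stepSizeMm)
{
    std::vector<double> positions;
    for (int i = 0; i < numberOfSteps; ++i)
        positions.push_back(startMm + i * stepSizeMm);
    return positions;
}

int main()
{
    // Example: start at 0 mm, 5 steps of 2.5 mm gives rows at 0, 2.5, 5, 7.5 and 10 mm.
    for (double x : Generate1DGrid(0.0, 5, 2.5))
        std::printf("x = %.2f mm\n", x);
    return 0;
}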

12.1.7 Selecting Analysis sequence


To select an analysis sequence to be performed on the acquired data, just click the drop down button in
the cell. This will make a list of available analysis sequences drop down:

Just select the analysis sequence, or if more information is needed select the "<<Browse>>" entry in the
list. This will bring up the Analysis Sequence library dialog as shown below:

Here you can select the analysis sequence that you want to have performed on the acquired data.
As can be seen in the example above, an analysis sequence might have external dependencies. Here it is
a calibration for each of the cameras, to be used when analyzing the Stereo PIV Processing.

NOTE:External dependencies must be marked as Input for Analysis before opening the Acquisition Man-
ager.

Removing a selected Analysis sequence from a row


To remove a selected analysis sequence from a row in the Acquisition Manager, just select the empty
entry in the drop down list.

Analysis Sequence execution

1. First it will be checked if the necessary "External dependencies" defined in the sequence (if any)
have been marked in the database. If not, no analysis will be performed and the system will stop
with an error.
2. If the Analysis Sequence only holds one parent, each and every acquired image ensemble will be
treated with the specified Analysis sequence.

3. If the Analysis Sequence holds more than one parent, it will be checked whether the necessary parents
have just been saved as part of the acquisition just done. If the check is OK, only one execution of the
sequence will be performed. If the check fails, the analysis will be skipped and the system
stopped with an error.

12.1.8 Troubleshooting

Problem: The manager hangs after the first position.
Action: Make sure that the camera is connected both physically and in the synchronization diagram. The
Acquisition Manager does not run preview or free run, so everything needs to be connected. First try to
make a single acquisition from the System Control window. This will indicate if the system is correctly set up.

Problem: The traverse does not move.
Action: See troubleshooting under Traverse Control.

Problem: Not all steps in the selected analysis sequence are performed.
Action: Check if the selected analysis sequence has external dependencies. If it does, and these are not
selected in the database before entering the Acquisition Manager, analysis steps requiring external input
will not be performed.

12.2 Reporting acquisition settings


From an existing database it is possible to generate a file including all the acquisition parameters and a
hardware description. This HTML file can be used to document the data acquisition.

l From a run icon in the database tree, right click and select "Generate acquisition
setting reports". Type in the file name and storage location.

12.3 Storing and loading acquisition settings


When a new database is created, DynamicStudio loads the settings which are stored in the acquisition
devices (the last used ones).
To use the default hardware configuration, you can reset the acquisition system:

l from the tool bar, select run
l under acquisition system, select "reset acquisition system"

To re-use an old acquisition configuration, you need to copy it from the existing database to the acqui-
sition devices:

l Open the old database which includes the settings you want to re-use.
l From the System Control window, press the store icon.
l The settings are now stored in the acquisition system.

l If you now create a new database, it loads the settings you have just stored from the acquisition
system.
l If you open an existing database, you can load the settings by right clicking the run icon
(select Load acquisition settings).

12.4 Remote Controlled Acquisition
Remote Control is a special option that requires implementing a driver based on the information below.

Using the DynamicStudio Remote Control interface it is possible to

l Set the Traverse position before starting an Acquisition


l Set properties that will be saved together with the data saved during an Acquisition
l Start an Acquisition that will end up with data saved in the database
l Stop an Acquisition
l Perform analysis on the acquired data
l Delete the just acquired data from the database

The need for Remote Control of Acquisition is often a part of an automation process, where more equip-
ment must be synchronized to work together, such as experiments in larger facilities. The remote control
can also be used to repeat a certain measurement routine over and over again, or to ensure control over a
fixed set up between experiments.

There is no driver installed with the installation of DynamicStudio. In order to receive a demonstration
driver (including source code) please contact support@dantecdynamics.com.

12.4.1 How to start Remote Controlled Acquisition


Remote control can only be started when in Acquisition mode by selecting Remote Controlled Acqui-
sition from the Run menu. Before entering remote control it is recommended to run a preview to ensure
proper setup of devices and connections. The dialog will look like this:

The Close button ends the session and closes the Remote connection; the More button toggles between
more and less information. When expanded, a log window is displayed providing status information on
the communication between the "Application.Remote.AKProtokol.dll" and DynamicStudio.

The entry Session Command is a string that is passed to the "Application.Remote.AKProtokol.dll" when a
session is started. The default string is "COM1" but the last string entered will be remembered by Dynam-
icStudio. The string could be e.g. "10.10.100.101:7009" instead of a COM port; it is up to the implementation of
the "Application.Remote.AKProtokol.dll" to handle and interpret the string.
If an error occurs during remote control interaction, then in order to get out of the error state the user has
to either click the Restart button, which starts a new session, or send the AK-Protocol command End or Abort.
This ends or aborts the acquisition and returns to a ready state.

12.4.2 Information on the Remote Control Driver
Since the remote control feature is designed to be integrated into various systems, it consists of a ded-
icated driver. The driver is a DLL that must be installed on the PC running DynamicStudio. You can imple-
ment this DLL yourself and integrate it into the DynamicStudio software as long as the interface described
below is satisfied.

Dll Interface Methods


The interface to the driver DLL consists of three interface calls that must be implemented by the driver:

RCStart
int __declspec(dllexport) __RCStart( HWND hWnd,const char* psInfo)

Starts the remote driver. Here you can connect to or start any sub routines that your driver must run.
This method is called when the Remote Control dialog is opened in the DynamicStudio software, with the last
used information string. This method is also called during a request for a restart of the remote driver,
optionally with a new information string provided.

Parameters
[in] hWnd is a Windows Handle to the Remote Control dialog. You must use this Windows Handle to call
the remote commands. This is also provided for use with the __RCMessage callback routine, see descrip-
tion.
[in] psInfo is a pointer to an information string, passed from the Remote Control dialog in DynamicStudio
to the driver. The string can contain any text for your driver. The information string can be max. 64 char-
acters long.

Return value
If your remote driver starts successfully you should return 0, otherwise you can return an integer error
code that will be displayed in the Remote Controller dialog for easy error handling.

Example
The information string can contain the serial COM port, or other necessary information for your driver to
work properly. The string is saved along with the DynamicStudio settings.

__RCMessage
This is an optional interface for providing a message based callback function for the remote driver. If the
remote driver needs a window for receiving Windows Messages, the handle passed in the __RCStart
method can be used and the message will be passed back to this function. This simplifies the design of
the remote driver.

LRESULT __declspec(dllexport) __RCMessage(WPARAM wp,LPARAM lp)

Parameters
[in] wp is the first data holder.
[in] lp is the second data holder.

Return value
Not used.

Example
If you receive asynchronous messages from e.g. a serial communication driver you can use the hWnd
received in the __RCStart method. Inside your driver you can then call the Windows functions
SendMessage or PostMessage with data:

#define WM_REMOTEDRIVERDATA (WM_APP + 2134)
::SendMessage(hWnd,WM_REMOTEDRIVERDATA,(WPARAM)data1,(LPARAM)data2);

The same data will then be received in the __RCMessage method:

LRESULT __declspec(dllexport) __RCMessage(WPARAM data1,LPARAM data2)


{
:
:

__RCStop
int __declspec(dllexport) __RCStop()

Stops the remote driver. In the method you can disconnect or end running sub routines in your driver.
This function will be called by the Remote Control dialog when a Restart is requested and when the dialog
is closed.

Return value
Not used.
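
A minimal sketch of a driver DLL exporting the three calls is shown below. It is only an illustrative skeleton: the connection logic is left out, and the use of extern "C" to keep the export names unmangled is an assumption made here. Refer to the demonstration driver available from Dantec Dynamics for a complete implementation.

#include <windows.h>

static HWND g_hRemoteDlg = NULL;  // handle to the Remote Control dialog, received in __RCStart

extern "C" int __declspec(dllexport) __RCStart(HWND hWnd, const char* psInfo)
{
    // Keep the dialog handle; it is needed later when sending commands to DynamicStudio.
    g_hRemoteDlg = hWnd;
    // psInfo holds the Session Command string, e.g. "COM1" or "10.10.100.101:7009".
    // Open the connection to the controlling system here (serial port, socket, ...).
    return 0;  // 0 = success; any other value is shown as an error code in the dialog
}

extern "C" LRESULT __declspec(dllexport) __RCMessage(WPARAM wp, LPARAM lp)
{
    // Optional: handle asynchronous data posted to the dialog window
    // via SendMessage/PostMessage as shown in the example above.
    return 0;
}

extern "C" int __declspec(dllexport) __RCStop()
{
    // Disconnect and stop any sub routines started in __RCStart.
    g_hRemoteDlg = NULL;
    return 0;  // return value is not used
}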

12.4.3 Commands
Commands are sent to DynamicStudio via simple Windows messages using WM_COPYDATA. This
message provides a means to copy data from one process to another in Windows. Each command is given
a unique command ID recognized by DynamicStudio, and some of the commands require a command
text line in a special syntax; all can be found in the list below.

The remote driver must call the DynamicStudio with the following command set:

#define RC_USERDATA 1
#define RC_RUN 2
#define RC_ABORT 3
#define RC_STATUS 4
#define RC_ADDANDRUN 5
#define RC_END 6
#define RC_SET_ANALYSIS_SEQUENCE_ID 7
#define RC_DELETE_RUN 8

RC_USERDATA
Adds user data to the current acquisition. User data is displayed in the log window of DynamicStudio and
saved in the database later when saving the acquired data to the database.

User data must be a ';' separated string with the following format:

[Category name1;name2];[property name;property value];[property name;property value]...

Ex 1:

"Temperature;Port1;Temp1;20.5;Temp2;Temp2;19.5"

This will create a category named "Temperature.Port1" in properties for the Run saved later.
In this category the following properties will be seen:
Temp1 20.5
Temp2 19.5

Ex 2:

"WindSpeed;#1;Point1;10.5;Point2;11.3;Point3;10.4;Point4;10.9"

This will create a category named "WindSpeed.#1" in properties for the Run saved later.
In this category the following properties will be seen:
Point1 10.5
Point2 11.3
Point3 10.4
Point4 10.9
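
As an illustration, a driver could pack a command ID and its command string into a WM_COPYDATA message as sketched below. The placement of the command ID in the dwData member is an assumption made for this example; refer to the demonstration driver for the packing actually expected by DynamicStudio.

#include <windows.h>
#include <cstring>

// Hypothetical helper: send one remote command to the Remote Control dialog.
// hRemoteDlg is the HWND received in __RCStart, commandId is e.g. RC_USERDATA and
// commandText is the ';' separated command string (NULL for commands without one).
LRESULT SendRemoteCommand(HWND hRemoteDlg, DWORD commandId, const char* commandText)
{
    COPYDATASTRUCT cds;
    cds.dwData = commandId;                                            // command ID (assumed placement)
    cds.cbData = commandText ? (DWORD)(strlen(commandText) + 1) : 0;   // include terminating zero
    cds.lpData = (PVOID)commandText;
    // For RC_STATUS the return value carries the status (0 ready, 1 busy, 2 error).
    return ::SendMessage(hRemoteDlg, WM_COPYDATA, (WPARAM)NULL /* sender window, if any */, (LPARAM)&cds);
}

// Example: add the user data from Ex 1 above to the current acquisition.
// SendRemoteCommand(g_hRemoteDlg, RC_USERDATA, "Temperature;Port1;Temp1;20.5;Temp2;19.5");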

RC_STATUS
Asks for status from DynamicStudio.

The status is returned by the message call.

0 ready
1 busy (acquiring)
2 error

RC_ADDANDRUN
Moves the Traverse to a new position, sets a new trigger delay on the synchronizer (if used) and starts an
acquisition.

Add and run info must be a ';' separated string with the following format:

ignored;ignored;X;Y;Z;Trigger delay

Ex:

"Ignored;Ignord;0.00;0.00;10.00;100.0"

RC_END
Stops the acquisition.
No command string.

RC_SET_ANALYSIS_SEQUENCE_ID
Specifies the analysis sequence to be performed after data has been saved to the database during acqui-
sition.

Ex: "3s2vj1v0"

NOTE: The right sequence ID can be found in the "Analysis Sequence Library"

RC_DELETE_RUN
Can be called to delete the just acquired Run and all data below.
No command string.

12.5 Online Vectors


Online Vectors is an analysis which is performed during acquisition. The analysis result is displayed as a
vector map.

For purposes of performance the interrogation areas are spaced equally apart according to the number of
areas specified by the user, as shown above.

The online analysis is performed only during Free run and Preview. After an acquisition has taken place the
analysis method will try to calculate a result for each acquired image pair; in this way it is possible to exam-
ine the acquired images and have vector maps displayed before determining which images to save.
12.5.1 Adding Online Vectors to be performed during Acquisition
To add Online Vectors to be performed during acquisition select "Add Online Analysis..." from the context
menu of an "Image format" node in the device tree in Devices. Now in the "Add Online Analysis" dialog
select "Online vectors".

12.5.2 Parameters for Online Vectors


There are three parameters for Online Vectors:

Number of interrogation areas:


This parameter specifies the number of interrogation areas to be analyzed. The default size is 8 x 8. The
minimum is 4 x 4 and the maximum is 40 x 40.

Interrogation area size:


This parameter specifies the interrogation area size. The default size is 32 x 32 pixels. The minimum size
is 16 x 16 and the maximum depends on image size.

Peak Ratio
This is a validation criterion. A vector is only valid if the ratio of the first to the second correlation peak height
is higher than the number specified.
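
As an illustration only (not necessarily the exact DynamicStudio implementation), the equal spacing of interrogation areas and the peak ratio criterion can be sketched as follows; the center formula is an assumption based on the equal spacing described above:

// Centers of an N x N grid of equally spaced interrogation areas in an image of
// width x height pixels (assumed spacing): cx(j) = (j + 0.5) * width / N, j = 0..N-1,
// and correspondingly cy(i) = (i + 0.5) * height / N for the rows.
//
// Peak ratio validation: the vector is accepted only if the highest correlation peak
// is sufficiently larger than the second highest peak.
bool IsVectorValid(double firstPeakHeight, double secondPeakHeight, double peakRatioLimit)
{
    if (secondPeakHeight <= 0.0)
        return true;  // no competing peak found
    return (firstPeakHeight / secondPeakHeight) > peakRatioLimit;
}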

12.5.3 Adding the Focus Assist online analysis function to the setup
The Online Focus Assist is added to the setup as a sub device to the "Image Format" device in the device tree.
1. To add the device right click the "Image Format" device and in the context menu that pops up select
"Add Online Analysis". The following dialog will appear:

Select “Online Focus Assist” and click OK.

2. The Online Focus Assist device will be added just below the Image Format device as shown below:

3. When clicking "Free run" or "Preview" the Online analysis will run. When doing a real acquisition the Online
analysis will not run; only when all data has been collected will the Online analysis be enabled again, so
that when browsing through the data the result from the Online analysis method will be shown.

12.5.4 Understanding the resulting image from Online Focus Assist


The result from Online Focus Assist is an image with the same size and bit depth as the original image.
The image will display gradients found in the image. A bright pixel indicates a high gradient.
Below is an example of an image captured by a FlowSense 2M camera. The camera was aimed at a cal-
ibration target:

The resulting image from Online Focus Assist is shown below:

In the example above the image is focused. If the user moves the focus away from the surface of the tar-
get the pattern will fade away. The more focused the original image is the higher intensities will be seen in
the resulting image.

In the example below the camera is aimed at white particles on a black surface:

The resulting image from Online Focus assist is shown below.

In this example the image is focused at its best. If the user moves the focus away the image will dim out
and be totally black. The more focused the original image is the higher intensities will be seen in the result-
ing image.

The highest possible value in the resulting image occurs if a saturated pixel is next to a totally dark
pixel. The resulting pixel value will then be 255 for 8-bit images and 4095 for 12-bit images.
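
The exact algorithm behind Online Focus Assist is not documented here, but a minimal sketch of a gradient based focus indicator with the behavior described above (bright output at sharp edges, maximum output where a saturated pixel borders a totally dark pixel) could look like this:

#include <cstdint>
#include <cstdlib>
#include <vector>

// Illustrative sketch only: per-pixel gradient magnitude used as a focus indicator.
// 'image' is a row-major gray scale image; 'maxValue' is 255 for 8-bit or 4095 for 12-bit data.
std::vector<uint16_t> FocusIndicator(const std::vector<uint16_t>& image,
                                     int width, int height, int maxValue)
{
    std::vector<uint16_t> result(image.size(), 0);
    for (int y = 0; y < height - 1; ++y)
    {
        for (int x = 0; x < width - 1; ++x)
        {
            int v  = image[y * width + x];
            int dx = std::abs(v - (int)image[y * width + (x + 1)]);   // horizontal gradient
            int dy = std::abs(v - (int)image[(y + 1) * width + x]);   // vertical gradient
            int g  = dx > dy ? dx : dy;
            result[y * width + x] = (uint16_t)(g > maxValue ? maxValue : g);
        }
    }
    return result;  // a saturated pixel next to a dark pixel yields maxValue
}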

12.5.5 Using Online Focus Assist to set up a stereo PIV system


A stereo PIV setup requires the optical arrangement to fulfill the Scheimpflug condition: the object, image
and lens planes have to cross each other on the same line.
In order to fulfill this condition, the camera has to be tilted with respect to the lens as illustrated in the fol-
lowing figure:
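
As a hedged aside (the standard form quoted in the stereo-PIV literature, not taken from this manual), the Scheimpflug condition for a simple lens is often summarized as

  tan(α) = M · tan(θ)

where α is the tilt angle between the image (sensor) plane and the lens plane, θ is the angle between the object plane and the lens plane, and M is the magnification. Since M is usually well below 1, the required sensor tilt is much smaller than the viewing angle.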

The use of the Online Focus Assist makes the alignment of a stereo system easier and faster. The fol-
lowing describes how to use the Online Focus Assist for such a set-up.

o Open the lens aperture at maximum


This reduces the depth of field for more precise alignment
o Ensure that the illumination of the target is homogeneous

o Adjust the LUT of both the camera image and the Focus Assist displays
Double click on both displays

o Use grid display (Right click on the display, Options/Grid)


Set a grid size of half the width of the camera chip.

o Set the camera in the direction to match the desired Field of View

o Set the Scheimpflug angle to the maximum opposite position


Setting the Scheimpflug in the opposite direction further reduces the part of the image that will
be in focus:

o Align the focused area with the middle of the image
Use the grid as a reference

o Set the correct Scheimpflug angle


The histogram of the focus assist, displayed in Log scale, can be used to track the correct posi-
tion of the Scheimpflug mount.
The histogram is usually mainly composed of two values that represent areas with no gradient infor-
mation (the left part of the histogram) and areas with high gradients (the right-hand side peak in the
histogram). Therefore, the correct Scheimpflug angle is set when the second peak reaches the
rightmost position, see below.

The setup is now aligned and ready for calibration:

12.5.6 Parameters for the device Online Focus Assist

Pixel dilation Diameter


Only one parameter exists for the Focus Assist device: Pixel dilation Diameter.
The function of this parameter is to expand high-intensity pixel values to neighbouring pixels.
The number that can be selected is the diameter, in pixels, of this expansion.
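
A hedged sketch of the idea using SciPy (the helper below is illustrative only and not necessarily how DynamicStudio implements the dilation):

  import numpy as np
  from scipy.ndimage import grey_dilation

  def dilate_focus_image(focus_image, diameter):
      # Expand bright pixels to their neighbours using a roughly circular
      # footprint whose width equals the selected diameter (in pixels).
      r = diameter / 2.0
      y, x = np.ogrid[-int(r):int(r) + 1, -int(r):int(r) + 1]
      footprint = (x * x + y * y) <= r * r
      return grey_dilation(focus_image, footprint=footprint)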

12.6 Online Light-Field


12.6.1 Online data analysis
One of the main advantages of light-field velocimetry over tomographic velocimetry is the ability to view depth infor-
mation in near real time. Once calibration is completed an online process can be started. In the acquisition
setup, under the VolumeSense camera, a display option called "Light-field online" can be added. This display
can only function when linked to a light-field calibration dataset. Under properties for the light-field online
a browser is provided for quick selection of a calibration record.

12.6.2 Acquiring data to disk


Capture and storage of images for light-field data is identical to other cameras. Please consult the
DynamicStudio manual for more information.

12.7 PIV Setup Assistant


The PIV Setup Assistant will help you set up your PIV experiment by suggesting a suitable value of Δt
(time between laser pulses) based on information provided by you.

For a given camera, lens and measuring distance it will also tell you how big the field of view is and how
large a given Interrogation Area will be in physical space.

The PIV Setup Assistant looks like this:

You can select the camera from a drop-down list of known cameras, or create your own camera by select-
ing the 'Custom Camera' in the list.
For a 'Custom Camera' you must specify sensor size (Width x Height) as well as pixel pitch (distance
between neighbor pixels) in microns:

For predefined cameras in the drop-down list the sensor size and pixel pitch are already known and the physical
size of the sensor is determined easily.
In the example above 2448 x 2050 pixels at 9 μm/pixel gives a physical sensor size of 22.03 x 18.45 mm.
The focal length of the lens divided by the camera-object distance determines the magnification of the sys-
tem as M = f/Zo.
Combining the physical sensor size with this magnification, the Field of View is determined easily, as is
the physical size of an interrogation area when projected onto the light sheet in object space.
Given the f-number of the lens and the wavelength of light used, the PIV Setup Assistant can also predict the
focal depth of the optical system and the diffraction-limited spot size, which usually affects particle image sizes
more than the physical size of the particles.
The maximum out-of-plane velocity combined with the light sheet thickness limits the time between pulses to
reduce out-of-plane loss-of-pairs between two consecutive images.
Similarly, the maximum in-plane velocity combined with the intended Interrogation Area size limits the time
between pulses to avoid in-plane loss-of-pairs.
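
The sketch below reproduces these calculations in Python under stated assumptions (the lens, distance, velocities and the 25% "quarter rule" constants are example values, not defaults of the PIV Setup Assistant):

  # Sensor geometry (matches the example above)
  n_px   = (2448, 2050)                          # resolution in pixels
  pitch  = 9e-6                                  # pixel pitch in metres
  sensor = (n_px[0] * pitch, n_px[1] * pitch)    # 22.03 mm x 18.45 mm

  # Magnification and field of view (using M = f/Zo as in the text above)
  f, Zo  = 60e-3, 0.5                            # focal length and camera-object distance (examples)
  M      = f / Zo
  fov    = (sensor[0] / M, sensor[1] / M)

  # Interrogation area projected into the light sheet
  ia_px  = 32
  ia_obj = ia_px * pitch / M

  # Hedged "quarter rule": limit particle displacement to ~25% of the IA
  # (in-plane) and of the light sheet thickness (out-of-plane)
  u_inplane, w_outofplane, sheet = 2.0, 0.5, 1e-3   # m/s, m/s, m (examples)
  dt = min(0.25 * ia_obj / u_inplane, 0.25 * sheet / w_outofplane)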

13 Devices
Devices is the term for all external hardware connected to DynamicStudio. This includes cameras, las-
ers, synchronizers, AD boards etc.

13.1 Lens Mounts


Special lens mounts exist for lenses that can be controlled via software. The following is a description
of these devices.

13.1.1 Remote lens control


The Remote lens control is based on a lens mount from Birger Engineering. This special lens mount is capa-
ble of controlling focus and aperture for Canon lenses supporting remote control.

Installing the Remote lens control communication driver


The Birger Engineering lens mount is delivered in two different versions: one with a USB interface and
another with a COM port interface.
The lens mount with COM port interface does not require any special driver installation.
For the USB version a special driver from Birger Engineering has to be installed. The driver can be
found on the installation DVD or in the download section of the Dantec Dynamics A/S home page.
The USB version of the Birger mount is simply a COM port version equipped with a USB to serial port con-
verter.
This means that when the USB version of the Birger mount is connected to the PC, it will appear as a
serial COM port in the operating system.

Detection and appearance of the Remote lens control


When the DynamicStudio acquisition system is started all serial ports on the PC are displayed in the
device tree. After system initialization detection of devices will start.
Once detected the Remote lens control will appear in the device tree under one of the COM ports:

The Remote lens controller will also be displayed in the Synchronization Cables Diagram:

Associating Camera and Remote lens controller
To associate a Remote Lens controller drag the Remote lens controller on top of the camera device in the
Synchronization cables diagram. Doing this will virtually attach the Remote lens controller to the camera.
When the Remote lens controller is attached to a camera the two devices will follow each other when
moved around in the Synchronization Cables diagram.
To detach the Remote lens controller from a camera, right click the Remote lens controller and select
"Detach device".
This can also be done via properties for the Remote lens controller.

Properties for the Remote lens controller


The Remote lens controller has the following properties:

Control Properties

l Show control dialog


By Selecting this property and clicking the “...” button the dialog for controlling the Lens will be dis-
played.

Device information and version

l Full name
This property displays the device name and version
l Serial number
Displays the serial no. of the Birger Lens Mount
l Firmware version
Displays the Firmware version

Setup

l Connected Lens type


Displays the lens type connected.
l Camera attached
Here it is possible to identify the camera to which the Lens mount is attached (see Attaching Lens
Mount to camera).

Controlling the lens


To control the lens right-click the Remote lens controller in the Synchronization cables diagram and select
"Select lens control dialog", or go to properties for the Remote lens controller and select the property
"Show control dialog" and click the "..." that appears. These actions will bring up the LensControl dialog as
shown below:

Dialog functionality:
Main dialog

l Lens type:
Displays the lens type connected.
l Lens status:
1. Manual Focus: Shows if the Lens is set to Manual Focus.
2. Sticky lens: Enables or disables special functionality in the Remote lens controller for con-
trolling “sticky” lenses.

Focus

l Allow Control:
If unchecked lens focus cannot be changed. This is useful to avoid accidentally changing focus.
If checked lens focus can be changed.
l Keyboard step size:
Controls the step size when the keyboard is used to change the focus.
l Re-Learn focus range:
When clicked the lens focus range is learned by moving the focus from minimum to maximum. It
will not be possible to change the focus on the lens before the Birger mount knows the focus
range.
l Focus Preset
When the user has found a good focus he can click this button. This will save the current focus
position.
l Recall Preset:
When clicked the focus will return to the saved focus position (saved by clicking Focus Preset)

Aperture

l Allow Control:
If unchecked lens aperture cannot be changed. This is useful to avoid accidentally changing the
aperture.
If checked lens aperture can be changed.
l Initialize:
By Clicking this button the aperture range is read from the lens. After that it will be possible to
change aperture.
l Aperture Preset:
When the user has found a good aperture setting he can click this button. This will save the cur-
rent aperture position.
l Recall Preset:
When clicked the aperture will return to the saved aperture position (saved by clicking Aperture
Preset).

13.1.2 SpeedSense 90xx Canon lens mount


SpeedSense 90XX cameras can be delivered with a Canon lens mount. This lens mount offers the abil-
ity to control focus and aperture from the software.

Detection of the special Lens mount


Once the camera has been detected the system will check if the camera is equipped with a Canon lens
mount and, if so, whether a lens is attached to the camera.
If a lens is detected a Remote Lens controller device will be created as a child device for the camera, just
like the Image buffer and Image format devices:

In Synchronization cables connection diagram the Remote lens controller will also appear as a child
device and attached to the camera as shown below:

Right clicking the Remote lens controller will bring up a context menu from where the Lens control dialog
can be accessed.

If the lens is detached from the camera the Remote lens controller device will disappear from the device
tree and from the Synchronization cables connection diagram.

Parameters for the Remote Lens controller

The device properties or parameters for the SpeedSense 90xx remote lens controller are limited. Essen-
tially only two parameters are meaningful:

Show control dialog


-This property allows the user to bring up the lens control dialog

Connected Lens type


-Displays information on what type of lens is connected.

Controlling the attached lens

At the top the detected lens type is displayed.

The controls for handling focus are very limited. It is possible to click either the plus or minus button to
move the focus closer or further away. The step size in the selection box refers to a stepper motor inside
the lens and cannot be easily interpreted as a change (in mm) of the focal distance.

The aperture is controlled via the slider at the bottom of the dialog. Here the total range is displayed as
well as the currently selected aperture.

13.2 1-Axis Scheimpflug mounts


13.2.1 Instructions for mounting the FlowSense EO cameras 102
13.2.2 Instructions for mounting the SpeedSense 1040 cameras 103

This section describes assembly and alignment of the 1-axis Scheimpflug mount series

1. Mount the camera as described in the sections above


2. Focus the camera on a surface. This can be a target plate or similar.
3. Move the Scheimpflug angle from one side to the other. Observe the center of the image. If an
image movement is observed the camera position has to be changed to remove this uncertainty.
A few pixels (up to 4 is ok) will not disturb finding the right Scheimpflug angle later on.
4. Move the camera in one direction until the image only moves within the uncertainty. If the movement
of the image becomes larger, change the direction you move the camera.

Forward movement of the camera:


To move the camera forward, mount the fine adjustment tool on the slider. Tighten the bolts that hold the fine
adjustment tool and slightly loosen the bolts that hold the camera, so that it is possible for the adjustment
tool to move the camera. Turn the screw on the fine adjustment tool to the right to move the camera forward
towards the lens.

13.2.1 Instructions for mounting the FlowSense EO cameras


Remove the original C-mount front from the camera and replace it with the new front, using the same
screws.
Mount the bellows using the retaining ring and four M4x6 bolts.

Place the assembly on the rail, and slide the camera forward to attach the lens mount to the bellows using
four M3x12 mm bolts.
Make sure the bellows is placed on the inside of the retaining ring, and on the outside of the front ring.

To make sure the image plane is correctly positioned, measure from the back of the rail to the rear of the
carrier, and from the pod plate to the lens mount.
Camera Lens mount height Distance from rear
FlowSense EO 16.6 mm 93.2 mm

13.2.2 Instructions for mounting the SpeedSense 1040 cameras


Remove the original F-mount front from the camera, and place the backup front, bellows, and retaining ring
with four M5 bolts.

Mount the bracket under the camera using two M5 UH bolts, and mount the carrier using 1/4" UNC bolts.

Place the assembly on the rail, and slide the camera forward to attach the lens mount to the bellows using
four M3x12 mm bolts.
Make sure the bellows is placed on the inside of the retaining ring, and on the outside of the front ring.

To make sure the image plane is correctly positioned, measure from the back of the rail to the rear of the
carrier, and from the pod plate to the lens mount.
Camera Lens mount height Distance from rear
SpeedSense 1040 21.6 mm 101.4 mm

13.3 Image Intensifiers


The image intensifier is the most crucial component of an intensified camera system, besides the sensor
itself. It allows for an optimum adaptation of the camera to any specific application. The main function of the
image intensifier is the amplification of the incoming light signal. This enables the camera to take images
at very low light conditions and/or at very short exposure times.
For more information see http://en.wikipedia.org/wiki/Image_intensifier

l See "Custom Image Intensifier" on page 106


l See "Hamamatsu Image Intensifier" on page 108

13.3.1 Custom Image Intensifier


A custom image intensifier is a device where the user has full control over all signal parameters. The cus-
tom image intensifier will work for most intensifiers on the market.
It is possible to manually add a custom intensifier to the device tree by right-clicking the Acquisition agent
and selecting 'Add New Device...'.
It is possible to add as many intensifiers as needed to reflect the image acquisition system used.

Properties of the intensifier are shown below and are arranged in two groups. The Control group is for con-
trolling general properties of the intensifier and the Timing group is for controlling signal properties.

Gain
This parameter does not actively control the gain, but can be set to reflect the chosen gain of the image
intensifier. For more information see "Camera Attached".

Phosphor type or Decay Time


The intensifier uses a phosphor screen to convert the electrons from the MCP (Micro Channel Plate) back
to photons. Depending on the phosphor type used in the image intensifier there is a certain decay time
associated. This decay time will influence when the intensifier can be gated again without seeing a ghost

image from the previous exposure. It is possible to select between two predefined phosphor types (P43 or
P46) or manually enter a decay time.

Camera Attached
This is a drop-down list that contains all cameras detected by the system. From here it is possible to
attach an intensifier to a camera. The gain used for the image intensifier will automatically be saved with
the images from the selected camera. To detach the intensifier from a camera select ‘None’.
An attached image intensifier will look like this in the ‘Synchronization Cables’ diagram:

It is also possible to attach an intensifier to a camera from the ‘Synchronization Cables’ diagram. This is
done by dragging and dropping the intensifier on top of the camera. To detach the intensifier drag it away
from the camera.

Gate Pulse Width


This is the width of the gate pulse.

Second Gate Pulse Width


When running in double frame mode this is the width of the second gate pulse. It is possible to select
‘Same as first’ or enter a specific width.

Delay to open
An internal delay from when the image intensifier gets a gate signal until it actually opens (hardware specific
value, to be found in the image intensifier manual).

Gate Pulse Delay


The time delay of the gate pulse relative to the light pulse of the system (T0). This value must be negative
if the image intensifier is to be open when the first light pulse is fired.
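
One consistent (hedged) reading of these two properties: if the gate pulse is issued at T0 + Gate Pulse Delay and the intensifier needs "Delay to open" before it is actually open, then the intensifier is open at the first light pulse only when

  Gate Pulse Delay + Delay to open ≤ 0,   i.e.   Gate Pulse Delay ≤ -(Delay to open)

which is why a negative delay is required.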

Gate Min. Off time


The minimum time the intensifier must be off (hardware specific value to be found in the intensifier man-
ual).

Gate Min. On time


The minimum time the intensifier must be on (hardware specific value to be found in the intensifier man-
ual).

Timing diagram
Below is shown a complete system with a laser, a camera and an image intensifier:

The zoomed in timing diagram for an intensifier is shown below. The dark blue part of the 'Phos-
phorescence time' shows the time that the intensifier is open and the light blue part shows the time for the
phosphorescence intensity to decay to 1%. The red dotted line is T0 (position of first laser pulse).

The timing diagram above is a result of the properties shown below:

Multiple image intensifiers connected to the same output


It is possible to attach two or more identical image intensifiers to the same output on the timing device. In
that case the intensifier parameters will be synchronized.

13.3.2 Hamamatsu Image Intensifier


The following Hamamatsu image intensifiers can be fully controlled and auto detected by DynamicStudio:

l C9546 Series
l C9547 Series

The properties of a Hamamatsu image intensifier are shown below.

Operation Mode
Operation mode can be set to Normal, Gated or Auto.

l Normal means that the image intensifier will be held open while acquiring.
l Gated, means that an external gate signal must be applied to the intensifier typically from the syn-
chronizer.
l Auto means that in free run it will operate as Normal and in Preview or Acquire mode it will be
Gated.

Gain
This property controls the physical gain of the image intensifier. It is a relative value from minimum gain (0)
to maximum gain (999). An important note is that the gain adjustment on the intensifier hand control does
not operate in the same range as the gain from DynamicStudio. The gain from the hand control operates
from 600 to 999, meaning that if it is set to 0 the real gain is 600. For more information see "Gain" on page 106.

Phosphor type or Decay Time


See See "Phosphor type or Decay Time" on page 106

Reset Warning/Error State


The Hamamatsu image intensifier needs to be reset if a warning has been issued or an error has occurred.
This drop-down reveals a button that has to be pressed to perform a reset.

Camera attached
See See "Camera Attached" on page 107

Serial number
An internal number identifying the image intensifier.

Timing parameters
For the timing parameters see "Gate Pulse Width" on page 107

Known issues
If the intensifier is shut down while connected to DynamicStudio, there is a high risk of a total sys-
tem crash resulting in a blue screen of death (BSoD). To prevent this, disable the device first or close
down DynamicStudio before turning off the intensifier.

13.4 Analog Input


13.4.1 Analog Input
DynamicStudio supports a number of analog input options for simultaneous sampling of analog signals
and recording of images.

The following analog boards are supported:


Device Sample rate Resolution Voltage range Channels
Analog Input Option 10 kS/s 12 bit -10V to +10V 8 channels
Fast Analog Input Option, 4 ch. 2.8 MS/s 14 bit -10V to +10V 4 channels
Fast Analog Input Option, 8 ch. 2.8 MS/s 14 bit -10V to +10V 8 channels
Ultra Fast Analog Input Option 2 GS/s 8 bit -5V to +5V 2 channels

13.4.2 Installation
All analog input devices are auto detected by DynamicStudio.

Analog Input Option


This device is connected to the PC via USB and has direct BNC access.

Fast Analog Input Option


These devices are placed in a PCI slot in the PC. A special cable is connected from the board to a ter-
minal box to get BNC access.

Ultra Fast Analog Input Option


This device is placed in a PCI slot in the PC and has direct BNC access.

13.4.3 Connecting hardware for acquisition

Analog Input Option


Connect the "Trigger in" connector to the device generating the start measurement pulse train.

Fast Analog Input Option


Connect PFI0 in the "Trigger/Counter" group to the device generating the start measurement pulse train. If
for example this is the Dantec Timer Box, connect PFI0 to any available output on the Timer Box. Depend-
ing on the analog input device you may use AI0-AI3 (4 channel model) or AI0-AI7 (8 channel model) for the
analog signal inputs.

Ultra Fast Analog Input Option
Connect TRIG to any available output on the Timer Box and use CH0 and CH1 for the analog signal input.

13.4.4 Preparing software for acquisition


All analog input devices have the same fundamental user interface. In the Synchronization Cables view
the devices appear with only one input pin. This should be connected to the device that generates the
start-of-measurement pulse train. In the figure below the analog input device is connected to a Dantec timer box
and the timer box will deliver the pulse train.

With the System Control settings shown below the analog input device will be triggered 100 times at a rate
of 40 Hz:

If the analog input device properties are set as shown below then every pulse from the timer box will
trigger the analog input device to acquire 5000 samples at a rate of 2 MHz.
The Sample delay property is used to add a delay relative to the first laser light pulse. In this example the
delay is 2 μs, meaning that the analog input device will not start sampling until 2 μs after the laser has
fired the first light pulse. The delay can be negative, in which case sampling of the analog input will start
before the first laser pulse is fired.
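
As a worked example of the assumed settings above: 100 trigger pulses at 40 Hz with 5000 samples per trigger at 2 MHz give 5000 / 2 MHz = 2.5 ms of analog data per image, 100 x 5000 = 500,000 samples in total, and an acquisition lasting 100 / 40 Hz = 2.5 s.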

Each channel has its own properties. These properties include the following:

l Enabled. This enables or disables the channel.


l Minimum input voltage. Allows the user to specify the minimum input voltage that should be meas-
ured. If a better resolution is needed, a higher voltage can be specified.
l Maximum input voltage. Allows the user to specify the maximum input voltage that should be
measured. If a better resolution is needed a lower voltage can be specified.
l Channel name. Allows the user to give a meaningful name to the analog channel. This name will
be used in the resulting plot diagram during and after measurement.

Ultra Fast Analog Input Option


This analog input device has some additional parameters as shown below:

The additional parameters are:

l Fast Sampling Option. This enables the device to sample channel #1 at a speed of 2 GS/s. When
fast sampling is enabled channel #2 will automatically be disabled.
l Input impedance. This can be either 50 Ω or 1 MΩ. The 1 MΩ path is required in applications that
require minimal loading. The 50 Ω inputs are protected by a thermal disconnect circuit. If however
a voltage change is large and sudden enough, the protection circuits might not have enough time
to react before permanent damage occurs. It is therefore important that you observe the specified
maximum input voltages, especially when the input impedance is set to 50 Ω.
l Input coupling. This can be either DC or AC. Select AC-coupling if the input signal has a DC com-
ponent that you want to reject.
l Minimum input voltage. This will show the minimum voltage that can be measured with the cur-
rent voltage range and voltage offset.
l Maximum input voltage. This will show the maximum voltage that can be measured with the cur-
rent range and offset.
l Voltage range. This specifies the voltage range of the input signal. Depending on what the voltage
offset has been set to, the minimum and maximum voltage that can be measured will change.

l Voltage offset. This is an offset that can be applied so that the minimum and maximum voltage that
can be measured are increased or decreased.
l Probe attenuation. If your probe attenuation is 10:1 and your voltage range is 10 V, the analog
board is set to measure a 1 Vpk-pk signal. The data returned is 10 Vpk-pk.

13.4.5 Acquiring data


During free run and preview it is not guaranteed that the number of sample waveforms will be the same as
the number of images recorded. To get the same number of analog data as there are images, perform a
real acquisition.

13.5 Cameras

13.5.1 HiSense MkI


The HiSense MkI camera (originally named the HiSensePIV/PLIF camera) was one of the first cameras intro-
duced by Dantec Dynamics.

Main specification:

l 1280x1024 pixels.
l 12 bits/pixel.
l maximum frame rate: 9 Hz single frame / 4.5 Hz double frame.
l Pixel binning: No binning, 2x2 and 4x4.

The HiSense camera is connected as shown below:

The frame grabber must be a National Instruments PCI-1424 (for 32-bit RS422 and TTL image Acqui-
sition).

Known issue 

l In double frame mode the very first image pair in an acquisition can sometimes be out of sync.
The rest of the acquired image pairs are OK.
There is presently no solution for this issue.

13.5.2 HiSense MkII Camera


The HiSense MkII camera uses a high-performance progressive scan interline CCD chip, with typically
72% quantum efficiency at 532 nm. This chip includes 1344 by 1024 light sensitive cells and an equal
number of storage cells.
In cross-correlation mode the first laser pulse exposes the CCD, and the resulting charge is transferred as
the first frame to the storage cells immediately after the laser pulse. The second laser pulse is then fired to
expose the second frame. The storage cells now contain the first frame and the light sensitive cells the
second. These two frames are then transferred sequentially to the digital outputs for acquisition and proc-
essing. The charges in the vertical storage cells are transferred up into a horizontal shift register, which is
clocked out sequentially line-by-line through the CCD output port.
In relation to PIV and planar-LIF experiments, the Dantec HiSense MkII camera has a number of benefits
compared to other cameras with:

l Very high light sensitivity (typically 72% quantum efficiency at 532 nm)
l Extremely low background noise

The high dynamic range is a valuable flexibility in the practical performance of the PIV and LIF exper-
iments (although most PIV experiments do not require 12-bit resolution, LIF does in terms of scalar res-
olution, precision and accuracy in some cases). Also, there is less need to adjust the laser power and the
seeding, simply because a wider range of input intensity levels provide successful results. Likewise, if
problematic windows, dirt or other factors produce uneven illumination over the area, there is less loss of local
information, because the signal is still received at the CCD due to the higher dynamic range.

13.5.3 HiSense NEO and Zyla Camera


The HiSense NEO and Zyla cameras are sCMOS cameras with a resolution of 2560x2160 pixels, 16 or 12
bits per pixel. The Neo camera has 4 GByte of memory used for image handling and image buffering. The
maximum frame rate is 50 frames per second in single frame mode.
The HiSense NEO camera is delivered with a BitFlow Neon frame grabber, and the Zyla camera with a Bit-
Flow Karbon frame grabber. These frame grabbers are the only frame grabbers supporting the cameras.

The Neo camera is equipped with pipes for water cooling. This water cooling is not used.

The HiSense Neo and Zyla camera transport case includes everything necessary for installing and running
the camera under DynamicStudio. The transport case also contains a USB cable. This cable is
only used for updating firmware in the camera. Leave the USB cable in the case, and store the transport
case for later use in case the camera is not used for longer periods of time or has to be shipped.

Frame grabber and camera
There is one piano-type switch block on the Neon and Karbon frame grabber with two switches. These are
used to identify individual boards when there is more than one board in a system. The switch settings are
read for each board from DynamicStudio and used to identify the board.

Set the switch for each frame grabber in the PC:

Frame grabber 1: 1:down 2: down (As also illustrated in the image above)
Frame grabber 2: 1:up 2: down
Frame grabber 3: 1:down 2: up
Frame grabber 4: 1:up 2: up

Connecting the camera


The camera and frame grabber are connected with Camera Link cables. Before connecting the two devices,
always make sure to disconnect the power cables from both the PC and the camera.

Synchronization cables
The camera is delivered with a special synchronization cable consisting of one TTL/DAC 26 D type con-
nector in one end and 4 BNC plugs in the other end.

Label and function description:

l EXTERNAL TRIGGER (Input: Used for triggering the camera. Connect this cable to the Syn-
chronizer in the system)
l FIRE (Output: High while exposing)
l SHUTTER (not used)
l ARM (not used)

Detecting the camera


It is important that the camera is connected and turned on before starting up DynamicStudio, otherwise
the camera will not be detected. While DynamicStudio is running, the camera must not be turned off. If the

camera is turned off, it enters an unknown state making DynamicStudio unstable. The problem will remain
even if the camera is turned back on while DynamicStudio is still running.

After entering the DynamicStudio Acquisition mode, the system will first detect the BitFlow frame
grabber. After this the frame grabber device will try to detect the camera. When the camera is detected the
Device tree will contain a 'BitFlow Neon' branch like this:

The camera is sensitive to grounding issues. If the system is not properly grounded the camera might not
be detected. Make sure that all equipment in the system is grounded.

Acquiring images
The HiSense NEO camera acquires images in a special way compared to other cameras. To acquire a sin-
gle image two sets of images are transferred from the sensor to the camera memory.
Immediately after reset of the entire sensor a reference frame is acquired. For this frame the exposure
time is extremely short so it measures the noise level of the sensor and is later used for noise reduction.
After the reference frame is acquired the actual signal frame is acquired.
The readout of the reference puts a lower limit on the exposure time of the actual image.
When the signal frame has been transferred from the sensor to the camera memory the two images are
combined to produce the resulting images.

Single frame mode


In single frame mode the camera acquires one frame for each trigger as shown below:

As shown the reference frame is read out of the sensor during the exposure of the signal frame. This
means that the minimum exposure time will be the read out time of one frame corresponding to ~10ms.

118
Double frame mode
In double frame mode the camera will acquire one signal frame for each trigger pulse. The time between
each pulse defines the exposure time. The camera is triggered twice in double frame mode giving the dou-
ble exposure.
This means that the camera is exposing all the time from the time the camera is triggered the first time
until end of acquisition as shown below:

As seen above the minimum exposure time becomes ~20ms, since it depends on the readout of both the
reference frame and the signal frame.
(Exposure time Signal Frame 1 and Exposure time Signal Frame 2 are combined into one double
image. Exposure time Signal Frame 3 will be the first frame of the next double frame.)

Parameters for the camera

Fan speed
Configures the speed of the fan in the camera. There are three possibilities:

l Off
Although the fan can be turned off, this is not recommended. The camera can overheat, but before
this happens an acoustic alarm will sound. Turn off the camera and let it cool down.
l Low
The camera will control the fan speed.
l High (default)
The fan speed is set to the maximum.

Camera pixel clock


The pixel clock is set to maximum for the camera as default. It is possible to lower the pixel clock. This
will reduce the maximum frame rate, but also lower the image noise.

Well capacity
This parameter is only visible in 12-bit per pixel mode. Well capacity can be set to High or Low.
High well capacity means that each pixel cell can hold a larger amount of photons/electrons, whereas
Low well capacity can hold fewer, as can be seen below:
Bit depth and Well capacity     Sensitivity e-/ADU (typical)   Data Range   Effective pixel saturation limit / e-
12-bit (high well capacity)     7.5                            12-bit       30000
12-bit (low well capacity)      0.42                           12-bit       1700
16-bit                          0.45                           16-bit       30000
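
As a hedged consistency check of the table: multiplying the full-scale data range by the sensitivity roughly reproduces the effective saturation limit, e.g. 4095 ADU x 7.5 e-/ADU ≈ 30,700 e- for 12-bit high well capacity, 4095 x 0.42 ≈ 1,700 e- for 12-bit low well capacity, and 65,535 x 0.45 ≈ 29,500 e- for 16-bit.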

Camera image buffer


The BitFlow frame grabber is capable of transferring 15 frames/sec (full frame) from the camera to the PC
memory. If the acquisition frame rate is higher than 15 frames/sec images will buffer up in the camera's
internal memory.
If an acquisition is done at a rate where images are buffered in the HiSense NEO camera, and the user
wants to stop the acquisition by clicking 'Stop', the system will stop, but the camera will continue
transferring images to the PC for a while after Stop has been pressed. This image transfer can take some
time, and if you do not want to use the images buffered in the camera you can stop this process by
clicking 'Abort/Reset'.

Frame rate
By changing the ROI a higher frame rate can be achieved.

l Changing the height will have an effect. Below are some theoretical frame rates. The assumption is
that all ROIs are centered on the sensor (this provides the fastest frame rates due to the design of
the sensor).

l Changing the width of the ROI will not affect the frame rate when storing the data to the on-board
camera memory; however there will be a slight increase in the throughput to the card simply
because there is less data to transfer.

ROI Height (ROI Centered on Sensor)   Current frame rate (Hz)   Expected frame rate (Hz)
2160                                  28.2                      30.8
1040                                  74.6                      75.0
512                                   152                       152
256                                   300                       301
128                                   358                       592
48                                    566                       1396

13.5.4 FlowSense 2M Camera


The FlowSense 2M camera uses a high-performance progressive scan interline CCD chip but with lower
sensitivity than the HiSense MkII camera (approximately 75% at 532 nm and approximately 60% in the
yellow-orange region of the light spectrum). The chip includes a much larger number of storage
cells, with 1600 by 1200 light sensitive cells, which greatly limits the performance of LIF results in terms
of precision of the scalar property measured. In relation to PIV experiments, the FlowSense 2M camera
has the benefit of recording in 8- or 10-bit data resolution, high light sensitivity at 532 nm (about 75% of the
HiSense MkII) and low background noise. The 8- or 10-bit dynamic range is a valuable flexibility in the
practical performance of the experiment.

Tips and Tricks

Black and White Level Adjustment


The FlowSense 2M camera is a two TAP camera; this means that there are two AD converters in the
camera, each one responsible for one half of the total image.
Due to small differences in the analogue circuits surrounding the two ADCs and the CCD, the image
produced can have different intensities for each half image. This note describes how to adjust the
analogue settings to bring the black and white level intensities as close as possible to one another.
Procedure:

1. Turn the camera on.


2. Wait until the camera has reached its working temperature.
3. Put on a cap in front of the camera so that no light reaches the CCD sensor.
4. In DynamicStudio enter Acquisition mode, and wait until the camera has been detected.
5. Black Level Adjustment: In the Device Properties for the "FlowSense 2M" camera, enter the com-
mand "BLF=0;" in Arm command.

6. This command adjusts the black level of the right side of the image. The value entered after BLF= can
be in the range from -512 to 511. Ex: Arm String: BLF=-100; A higher value gives a higher black
level.
7. In System control click "Free Run".
8. Examine the image captured. If you can see any intensity difference near the center of the image
repeat from step 5 with a new value. (Use "Display options" to change the lookup table for the
image, to increase the intensity shown.)
9. Gain Balancing: Remove the cap in front of the camera.
10. In the Device Properties for the "FlowSense 2M" leave the command and setting found during
black level adjustment and add the command "GAF=0;", ex: Arm command: BLF=34;GAF=0;

This command adjusts the gain level of the right side of the image. The value entered after GAF=
can be in the range from -2048 to 2047. A higher value gives a higher gain.
11. Examine the image captured. If you can see any intensity difference near the center of the image
repeat from step 10 with a new value.

The camera is now adjusted.

13.5.5 FlowSense 4M Camera


The FlowSense 4M camera uses a high-performance progressive scan interline CCD chip with a sen-
sitivity similar to the FlowSense 2M camera (approximately 55% quantum efficiency at 532 nm).
The camera resolution is 2048 by 2048 light sensitive cells and an equal number of storage cells. In rela-
tion to PIV experiments, the FlowSense 4M camera has the benefit of recording in 8- or 10-bit data res-
olution. The 8- or 10-bit dynamic range is a valuable flexibility in the practical performance of the
experiment.

The FlowSense 4M MkII is similar to the FlowSense 4M but can do 12 bits per pixel. Apart from this the cam-
era is slightly more sensitive compared to the FlowSense 4M.

Scanning format
It is possible to reduce the size of the image by selecting a Scanning format. By reducing the image size the
trigger rate can be increased. The table below shows the scanning area size and start position relative to
the full image.

Scanning area   Start Line (from top of image)   Effective Area
Full Frame      1                                2048 x 2048
1/2 Frame       525                              2048 x 1000
1/4 Frame       775                              2048 x 500
1/8 Frame       901                              2048 x 250

Tips and Tricks

Power-up Issue
There is an issue powering up some versions of FlowSense 4M cameras. It is often not enough just to
remove the power plug from the camera; to avoid this problem unplug the entire power supply.

Black and White Level Adjustment FlowSense 4M MKII


The FlowSense 4M MkII camera is a two TAP camera; this means that there are two AD converters in
the camera, each one responsible for one half of the total image.
Due to small differences in the analogue circuits surrounding the two ADCs and the CCD, the image
produced can have different intensities for each half image. This note describes how to adjust the
analogue settings to bring the black and white level intensities as close as possible to one another.

Two procedures exist; the first one described just initiates the balancing, after which the camera will per-
form the adjustments needed.

Auto Balancing procedure:

1. Turn the camera on.

2. Wait until the camera has reached work-temperature.

3. Cover the lens of the camera and click "Free run".

4. In Device Properties for the camera enable "Enable black balance". This feature will try to adjust
the two image areas so that the black level is optically equal. The parameter will reset to the "Disable"
state when done.

The best way to examine the black level adjustment is to use the LUT control from the context
menu of the display -> Display option->Colors.
(The adjustment may take some time. If you are satisfied with the black level adjustment and
the parameter has not yet been set to Disable by the system, you can stop the process
by disabling "Enable black balance".)

5. Remove the cap in front of the lens and place a white surface in front of the camera. Make sure
that pixel values are not above 80% of full scale. This feature will try to adjust the two image areas
so that the gain level is optically equal.
Enable "Enable Gain balance". The parameter will reset to the Disable state when done.

Manual Balancing procedure :

1. Start up DynamicStudio and enter Acquisition mode. Click "Free Run". Let the camera run in free
run until the temperature of the camera has stabilized.

2. Put a cap on the camera.

3. Disable Auto Gain Control (Done in Parameters for the camera)

4. Open "Color map and Histogram" by double clicking the Image display. Right click the histogram
window and select "Auto Fit to Lut"

5. Expand the property entry "User Gain control" (it is an advanced property, so if you cannot see
the property right click the property window and select Advanced View)

6. Set "Master voltage offset" and "Slave voltage offset fine tune" to 0

7. Adjust the value "Master voltage offset" so that "maximum value"(in Statistical info of Color map
and Histogram window) is around 30. (This is an iterative process, start out with the value 1000.)

8. Adjust the value "Slave voltage fine tune" so that the two image half's have the same intensity.
(This is an iterative process, start out with the value 2000.)

9. Take off the cap, and either take the lens of or put a White Balance filter in front of the lens.

10. Adjust the exposure time or open the aperture so the the mean intensity in the image is around 2/3
of the full range.

11. As a start value set the Master Gain to 2000, then go back and do 10. (this is an iterative process.
You will have to find a value that makes sense. No pixels values must be saturated)

12. Adjust "Slave gain fine-tune"(start out with a value of ex 1250) so that the two image half's have
the same intensity. (This is again an iterative process)

13. Put a cap on the lens and check and adjust the "Slave voltage offset fine tune" so that the two
image half's have the same intensity.

14. Take off the cap, and either take the lens of or put a White Balance filter in front of the lens.

15. Adjust "Slave gain fine-tune" so that the two image half's have the same intensity.

13.5.6 FlowSense EO Camera series


The FlowSense EO cameras are advanced, high-resolution, progressive scan CCD cameras. They are
built around SONY’s and KODAK’s line of interline transfer CCD imagers.
FlowSense EO cameras are feature rich, have small size, very low power consumption, low noise, and
efficient and optimized internal thermal distribution.
The FlowSense EO cameras feature programmable image resolution, frame rates, gain, offset, pro-
grammable exposure, transfer function correction and temperature monitoring. A square pixel provides for
a superior image in any orientation.
The cameras include 8/10/12(/14) bit data transmission. The adaptability and flexibility of the cameras
allow them to be used in a wide and diverse range of applications.

FlowSense EO Camera specifications

Model                 Resolution (pixels)   Bit Depth (bits)        Minimum Exposure   Maximum FPS      Pixel Size   Tap
                                                                    Time (μs)          (full frame 1)   (microns)    Readout
FlowSense EO VGA      640x480               8, 10, 12, (14 sing.)   10                 207/259          7.40         1/2
FlowSense EO 2M       1600x1200             8, 10, 12, (14 sing.)   10                 35/44            7.40         1/2
FlowSense EO 4M       2048x2048             8, 10, 12, (14 sing.)   10                 16/20            7.40         1/2
FlowSense EO 5M       2448x2050             8, 10, 12               12.5               11/16            3.45         1/2
FlowSense EO 11M      4008x2672             8, 10, 12               15                 4.8/6.5          9.00         1/2
FlowSense EO 16M      4872x3248             8, 10, 12, (14 sing.)   15                 3.2/4.3          7.40         1/2
FlowSense EO 29M      6600x4400             8, 10, 12, (14 sing.)   15                 1.8/2.5          7.40         1/2
FlowSense EO 4M-32    2072x2072             8, 10, 12               8                  32/26            7.40         4
FlowSense EO 4M-41    2352x1768             8, 10, 12               10                 41/33            5.50         4
FlowSense EO 16M-9    4920x3280             8, 10, 12               10                 8.3/6.6          5.50         4
FlowSense EO 16M-8    4880x3256             8, 10, 12               10                 8/6              7.0          4
FlowSense EO 29M-5    6600x4400             8, 10, 12               10                 4.7/3.5          5.5          4
FlowSense EO 8M-12    3296x2472             8, 10, 12               10                 21/17            5.5          4

1: The cameras can run in two different modes: Normal clocked and Over clocked. The first figure is normal
clocked, the second one over clocked.
The frame rates listed are valid when the cameras are used in Free running mode. When running in Single
frame mode the exposure time has influence on how fast the cameras can run. The maximum frame rate
in Single frame mode can be calculated in the following way: max fps = 1/(1/Full frame rate + exposure
time).
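
Worked example (assumed numbers, not a camera specification): for a camera with a full frame rate of 16 fps running normal clocked and an exposure time of 1 ms, max fps = 1/(1/16 s + 0.001 s) ≈ 15.7 Hz.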

Connecting the frame grabber and 4 Tap Read out Cameras


Connect Port 0 on the frame grabber to BASE on the camera.
Connect Port 1 on the frame grabber to MEDIUM on the camera.

Camera appearance in DynamicStudio


When the camera is detected by DynamicStudio it will appear in the device tree as follows:

Each of these "device" has properties that will be described in the following.

Camera Parameters

Device information and version


This section of the camera parameters describes the camera in some detail.

Full name
The camera name

Serial number
The camera serial number.

Image sensor
This section describes the camera sensor.

Sensor size
The width and height of the sensor in pixels

Effective sensor size


Some sensors have inactive pixels that are not used. This parameter describes the area of the sensor that
is active.

Pixel depth(s)
Supported pixel depth.
The desired Pixel depth is selected in parameters for Image Format.
(See "FlowSense EO Camera series" on page 124

Pixel pitch
Describes the distance between two adjacent pixels (center to center).

Settings

Camera file.
The camera file used.

Analog gain control


Specifies the analog gain and offset.
In Dual Tap readout the FlowSense EO camera has dual analog signal processors (or Analog Front End –
AFE), one per channel. It features one dual processor, each containing a differential input sample-and-hold
amplifier (SHA), digitally controlled variable gain amplifier (VGA), black level clamp and a 14-bit ADC. The
programmable internal AFE registers include independent gain and black level adjustment.
The figure below shows the relationship between the video signal output level and gain/offset. Theo-
retically, the black level should reside at 0 volts and the gain changes should only lead to increasing the
amplitude of the video signal.
Since the camera has two separate video outputs coming out of the CCD, there is always some offset
misbalance between the video outputs. Thus, changing the AFE gain leads to a change in the offset level

and to a further misbalance between the two video signals. To correct the balance between two signals for
a particular gain, always adjust the offset for each output.
Analog gain and offset can be linked, meaning that if the gain for one tap is changed the other will follow.

Bit selection
The Bit selection feature allows the user to change the group of bits sent to the camera output and there-
fore manipulate the camera brightness. Up to 7 bits left or right digital shift can be selected. The internal
camera processing of the data is 14 bits. If the camera is set to output 10 bits of data then the four least
significant bits are truncated. In some cases the user may need to convert from 14 to 10 bit by preserving
the 4 least significant bits and truncating the 4 most significant ones. Please note that the camera signal-
to-noise ratio will be reduced using the data shift option.
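
A hedged Python sketch of the two options (a hypothetical helper, not a DynamicStudio API), assuming 14-bit internal data reduced to a 10-bit output:

  def to_10_bit(raw14, keep_lsb=False):
      if keep_lsb:
          # Bit selection / data shift: keep the 4 least significant bits
          # and drop the 4 most significant ones.
          return raw14 & 0x3FF
      # Default: truncate the 4 least significant bits.
      return raw14 >> 4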

Tab Balance
Four different modes can be selected: None, Dynamic, Dynamic Once and Static Automatic.
In Dual Tap readout the camera has two separate video outputs coming out of the CCD. There is always
some offset misbalance between the video outputs. Thus, changing the Gain leads to a change in the off-
set level and to a further misbalance between the two video signals. To correct the balance between the two
signals at any particular gain, the cameras have static and dynamic balancing algorithms implemented in
the firmware. The algorithms compare the black and bright levels of the adjacent pixels around the tap
line, and adjust the gain and offset for each tap accordingly, until the balance has been reached. The
selection to use static or dynamic balancing depends on the application.
(If "Dynamic Once" is to be used, the parameter has to be set from None to Dynamic Once.)

Digital Gain control


The camera has a built in digital gain and offset control. There are 20 possible digital gain levels from 1.0x
to 3.0x with step of 0.1x, and 1024 offset levels from (–511, to + 511).

Pixel binning
Horizontal binning combines adjacent pixels in horizontal directions to artificially create larger and more
sensitive pixels but with less resolution. FlowSense EO supports binning modes 2x2, 4x4 and 8x8.
Horizontal Binning is done in the digital domain, where the data from the binned pixels are added digitally.
Vertical binning is a readout mode of progressive scan CCD image sensors where several image lines are
clocked simultaneously into the horizontal CCD register before being read out. This results in summing
the charges of adjacent pixels (in the vertical direction) from two lines.

Horizontal and Vertical binning are used simultaneously. Vertical Binning is done in the time domain, where
the data from the binned lines is added in the CCD. Binning will result in higher possible frame rates.

Other Settings

Dynamic black level correction


As described in Analog Gain Control, the reference black level on each CCD Tap output fluctuates around
0V. The AFE offset correction works on the entire image and if there are noise fluctuations on a line level,
the AFE is not capable of correcting them. The camera has a built in dynamic signal-to-noise correction
feature to compensate for this effect. In the beginning of each line the CCD has several black (masked) col-
umns. The dark level for each tap is sampled over several of these masked pixels and the average per-tap
black level floor is calculated for each frame. The average floor level for each tap is then subtracted from
each incoming pixel (from the corresponding tap) in the next frame.

Hot- and Defective- Pixel correction


A CCD sensor is composed of a two-dimensional array of light sensitive pixels. In general, the majority of
the pixels have similar sensitivity. Unfortunately there can be pixels where sensitivity deviates from the
average pixel sensitivity. A defective pixel is defined as a pixel whose response deviates by more than
15% from the average response. In extreme cases these pixels can be stuck ‘black’ or stuck ‘white’ and
are non-responsive to light. There are two major types of pixel defects – “Defective” and “Hot”. The cam-
eras come with a preloaded Defective Pixel Map file.
Hot and Defective Pixel correction can be enabled or disabled.

Sensor Over clocking


The FlowSense EO cameras provide a unique way to control and increase the camera's nominal speed.
When 'Sensor Over clocking' is not enabled, the pixel clock is set to the CCD manufacturer's recommended
pixel clock frequency. Since the FlowSense EO camera's internal design is optimized for higher clock rates,
it is possible to over clock the camera sensor. What happens is that the internal clock will run ~20% faster
than the CCD manufacturer's recommended pixel clock. Special measures have been taken in order to pre-
serve the camera performance when over clocking is enabled.

Tab readout
4 Tap cameras can only run in 4 Tap mode, whereas other cameras can run in single or dual tap mode.
When operating in Dual Tap mode, the image is split in two equal parts, each side consisting of half of
the horizontal pixels and the full vertical lines. The left half of the pixels are shifted out of the HCCD reg-
ister towards the left video amplifier – Video L, while the right half of the pixels are shifted towards the right
video amplifier – Video R. In the horizontal direction the first half of the image appears normal and the sec-
ond half is left/right mirrored. The camera reconstructs the image by flipping the mirrored portion and
rearranging the pixels.

CCD Temperature
The camera has a built in temperature sensor which monitors the internal camera temperature. The sensor
is placed in the warmest location inside the camera.

Negative Image
When operating in the negative image mode, the value of each pixel is inverted. The resulting image
appears negative.

Flat Field Correction
A CCD imager is composed of a two dimensional array of light sensitive pixels. Each pixel within the
array, however, has its own unique light sensitivity characteristics. Most of the deviation is due to the dif-
ference in the angle of incidence and to charge transport artifacts. This artifact is called ‘Shading’ and in
normal camera operation should be removed. The process by which a CCD camera is calibrated for shad-
ing is known as ‘Flat Field Correction’. The cameras come with a preloaded Flat Field Correction file.

Image Format Parameters

Frame mode
This parameter specifies if the camera is to run in Single frame, Double frame or Single frame double expo-
sure mode. The default is "Use default from System Control" which means that settings in the System
control dialog will determine the frame mode.

Mirror or Rotate
This parameter specifies if the captured image should be mirrored or rotated.

Pixel depth
The internal camera processing of the CCD data is performed in 14 bits. The camera can output the data
in (14,) 12, 10 or 8 bit format. During this standard bit reduction process, the least significant bits are by
default truncated. If you wish to discard the most significant bits instead use bit shift as described above.

Image Area (ROI)


The frame rate is determined by the selected vertical height settings. Reducing the width will not affect the
achievable frame rate.

Known issue
The first time after DynamicStudio has been started, or if you have changed Tap readout mode, then during
the first Preview or Acquisition frame 1 and frame 2 can be exchanged so that frame 1 becomes frame 2
and vice versa. This is only seen the very first time a Preview or Acquisition is done. Once you
stop and restart Preview or Acquisition no such issue is seen.

13.5.7 HiSense 4M Camera


The HiSense 4M / HiSense 4MC camera uses a high-performance progressive scan interline CCD with a
higher resolution than the HiSense MkII and FlowSense 2M cameras, but lower sensitivity to green light
when operating in full-resolution mode (approximately 55-50% at 532 nm and approximately 45-30% in the
yellow-orange region of the light spectrum). Pixel binning (2x2) is available as well to gain in sensitivity.
The camera resolution is 2048 by 2048 light sensitive cells and an equal number of storage cells. It runs
with 12-bit resolution and re-sampled (upper) 8-bit resolution to save space on the hard disk. (Note the
latter setting does not increase the frame rate.)
The difference between the HiSense 4M and HiSense 4MC is that the 4MC camera is cooled.

13.5.8 HiSense 11M Camera


The HiSense 11M camera uses a high performance progressive scan interline CCD with a higher res-
olution than the HiSense 4M cameras. The sensitivity is approximately 50-45% at 532 nm. Pixel binning is
not available.

The camera resolution is 4000 by 2672 light sensitive cells and an equal number of storage cells. It runs
with 12-bit resolution and re-sampled (upper) 8-bit resolution to gain space on the hard disk. (Note the
latter settings do not enhance the frame rate.)

13.5.9 HiSense 6XX cameras


The HiSense 6XX cameras are based on high performance progressive scan interline CCDs.
The camera resolutions are:

l HiSense 610: 1600 x 1200


l HiSense 620: 2048 x 2048
l HiSense 630: 4008 x 2672

The cameras support 14-bit resolution only.

Camera settings
The following settings can be adjusted for the camera.

Pixel binning
The cameras support pixel binning 2 x 2. Binning combines neighboring pixels to form super pixels. It
increases the sensitivity of the sensor while decreasing the spatial resolution.

Camera Pixel clock


The sensor pixel readout clock can be set to:

l HiSense 610: 10MHz or 40MHz


l HiSense 620: 10MHz or 40MHz
l HiSense 630: 8MHz or 32MHz

The pixel clock has influence on the image quality. A low pixel clock results in low readout noise, whereas
a high pixel clock results in faster image readout and thereby a higher frame rate.

ADC converters
Using two analog to digital converters (ADCs) will decrease the readout time. The Region of Interest (ROI)
must be symmetric in the horizontal direction. When selecting an ROI to use, specify the x position,
and the width will be adjusted accordingly.
When two ADCs are used the maximum gray level is 10000. The reason for this is that the output amplifier
of the CCD does not have enough bandwidth.
When one ADC is used the maximum gray level is 16384.

Using only one ADC you can freely select the ROI, but the sensor readout time is increased, resulting in a
lower maximum frame rate.

ADC Offset control


To minimize the offset between the two ADCs it is necessary to add a certain signal level to the real signal,
to enable the measurement of the total noise floor (if the offset were zero, an unknown amount of
noise would be cut off, since a negative light signal is not possible). The stability of this offset is usually
guaranteed by proper temperature control and software control, which uses the "dark pixel"
information from the sensor limits. Further, algorithms must be applied to match the sensor per-
formance if two ADCs are used for readout. All this can be done automatically (Offset control enabled) or
can be switched off (Offset control disabled).

Noise filter
By enabling the Noise Filter the image data will be post-processed in order to reduce the noise. This is
done by the camera.

Camera preset Temperature


This setting controls the temperature of the image sensor.

13.5.10 Image buffer resources


Unless the camera has onboard RAM for storing images during acquisition, you will have to allocate RAM
or system memory in the PC.

Two ways exists to allocate memory.

1. Reserved memory (default)


Reserved memory is memory that is preallocated when booting the PC. This is the preferred method,
since you always know exactly how much memory you have preallocated and thereby how many
images you can acquire (see the sketch further below). It is also possible to allocate very large buffers
compared to Allocate on demand. On a 64bit Vista PC you can choose to allocate more than 4GByte of memory.

2. Allocate on demand
Each time the acquisition starts, memory is dynamically allocated to hold images during acqui-
sition. There are always other programs running on the PC, so you will never know exactly how
much memory you can allocate before the real acquisition starts.

The size of the image buffer that you can allocate using this mode is limited to how much the
DynamicStudio process can allocate, usually below 1024 MByte.

You can choose between the two ways of using memory in the properties for the "Image buffer resources" device.

Image buffer resources also handles Streaming to disk.
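As a rough illustration of how a reserved buffer size translates into a number of images, the following sketch (Python, with purely hypothetical camera numbers) estimates how many double-frame images fit in a given buffer:

    # Rough estimate of how many double-frame images fit in a reserved image buffer.
    # All camera numbers below are hypothetical examples, not values read from DynamicStudio.
    buffer_bytes = 1024 * 1024 * 1024      # 1024 MByte reserved, as in the example below
    width, height = 2048, 2048             # assumed sensor resolution
    bytes_per_pixel = 2                    # e.g. 12-bit data stored in 16-bit words
    frames_per_image = 2                   # double frame mode

    bytes_per_image = width * height * bytes_per_pixel * frames_per_image
    print("Approximate number of images in buffer:", buffer_bytes // bytes_per_image)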

Reserve memory for image buffer


After you have installed DynamicStudio or just the acquisition agent on a PC there is no pre-allocated
memory. When the system has detected the cameras connected the following dialog will be shown:

You will now have to choose which type of memory you will use during acquisition:

1. Right-click in the Devices view and set a check mark at "Advanced view"
2. Select "Image buffer resources" in the Device tree view.
3. Select the Memory mode to use.
4. In the Device Properties view set the property "Buffer size" to e.g. 1024 MByte
5. If you have chosen "Reserved memory" you will have to reboot the PC, otherwise the system is
ready for acquisition.

Reserving memory on 32bit systems


On 32bit systems it is recommended to only allocate up to 2GByte RAM. The operating system and appli-
cations running on the PC must have access to memory in order to function correctly. Reserving more
than 2GByte memory for image buffer can lead to very slow behavior of the PC.

Reserving memory on 64bit systems


On 64bit systems only memory above the 4GByte physical address space can be allocated. It is therefore rec-
ommended to have more than 4GByte installed on 64 bit PCs.
You will be able to allocate all memory above 4GByte physical memory. If you, for example, have 24GByte mem-
ory installed on the PC you will be able to allocate ~20GByte memory for the image buffer.

If the PC has exactly 4GByte memory installed


Devices such as Display Adapters, Network interface cards etc. occupy physical memory space
below 4GByte, so if a PC has 4GByte memory installed, part of the installed memory will
be placed above the 4GByte physical address space. The amount of memory above the 4GByte address
space in such a case will vary from a few MByte up to several hundred MByte. This is why, even if you only
have 4GByte memory installed on the PC, you will be able to reserve some memory for the image buffer.

13.5.11 Streaming to disk


Using frame grabber based cameras it is possible to stream data to disk while acquiring. In this way the
number of images that can be acquired is not limited by the amount of physical memory (RAM) in the PC,
but limited only by the size of the disk used for streaming.

Enabling streaming to disk


Streaming to disk is enabled in Device Properties of the "Image buffer resources" device:

l Find the parameter Setup->Disk buffer.


l Change the setting to "Enable".

When the Disk buffer has been enabled it is possible to select which disk/drive to use for streaming. This
is done by changing the setting of Disk-buffer->Drive. (See "Selecting the disk to use for streaming to
disk" on page 134.)

How streaming to disk works in DynamicStudio


Even if you choose to use streaming to disk DynamicStudio still needs a RAM buffer. This RAM buffer is
used as a FIFO or Ring buffer (http://en.wikipedia.org/wiki/Circular_buffer).
Acquired images are written into the FIFO from the frame grabber and then moved from the FIFO to the disk.
The recommended size of the RAM buffer when streaming to disk is at least 200 MBytes per camera.

Acquisition rate limitations when streaming to disk
When doing a normal acquisition (without streaming to disk) the only bandwidth limitation (measured in
MBytes/second) is the PCI or PCI Express bus (and in some rare cases also the RAM). When streaming
to disk, however, the disk bandwidth may also reduce the maximum acquisition trigger rate.
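A quick way to judge whether a given disk can keep up is to estimate the required sustained bandwidth from the acquisition settings. The sketch below uses hypothetical camera parameters; compare the result with the figure reported by DiskPerformanceTest.exe for the streaming disk:

    # Sketch: estimate the sustained disk bandwidth needed when streaming to disk.
    # The camera parameters are hypothetical examples.
    trigger_rate_hz = 100        # acquisition trigger rate
    width, height = 1280, 800    # assumed image size
    bytes_per_pixel = 1          # 8-bit readout
    frames_per_image = 2         # double frame mode

    mbytes_per_second = trigger_rate_hz * width * height * bytes_per_pixel * frames_per_image / 1e6
    print("Required disk bandwidth: ~%.0f MByte/s" % mbytes_per_second)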

Measuring Disk performance


The bandwidth limitation for a specific disk can be measured using a special tool named 'Disk-
PerformanceTest.exe' located in the installation folder of DynamicStudio (C:/Program files/Dantec
Dynamics/DynamicStudio). Using this tool it is possible to get an idea about the disk bandwidth limitation
during acquisition. The tool does not measure the exact bandwidth that can be used to calculate the max-
imum Acquisition rate during acquisition, since the data flow is only in one direction:

l from the RAM to the disk.

and not:

l from the frame grabber, to the RAM, and then to the disk.

but it gives a good indication of what can be achieved.

Monitoring disk performance during acquisition


During acquisition and streaming to disk it is possible to monitor the usage of the Ring buffer.
To do this move the mouse over the camera progress bar in the System Control window and a tool tip will
appear with information about the image acquisition from the camera.

As you can see in the example above the 'Ring buffer usage' is at 1.85%.

Understanding Ring buffer usage percentage:

l If the 'Ring buffer usage' stays at a low percentage the disk buffer bandwidth is high enough to
handle the data from the camera.
l If the 'Ring buffer usage' grows towards 100% the disk cannot handle the amount of data from
the camera. When it reaches 100% the system will stop, and the log will contain a message
stating that the Disk buffer is lagging behind.

Selecting the disk to use for streaming to disk
Do not use the disk on which the Operating System (OS) is installed. Use a disk which is dedicated to
streaming to disk; this disk should not be a partition on the same physical disk as the OS, but a totally
independent disk.
Use a RAID 0 disk if possible, since these disks are optimized for speed. (See
http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks)

Saving images stored in a disk buffer


After the acquisition is done it is possible to save the data to the database. When images are
streamed to disk the images saved in the image buffer can be quick converted: the
main database is given a reference to where the data is saved. The images are of course still kept in the
image buffer, meaning that the space occupied by the saved images cannot be reused before it is released.

In the database the image ensembles representing the data saved in the database will have an indication
that the data is remote, and the data is also handled as if the data is remote relative to the main database.
Please see Distributed Database.

To enable Quick Conversion of image data, click Tools-Options and select the Acquisition tab. Disable
"Save Acquired data in main Database" and Enable "Do quick conversion on data streamed to disk".

13.5.12 NanoSense Camera Series


Installing the NanoSense camera
Operating the camera (Single and double frame)
iNanoSense Cameras
See "Using long time between pulses in double frame mode" on page 136

The NanoSense camera series use high-speed CMOS image sensors. Unlike most other cameras
they feature onboard RAM, storing images in the camera for later transfer to the host PC. This also means
that no frame grabber is required for these cameras; instead they connect via a USB2 port on the PC.
The NanoSense MkII has a resolution of 512 x 512 pixels and runs up to 5,000 fps, and the NanoSense
MkIII has a resolution of 1,280 x 1,024 running up to 1,000 fps. Both cameras can run even faster with
reduced image height, and both are 10-bit cameras, but for increased transfer speed you will normally
keep only the upper, middle or lower 8 bits. Later versions of the camera can only read out the upper 8
bits. Combined with an enhanced gain function, which minimizes the readout noise, this ensures very
nice and crisp PIV images.
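Keeping 8 of the 10 sensor bits amounts to a simple bit shift of the raw pixel value. A minimal illustration (the variable names and the example value are purely illustrative):

    # Illustration: selecting the upper, middle or lower 8 bits of a 10-bit pixel value.
    value_10bit = 0b1011010110            # hypothetical 10-bit pixel value (726)

    upper_8 = value_10bit >> 2            # bits 9..2
    middle_8 = (value_10bit >> 1) & 0xFF  # bits 8..1
    lower_8 = value_10bit & 0xFF          # bits 7..0
    print(upper_8, middle_8, lower_8)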

Installing the NanoSense Camera


The NanoSense camera driver is embedded in the DynamicStudio installation package. Thus it is not nec-
essary to install a separate driver. However, the first time you plug in the camera, you will need to make
sure that windows selects the correct driver before you start DynamicStudio. Please follow the steps in
the figure below to guide the Windows Plug-And-Play installation system to assign the driver correctly.
Windows will at a stage (step 3) ask if you want to continue the installation. You must select "continue
anyway" at this stage. Please also note that the driver is assigned to a specific USB port. If you install the
camera on a different USB port you must repeat this procedure.

Detecting the Camera
NanoSense cameras are either equipped with an USB interface only or with both an USB interface and a
Gigabit Ethernet interface:

l When using the USB interface the camera is detected automatically when connected to the acqui-
sition agent PC.
l When using the Ethernet connector, the camera can only be detected, when the NanoSense cam-
era detector is added to the acquisition agent PC. This is done by right clicking the agent in the
Devices list, and choose Add new device -> Detector for NanoSense camera.
Use a separate/dedicated Gigabit Network Interface Card (NIC) for connecting NanoSense cam-
eras using Ethernet.
Set a static IP address on this NIC ex. 100.100.100.1 and the subnet mask 255.255.255.0.

Operating the Camera (Single and Double Frame)


In order to run the NanoSense camera it must be configured using the cable connections diagram. Please
see "Normal Use" on page 47 for more information. The NanoSense camera can be run in
either single or double frame mode. Single frame mode is typically used for alignment purposes and for LIF
applications. In single frame mode, the exposure time can be set in the "Device properties" window. Click
on the camera in the device tree and edit the "Exposure time" entry.

Running the NanoSense camera in Single frame mode

Running the camera in double frame mode is typically used in connection with PIV applications. If the
radio button in the System Control window is set to "Double frame mode" the corresponding entry in the
Device Properties window is changed to "Setting for Double Frame Mode". Note that in double frame
mode it is only possible to adjust the exposure time for the first frame (Frame 1). Please also note that set-
ting the exposure time for frame 1 to a high value will limit the obtainable frame rate. The default setting is
20 microseconds.

Running the NanoSense camera in double frame mode

Using long time between pulses in double frame mode


In some cases, the sum of the durations of the first and the second frame exposure is smaller than the
desired time between pulses. DynamicStudio will not allow the user to enter a time between pulses
which exceeds the total exposure time. Instead the system will adjust the entered time between pulses
down to the maximum allowable.
To resolve this issue the user must, manually, specify a longer exposure time for the first frame. This is
done in Device Properties for the camera.

iNanoSense Cameras
The main difference between an ordinary NanoSense camera and an iNanoSense camera is that
the iNanoSense camera has a built-in intensifier in front of the image sensor.
The intensifier can be operated in two different ways, selected on the back of the camera.

l Continuous on
Used for running the camera in Free Run. (The intensifier is always on.)
l Gated mode
This is the preferred mode of operation when the camera is operated in a synchronized mode,
either Preview or Acquire. (The intensifier is only on when the Gate pulse is high.)

There are three settings for the intensifier:

l Gate Pulse Width


The gate pulse width is the time period that the intensifier is open.
l Gate Pulse Offset
The gate pulse offset is the time period from the laser pulse to when the intensifier opens.
l Trigger Polarity
The trigger polarity is the polarity of the signal that the intensifier needs to open.

Using the iNanoSense camera


1. Always use the camera in gated mode. The gate switch is located on the back. If you want to use the
camera in Free run connect a BNC cable from the SyncOut to the Gate in on the back of the camera.
2. Keep the gain knob around 3/4 of the way towards maximum. This ensures that sufficient gain is
applied to the unit.
3. For safety, when the unit is turned on keep the exposure to a minimum and the camera aperture almost
closed. Then adjust the camera lens aperture to the desired setting. If the image saturates, decrease the
exposure to prevent damage to the intensifier tube. If there is no image, slowly increase the exposure.
4. The previous approach applies when you use the camera to visualize a normally illuminated scene.
5. In Preview and Acquire: Make this exposure as small as possible. To avoid over-exposure that could
damage the tube start with the lens aperture totally closed and then slowly increase the aperture until an
image is obtained.

13.5.13 Photron Camera Series

The Photron camera series use high-speed CMOS image sensors. Unlike most other cameras they
feature onboard RAM, storing images in the camera for later transfer to the host PC. This also means that
no frame grabber is required for these cameras; instead they connect via a FireWire port or Gigabit Eth-
ernet to the PC.

Cameras supported

Camera  Specifications
APX     Frame rate: 2000 Hz at full resolution; full resolution: 1024 x 1024; pixel depth: 10 bit
APX RS  Frame rate: 3000 Hz at full resolution; full resolution: 1024 x 1024; pixel depth: 10 bit
SA3     Frame rate: 2000 Hz at full resolution; full resolution: 1024 x 1024; pixel depth: 12 bit
SA1     Frame rate: 5400 Hz at full resolution; full resolution: 1024 x 1024; pixel depth: 12 bit
SA-X2   Frame rate: 12.5 kHz at full resolution; full resolution: 1024 x 1024; pixel depth: 8/12 bit
        * Supports showing images while acquiring
        * Supports full frame rate in TimerBox slave mode (preferred connection!)
        * Supports retrieving the number of images acquired if an acquisition is stopped prematurely

See "Working with more than one Photron camera " on page 138
See "Photron Camera Series" on page 137
See "Single/double frame mode" on page 139
See "Trigger modes" on page 140
Controlling Intensifier
More images acquired than specified/acquisition takes forever
Slow working of the Photron cameras
Preview and multiple cams

Detecting the camera


Photron cameras are equipped with either a FireWire interface or an Ethernet interface. A FireWire interface
camera is detected automatically when connected to the acquisition agent PC. An Ethernet camera can
only be detected when the Photron camera detector is loaded. This is done by right-clicking the agent in
the device list and choosing Add new device -> Detector for Photron camera.
The camera must be connected and powered up before the detector is added.

Ethernet cameras 
If the IP address of the camera needs to be changed, the detector IP search range
should include the camera's address.
For Ethernet cameras it is an advantage to set up the Ethernet adaptor in the PC to use Jumbo frames if
the adaptor supports it. This can dramatically reduce the time it takes to save acquired data.
To use Jumbo frames (MTU) open the Windows Control Panel->Network Connections. Right-click the
adaptor that is connected to the camera and select Properties. Under the General tab click the Configure...
button. Under the Advanced tab enable Jumbo frames. It is only possible to use Jumbo frames if the adap-
tor supports it.

Working with more than one Photron camera 


Before working with the cameras it is important to make sure that the cameras have different IDs.
This is necessary for DynamicStudio to save individual settings to the right camera.
To check this start DynamicStudio and detect the cameras as described in See "Photron Camera Series"
on page 137
The ID can be seen from the properties:

If there are more cameras with the same ID they should be changed.

For Ethernet cameras the ID is the same as the last number in the IP address. The ID for these cameras
can be changed from Device Properties of the camera by changing the IP address. After changing the IP
address the camera must be restarted to update it.

For firewire and PCI cameras the ID is by default 1.


The ID for these cameras must be changed via the keypad delivered with the camera. For more infor-
mation on how to do this see the camera manual.

Single/double frame mode


The Photron camera can be run in either single or double frame mode. In double frame mode the exposure
time will follow the camera's internal frame rate. In single frame mode, the exposure time can be set in the
"Device properties" window. Click on the camera in the device tree and edit the "exposure time" entry. In
this mode, the camera is running at a fixed frequency, and the laser is fired in the center of each
exposure. Therefore it is possible to adjust the exposure time of the frames. However, the camera clock
only supports a number of different exposure times, which can be chosen from a pull-down list.

Operating a Photron camera in single frame mode

Running the camera in double frame mode is typically used in connection with PIV applications. If double
frame mode is selected the laser pulses will be in the end of one frame and the beginning of the next frame
respectively.

Operating a Photron camera in double frame mode

Running the Photron camera in circular trigger mode


Running the camera in circular trigger mode makes it possible to listen to an external event/trigger and,
when this event happens, save images from before and after the event.
This mode is enabled by selecting Circular as the trigger mode in the camera properties. This will reveal
the Trigger frames property, making it possible to enter the number of frames to save before and after a
trigger event.
In the cable connection diagram the 'General in' input on the camera must be connected to an external
device, e.g. a pulse generator.
The camera must have some kind of pulse generator connected that only generates one pulse
per acquisition.
No other trigger generating device can be connected to the camera. The camera must operate as a master
(see below for a description).

Acquired images
As can be seen from the picture below, the acquired images start from two. This is because the first two
images acquired are poor and should not be used.
(For the SA-X2 camera the first image will be image 1. Usually this camera runs in TimerBox Slave mode,
where the first image will be of good quality, whereas in Master mode the first image can be distorted; in this
case, just skip saving this image to the database.)

Trigger modes
In order to run the Photron camera it must be configured using the cable connections diagram. Please see
"Normal Use" on page 47 for more information.
The camera can act differently depending on how it is connected to other devices.
There are three ways to connect the camera:

l As master
l As slave
l As timer box slave

Master
Running the camera as master is the preferred way of operating the camera. The camera's performance is
best when it runs in this mode, since the clock of the CMOS sensor is set to the preferred value. When run-
ning the system in this mode, the camera clock will control other devices in the system.
In order to run this mode, connect the sync-out connector to the device that needs to be synchronized and
connect the general-out or TTL-out if the device needs a camera start pulse (for a definition of general-out
and TTL-out refer to the camera manual).
The width of the sync signal on sync-out is equal to the exposure time of the camera.
The lowest possible frame rate when running in Master mode is 50 Hz.
Note for the image below. If the SA1 or SA3 are used then the 'General out' is called 'General out 2' and
the 'Sync out' is called 'General out 1'

The above image shows the use of the Timer Box and the below image shows the use of the TimingHub.
The TimingHub does not have the ability to control one input trigger with another, so there is no way of syn-
chronizing the start of measurement with this approach.

Note
For the image below. If the SA1 or SA3 are used then the 'Sync out' is called 'General out 1'

An example of how to run the Photron camera as a master clock

The camera sends out sync pulses even when it is not running. This means that the system needs a start meas-
urement pulse to synchronize the start of other devices connected to the timer box.
The start/enable signal from the camera can go high both in between and at the same time as the sync sig-
nal goes high.
This, and the fact that the timer box needs a start/enable pulse to arm the trigger input, could result in an
image shift in the camera by one. The same effect is described in the Master/Slave configuration.
A visual validation of the images must be done to see if the images are shifted or not. If they are, the
frame shifter should be set to 1 (found under "Info" in the image buffer device's properties).

When a shift has been made, browse through the images to update them. This must be done
before saving the images. The validation must be performed on every acquisition. After a shift the first or
last double frame image will not be usable, depending on the shift direction.

Master / Slave
It is important to note that when more than one camera is connected it is not possible to run in preview
mode without the risk of getting bad images. See Preview and multiple cams
If two or more Photron cameras are used in synchronization (as in a stereo setup) they must be of the
same model; one camera must be the master and the rest must be slaves of the master. Any other
device in the synchronization chain is likewise a slave. A slave camera expects a sync signal on its
sync-in connector, and this sync signal MUST be provided before and while the measurement is ongoing.
To run this mode, connect a sync signal from the sync-out of the master camera to the sync-in connector
on the slave camera and connect a start measurement signal to the general-in or the TTL-in connector (for
a definition of general-in and TTL-in refer to the camera manual).
The slave camera is synchronized with the master, so when changing properties these should be changed
on the master and the master will automatically update the slave with the changes.
This is true for frame mode, ROI, frame rate and exposure time.
The lowest possible frame rate when running in slave mode is 50 Hz.
Note for the image below. If the SA1 or SA3 are used then the 'General out' is called 'General out 2' and
the 'Sync out' is called 'General out 1'

The above image shows the use of the Timer Box and the below image shows the use of the TimingHub.
The TimingHub does not have the ability to control one input trigger with another, so there is no way of syn-
chronizing the start of measurement with this approach.
Note for the image below. If the SA1 or SA3 are used then the 'General out' is called 'General out 2' and
the 'Sync out' is called 'General out 1'

An example of how to run Photron cameras in a master/slave configuration

An artifact of the master/slave configuration is the fact that the start measurement signal emitted from the
master is not completely synchronized with the sync signal. Due to delays in the electronic circuits this sig-
nal may be slightly delayed when received by the slave, which could therefore start one frame too
late. When running in double frame mode, this would result in a double frame image where frame B of the
first double frame image becomes frame A of the second and so on. This is clearly visible when toggling
the double frame images. The problem is solved by setting the frame shifter to 1 (found under "Info" in the
image buffer device's properties).
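Conceptually, a frame shift of 1 just re-pairs the raw sequence of exposures so that frame B of image n is paired with frame A of image n+1. A minimal sketch of this re-pairing (plain list handling, not the actual DynamicStudio implementation):

    # Sketch of what a frame shift of 1 does to double-frame pairing.
    # 'exposures' is the raw sequence of exposures as they came from the camera.
    exposures = ["A1", "B1", "A2", "B2", "A3", "B3"]

    def pair_frames(exposures, shift=0):
        """Pair consecutive exposures into double-frame images, optionally shifted by one exposure."""
        exposures = exposures[shift:]
        return [(exposures[i], exposures[i + 1]) for i in range(0, len(exposures) - 1, 2)]

    print(pair_frames(exposures, shift=0))  # [('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
    print(pair_frames(exposures, shift=1))  # [('B1', 'A2'), ('B2', 'A3')]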

The result of the delay in the start measurement signal. The problem is corrected by using the frame
shifter (below)

The frame shifter

Timer box slave
For cameras other than the SA-X2 (for the SA-X2 this is the preferred way of synchronizing the camera!),
it is important to note that when more than one camera is connected it is not possible to run in preview
mode without the risk of getting bad images. See Preview and multiple cams.
If necessary it is possible to synchronize the camera from an external timing device (such as an 80N77
timer box). In this case the camera is a timer box slave. Since the camera is always running on an internal
clock, the only way to achieve external synchronization is to reset this clock repeatedly. This is called ran-
dom reset, and can only be done at a maximum rate of half the camera frame rate (including a reset time)
in single frame mode and 2/3 of the camera frame rate (including a reset time) in double frame mode. Timer
box slave mode is the only way to make the camera acquire data at a frame rate less than 25 Hz in double frame mode.
To run this mode connect the timer box camera-out to the general-in on the camera.
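For a quick estimate of the highest external trigger rate that random reset allows, the rule of thumb quoted above can be written as a small helper (a sketch only; the actual limit depends on the camera model and its reset time):

    # Sketch: maximum external (random reset) trigger rate in timer box slave mode,
    # using the rule of thumb quoted above.
    def max_random_reset_rate(camera_frame_rate_hz, double_frame=False):
        factor = 2.0 / 3.0 if double_frame else 0.5
        return camera_frame_rate_hz * factor

    print(max_random_reset_rate(2000))                     # single frame: 1000.0 Hz
    print(max_random_reset_rate(2000, double_frame=True))  # double frame: ~1333.3 Hz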

The above image shows the use of the Timer Box and the below image shows the use of the TimingHub.

Running the Photron camera as a slave of an external timer

The difference between internal clock mode and external triggered mode. The drift is highly exaggerated

More images acquired than requested/Acquisition takes forever


The camera needs to fill up a memory partition before it will stop, and partition sizes cannot be set freely.
This means that even if just 2 images have been requested the camera will continue measuring until the par-
tition is full and thus provide more images than requested. If a low trigger rate and/or a small ROI has been
selected it may take a while to fill up the memory partition and the acquisition may thus take much longer
than expected.

Slow working of the Photron cameras


Due to verification of settings and the general working of the camera, acquisition and general handling of
the camera may appear slow compared to other cameras. This is a well known issue.

Controlling Intensifier
The intensifier is set up through the keypad or the Fastcam viewer utility that came with the camera. Set
the intensifier mode to external and disable the focus mode time out. When that has been done the inten-
sifier can be gated through DynamicStudio. In the Devices window add an intensifier to the camera and
establish the necessary connections in the Synchronization Cables window. There are three settings for
the intensifier:

l Gate Pulse Width


l Gate Pulse Offset
l Trigger Polarity

The gate pulse width is the time period that the intensifier is open. The gate pulse offset is the time period
from the laser pulse to when the intensifier opens (this can be a negative value). The trigger polarity is the
polarity of the signal that the intensifier needs to open.

Preview and running with multiple cameras


It is important to note that when more than one camera is connected it is not possible to run in preview
mode without the risk of getting bad images. The reason for this is that in preview the cameras are started
and stopped again and again to be able to get images out of the cameras to show to the user. During this

starting and stopping, the cameras must not get triggered on 'General in'. Because of the nature of the
timer box and of the master camera in a master/slave configuration it is not possible to prevent this pulse
on 'General in', thereby risking poor image quality.
The Photron APX, and APX-RS cameras uses a standard FireWire (IEEE 1394) interface. FireWire® is
the registered trademark of Apple Computer, Inc., for the IEEE 1394 interface developed by the 1394
Working Group within the Institute of Electrical and Electronics Engineers (IEEE).
Photron APX, APX-RS, and SA1 are registered trademarks of PHOTRON LIMITED, PHOTRON USA
INC, and PHOTRON EUROPE LIMITED. For more information visit http://www.photron.com.

13.5.14 SpeedSense 10XX series


SpeedSense 10XX cameras are Camera Link cameras. The data from the camera are transferred to sys-
tem memory or a RAID disk using a frame grabber.

Main specification SpeedSense 1010:

l 1280x1024 pixels.
l 10 bits/pixel (only 8 bits can be read out from the camera; lower, middle or upper 8 bits selectable).
l Maximum frame rate (full frame): 520 Hz single frame / 260 Hz double frame.

Main specification SpeedSense 1020:

l 2320x1728 pixels.
l 10 bits/pixel (only 8 bits can be read out from the camera; lower, middle or upper 8 bits selectable).
l Maximum frame rate (full frame): 170 Hz single frame / 85 Hz double frame.

The SpeedSense 10XX camera is connected as shown below:

The frame grabber must be a National Instruments PCIe-1429 or PCIe-1433 (Full camera link frame
grabber).

Calibration file
The camera is delivered together with a CD. The content of this CD includes a calibration file for the cam-
era. The first time Noise reduction is enabled, if the calibration file is not found on the PC, Dynam-
icStudio will ask for this CD. Follow the instructions given by DynamicStudio to have the calibration file
copied from the CD to the PC.

Known issues:
When acquiring image data from a SpeedSense 10XX camera the acquisition can suddenly stop. The rea-
son for this is FIFO-buffer overflow on the NI PCIe-1429 frame grabber.

To minimize the risk of this issue happening, reduce the width of the image acquired.

This issue can have two causes. The first is the system RAM being too slow or too scarce; the second
is the northbridge on the motherboard.

RAM Cause
The frame grabber is trying to push data through the bus, and the RAM just can't fill fast enough to keep
up. If this is the case, the buffer error is being caused by the system RAM, and can typically be solved by
upgrading the RAM speed or adding more RAM to the system.

Northbridge Cause
Theoretically, PCI express is supposed to have dedicated bandwidth. This means that if you have two
PCI express cards in your computer, the bus traffic on one will not affect the bandwidth on the other. How-
ever, some motherboards (and more specifically the northbridge on the motherboard) do not handle PCI
express protocol as well as others. The result is that during times of intense loading of the northbridge, the
available bandwidth of the PCI express bus will decrease. If it decreases below the transfer rate of the
frame grabber (the data inflow from the camera is greater than the outflow through the PCI express bus),
the buffer memory on the frame grabber will start to fill up. Because the onboard memory on the PCIe-
1429 is very small, it quickly fills up and the error occurs. If this is the cause, a new motherboard will have
to be procured or a different computer used. Alternatively, decreasing the frame rate or image size also
reduces the likelihood of the northbridge reducing the bandwidth, thus avoiding the error.
Another possibility is that the PCI express slot is not a full slot. For example, some video card slots have
16 channels going to the device (from the northbridge to the PCIe card) but 1 channel coming from the
device. The result is a x16 slot that the PCIe card fits into, but can only capture images at a x1 speed.

For more detailed information see:
http://digital.ni.com/public.nsf/allkb/32F3F5E8B65E9710862573D70075EED1

Note: These causes could exist in any PCI or PCI express card. The likelihood of it happening is low how-
ever because most other frame grabbers don't transmit enough data to overload the northbridge.

Specification on a PC that is known to work well


The table below shows the vital components of a PC that is known to work well with the SpeedSense 10XX
cameras and the PCIe-1429:

CPU:          Intel® Core™ i7-940
Motherboard:  ASUS P6T/DELUXE INTEL X58
RAM:          OCZ 3x2048 MB DDR3 1600 MHz

Bright horizontal line in the image
Images grabbed from the SpeedSense 10XX cameras may have a horizontal bright line and a difference in
the average intensity between the areas above and below this line:

The reason for this phenomenon is that the camera is exposing an image while reading out the previous
image. The "Shutter line" appears only if the exposure time and the readout time satisfy the following for-
mula:

Exposure time + Sensor readout time < 1/Trigger frequency

Horizontal line seen in image:

In order to remove the line, increase or decrease the exposure time or the trigger frequency
for the camera.

Horizontal line not seen in image (Trigger frequency reduced) :

Horizontal line not seen in image (Exposure time reduced and Trigger frequency enlarged) :
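The condition quoted above can be checked numerically before an acquisition. The numbers in the sketch below are hypothetical, and the sensor readout time must be taken from the camera documentation:

    # Sketch: check whether the "shutter line" condition quoted above is met.
    exposure_time_s = 200e-6         # hypothetical exposure time
    sensor_readout_time_s = 5.8e-3   # hypothetical sensor readout time
    trigger_frequency_hz = 150.0     # hypothetical trigger frequency

    if exposure_time_s + sensor_readout_time_s < 1.0 / trigger_frequency_hz:
        print("Condition met - the shutter line may appear; adjust exposure time or trigger frequency")
    else:
        print("Condition not met - no shutter line expected")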

13.5.15 SpeedSense 1040 camera
The SpeedSense 1040 camera is a CMOS camera with an effective resolution of 2320 x 1723 pixels, 10 bit
per pixel (8 bit per pixel read out). The camera can run at frame rates up to 193 Hz. The frame rate can be
doubled by using two frame grabbers (not yet supported).

Connecting the camera

Frame grabber

The camera is a Full Camera Link camera, meaning 2 Camera Link cables are required to connect the cam-
era.
Only the National Instruments PCIe-1433 frame grabber supports this camera.

Connect the frame grabber and the camera as follows:

Camera   Frame grabber
O2       Port 0
O1       Port 1

Note:
- L2 indicates Power on and L3 indicates Active exposure.
- Power is connected to the camera via the SUB D 15-pol connector named Control.

Synchronization
The camera is triggered via the Frame grabber. On the frame grabber the SMB trigger connector labeled
'TRIG' must be connected to the trigger output of the Synchronizer.

(The camera is delivered with a small SMB to BNC converter cable)

13.5.16 Parameters for the camera

Gain
The digital gain setting controls the shifting of the pixel bits. It selects which eight of the ten digitizer bits
are output to Camera Link. (Overflow is avoided by saturation to maximum.)

Dark Value offset


This value is a number between 0 and 255 which is added after the "in camera" calibration is performed.
The offset is used to adjust the dark level and avoid clipping of pixels to black in low light situations.

It is only applied when "in camera" calibration has been performed. The eight bits are aligned with the least
significant bits of the 10-bit pixel data from the sensor. Thus if Gain is 0, offset must increment in steps of
4 to increase the output gray level in steps of 1.

Calibration (in camera firmware)
The in camera noise reduction is an implementation of simple background removal. A more advanced soft-
ware implementation exists that handles the background much more effectively. (See "Calibration (in PC
software)" on page 151.)

The In Camera calibration has to be done in order to be able to use Dark Value offset.

To do an In Camera calibration, click the button and follow the instructions.

(The calibration takes up to 10 seconds to perform.)

Calibration (in PC software)


This calibration handles static noise, pixel sensitivity and defective pixel readout. To use the calibration, two
different images have to be created by running the camera in special modes, where a black reference
image, a sensitivity map and an error pixel map are generated.
Note: The Black reference and Flat field correction must be performed while images are acquired
by the camera and shown in the user interface. Do a Free run or Preview to perform the cal-
ibration.

Black reference
First an average black reference image has to be acquired and calculated. The lens must be covered so
that no light reaches the sensor and the images must be acquired in Free run or Preview to have a steady
stream of data for the calibration routine to work on.
The acquired images are averaged and saved to an internal black reference image for later use.
After 5 seconds of operation, the system prompts the user to stop the process when a steady Black Ref-
erence image is seen on the screen.
The user should look at the image to see that it is static and no major changes are seen.
To be able to examine the image, which is very dark, the lookup table in the Color map dialog has to be
used.

Flat field correction


Second, a pixel sensitivity map is created. To do this a White Balance filter and a light source with con-
stant light have to be used.
Put the White Balance filter in front of the lens, and aim the camera at the light.
The camera must be acquiring images in either Free run or Preview.
The image must not be saturated. The mean intensity of the image should be around 2/3 of the saturation
level (i.e. around 170 for an 8-bit camera).
After 5 seconds of operation, the system prompts the user to stop the process when a good White balance
image is seen.
The user should examine the images presented to see if the image is all the same pixel value, and no
major changes are seen. As the image gets more and more static, less and less different pixel values will
be seen.
To examine the image, the lookup table in the Color map dialog should be used.

Defective pixels
After both the black reference and the pixel sensitivity calibration are performed, the calibration routine will
try to find defective pixels.
This defective pixel map is created based on information from the black reference image and (if cal-
culated) the pixel sensitivity map.
(The outer rows and columns of the full image are not validated or corrected.)

Finding hot sensor cells
Hot sensor cells are cells where:
Black Reference Image[x,y] > mean(Black Reference Image) + 2 * RMS(Black Reference Image)

Finding defective sensor cells


Defective (insensitive) sensor cells are cells where:
Abs(Pixel Sensitivity map (x,y) - mean(Pixel Sensitivity map)) > RMS(Pixel Sensitivity map) * 1.4
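The two criteria above can be expressed compactly with NumPy. In the sketch below the array contents are random placeholders for the black reference image and the pixel sensitivity map generated during calibration, and the standard deviation is used as the RMS estimate:

    import numpy as np

    # Placeholder data standing in for the calibration results.
    black_ref = np.random.normal(10.0, 2.0, (1728, 2320))   # black reference image
    sens_map = np.random.normal(1.0, 0.05, (1728, 2320))    # pixel sensitivity map

    # Hot sensor cells: value above mean + 2 * RMS of the black reference image.
    hot = black_ref > black_ref.mean() + 2 * black_ref.std()

    # Defective (insensitive) cells: deviation from the mean sensitivity above 1.4 * RMS.
    defective = np.abs(sens_map - sens_map.mean()) > 1.4 * sens_map.std()

    print("hot cells:", hot.sum(), "defective cells:", defective.sum())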

When Calibration is done


When calibration is done the parameters "Perform black reference calibration" and "Perform flat field cor-
rection calibration" should be locked. This is to ensure that a parameter is not clicked by accident and a
new calibration started. If a new calibration is started the old calibration will be lost.
Locking and unlocking parameters is done in the following way:

l To lock a parameter right click the parameter name and select "Lock Value".
l To unlock the parameter right click the parameter name and select "Unlock value".

Enabling and using the calibration


The three different calibrations can be enabled separately in the parameters for the camera.

Background removal
When this calibration is enabled, the calculated background reference is subtracted from the acquired image.

Flat field correction


When this calibration is enabled, the acquired images are multiplied by the calculated pixel sensitivity map.

Defective pixel correction


When this calibration is enabled, the pixels in the acquired image that are indicated in the Defective pixel
map are replaced with a calculated value based on the surrounding pixel values.
The calculation is done in the following way:

The surrounding pixels are weighted using the following weights, where the center value 0 represents the
pixel value that is to be calculated:

    1/2   1   1/2
     1    0    1
    1/2   1   1/2

(Pixels along the image edge are not validated or corrected.)
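A minimal sketch of how a single defective pixel could be replaced using the weights above (edge handling and the exact DynamicStudio implementation may differ):

    import numpy as np

    # Weights from the table above: corners 1/2, horizontal/vertical neighbours 1, centre 0.
    weights = np.array([[0.5, 1.0, 0.5],
                        [1.0, 0.0, 1.0],
                        [0.5, 1.0, 0.5]])

    def replace_defective(image, y, x):
        """Replace pixel (y, x) by the weighted average of its 8 neighbours."""
        patch = image[y - 1:y + 2, x - 1:x + 2].astype(float)
        return (patch * weights).sum() / weights.sum()

    img = np.arange(25, dtype=float).reshape(5, 5)
    img[2, 2] = 1000.0                    # simulate a defective pixel
    print(replace_defective(img, 2, 2))   # weighted average of the neighbours (12.0)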

Using Streaming to disk and Quick conversion warning


When streaming to disk only raw images are saved to the disk. This means that if you do Quick con-
version, the Background removal, Flat field correction and Defective pixel correction settings will have no
effect; the images will be saved raw to the database.
If you do not use Quick conversion and save the streamed images normally to the main database, the
Background removal, Flat field correction and Defective pixel correction settings will take effect and the
images saved to the database will have the corrections applied.

13.5.17 SpeedSense 90XX and MXXX Cameras

The SpeedSense 90XX and MXXX series cameras are state-of-the-art high speed cameras from Vision
Research. They are not traditional PIV cameras, but it is possible to acquire both single and double frames at a
very high rate with full frame resolution. The older generation cameras (SpeedSense 9020, 9030, 9040,
9050) need two synchronization pulses to acquire a double frame. For these older cameras, there is no
way to tell which pulse is the first and which is the second, therefore the user needs to decide which
images to pair into a double frame before saving the images to the database. This can be done by entering
the Acquired Data window and toggling between the two frames. If the two frames do not show correlated
particle movement, then an offset needs to be entered in the device properties of the image buffer before
saving the images from RAM to the database (See "Wrong light pulse in double frame" on page 165). The
newer generation cameras (SpeedSense 9060, 9070, 9072, 9080 and 9090, marked with a yes in Double
frame trigger mode in the table below) need only one synchronization pulse to acquire a double frame, like
any other CCD camera. Therefore the user does not need to care about the correct pairing.

Model | Resolution (pixels) | Bit Depth (bits) | Minimum Exposure Time (μs) | Minimum Interframe time (ns) | Maximum FPS (full frame) | Pixel Size (microns) | Double frame trigger mode
9020  | 1152 x 896  | 8, 10, 12     | n/a | n/a  | 1000       | 11.5 | no
9030  | 800 x 600   | 8, 10, 12, 14 | n/a | n/a  | 6688       | 22   | no
9040  | 1632 x 1200 | 8, 12, 14     | n/a | 1500 | 1016/508   | 11.5 | no
9041  | 1600 x 1200 | 8, 12, 14     | n/a | n/a  | 1000/500   | 11.5 | no
9050  | 2400 x 1800 | 8, 12, 14     | n/a | 1500 | 480/240    | 11.5 | no
9060  | 1280 x 800  | 8, 12         | n/a | n/a  | 6242/3121  | 20   | yes
9070  | 1280 x 800  | 8, 12         | n/a | n/a  | 3140/1570  | 20   | yes
9072  | 1280 x 800  | 8, 12         | n/a | n/a  | 2190/1095  | 20   | yes
9080  | 2560 x 1600 | 8, 12         | n/a | n/a  | 1514/757   | 10   | yes
9084  | 2560 x 1600 | 12            | n/a | n/a  | 1450/725   | 10   | yes
9090  | 1280 x 800  | 8, 12         | n/a | n/a  | 7500/3750  | 20   | yes
211   | 1280 x 800  | 8, 12         | 1   | 700  | 2190       | 20   | yes
311   | 1280 x 800  | 8, 12         | 1   | 700  | 3250       | 20   | yes
611   | 1280 x 800  | 8, 12         | 1   | 500  | 6242       | 20   | yes
711   | 1280 x 800  | 8, 12         | 1   | 500  | 7530       | 20   | yes
341   | 2560 x 1600 | 8, 12         | 1   | 400  | 12000      | 10   | yes
641   | 2560 x 1600 | 8, 12         | 1   | 400  | 16000      | 10   | yes
M110  | 1280 x 800  | 12            | 2   | 500  | 1630/815   | 20   | yes
M120  | 1920 x 1200 | 12            | 1   | 1400 | 730/365    | 10   | yes
M140  | 1920 x 1200 | 12            | 1   | 1400 | 400/200    | 10   | yes
M310  | 1280 x 800  | 12            | 1   | 500  | 3260/1630  | 20   | yes
M320  | 1920 x 1200 | 12            | 1   | 1400 | 1380       | 10   | yes
M340  | 2560 x 1600 | 12            | 1   | 1400 | 800/400    | 10   | yes
1210  | 1280 x 800  | 12            | 1   | 400  | 12700/6350 | 28   | yes
1610  | 1280 x 800  | 12            | 1   | 400  | 16600/8300 | 28   | yes
1211  | 1280 x 800  | 12            | 1   | 725  | 12600/6300 | 28   | yes
1611  | 1280 x 800  | 12            | 1   | 525  | 16600/8300 | 28   | yes

Connecting the Camera

l Connect the power supply


l Connect the Ethernet to the Agent PC
l Connect the F-Sync via a BNC cable to the synchronizer (some cameras don't have an F-Sync
on the back panel; in this case use the supplied sync cable that came with the camera).

It is preferable to have a dedicated network adaptor for communicating with the camera(s).
The camera does not support DHCP, so when connecting the camera it is important to set up the network
adaptor as well.
This is done through the Windows Network Connections properties for the adaptor connected to the
camera.
The subnet mask must match the one of the camera. It can be retrieved from the bottom of the camera.
If it is necessary to alter the IP settings for the camera it can be done using the Vision Research software
supplied with the camera.

Connecting multiple Phantom cameras to the Synchronizer


You can choose to connect multiple cameras to one or more outputs on the synchronizer.

If you choose to use one output for e.g. two or more cameras, then when changing a parameter such as expo-
sure time, the setting for all cameras connected to the same output will be changed in one oper-
ation.

Detecting the Camera


To control the camera from DynamicStudio, a Detector for SpeedSense 90XX and MXXX camera needs
to be added to the device tree. This is done by right-clicking the Acquisition agent and selecting 'Add New
Device...'. From the device list select "detector for Speedsense 90XX and MXXX camera". This will auto-
matically detect the camera connected to the PC.

Running the camera


The camera is operated like any other camera from DynamicStudio, but there is currently a limitation.
During acquisition (free run, preview or acquire mode) the display is updated with preview images from the
camera; this is a special way of getting images during acquisition. In double frame mode it cannot be guar-
anteed that frame one is really frame one. This is because the preview image from the camera is a random
image and could just as well be frame two. This however only applies to the preview. The images acquired are
arranged correctly.
On some cameras an F-Sync connector is supplied both directly on the camera and on the "Capture"
cable. Always use the F-Sync directly on the camera.

Calibrating
The camera can be operated in two ways, single frame and double frame mode. For both operation
modes, it is strongly recommended to perform a calibration when changing camera settings such as image
frame rate or exposure time.
Single frame mode (In Camera Calibration):
DynamicStudio indicates automatically in the device properties window (property called "Perform in Cam-
era Calibration") when a calibration should be performed.
If needed it will say "Calibration is old".
This property is a drop-down containing a button called "Ok". If the button is clicked, the system will per-
form a calibration on the camera.
Even if the property doesn't say "Calibration is old" it is still possible to do a calibration.
After pressing the button the system will guide the user regarding further action.
It is advisable to make a black reference calibration when the camera has reached operating temperature,
even though the property doesn't say "Calibration is old".
Note: To give the best result the calibration must be done in preview mode, where live images from the
camera are presented in DynamicStudio.
Note: It is possible to have DynamicStudio do calibration on all cameras connected to the same syn-
chronizer in one operation. In this case DynamicStudio will ask if calibration is to be performed on all or
only the selected camera.

Double frame mode


When working in double frame mode the calibration performed by the camera is not enough. The in camera
calibration is designed only to work in single frame mode, and when investigating the images acquired
after having performed an in camera calibration, you will notice that the intensity of the second frame is
dark compared to the first frame.
To overcome this the in camera calibration has to be "disabled", or have as little influence as possible on
the images acquired. Image correction is in this case handled in the software of DynamicStudio.
Step by step procedure:

1. Set the camera in Single frame mode


2. Click Free run

156
3. Set the exposure time to minimum (entering a 0, and let the system validate this figure resulting in
the minimum exposure time for the camera).

4. Set the Trigger rate to the maximum (entering e.g. 99999, and let the system validate this figure,
resulting in the maximum trigger rate for the camera).
Note: If you have a laser or other devices connected to the same synchronizer as the camera
that cannot run as fast as the camera, you must disconnect these devices from the synchronizer in
the Synchronization cables diagram (right-click the device(s) in the Synchronization cables diagram and
select "Disconnect").

5. Make sure that "Raw image" is disabled

6. Perform in Camera Calibration. Performing the calibration in this state ensures that the cal-
ibration has as little effect as possible on the images acquired later by the camera. This is necessary
in order for the software calibration to work correctly.

7. Select double frame mode


8. Set the exposure time to correct (needed) value
9. Enable "Raw Image", making sure the that the images received from the camera, is as raw as
possible. (Even though you enable"Raw image" the in camera calibration performed in step 6 still
has some small influence, and a very high if step before step 6 is not performed correctly)
10. Perform Black Reference Calibration. The camera will close the internal shutter, but for cameras
that does not have this internal shutter you will have to put a cap in front of the lens.
Let this calibration run until the image shown is stable.
11. Optional for fine tuning calibration, Flat Field Correction Calibration can be performed. Put a white
balancing filter in front of the lens and extra light to get the Color Map Histogram in the middle of
the range.

The in-software calibration is now done. You have full control over which parts of the in-software calibration
should be enabled. To control this the following parameters are available:

Noise background removal


Enable or disable the use of noise background removal

Black reference Offset Value


When the background removal is performed an offset value is added. This value can be adjusted here.

Flat Field Correction


Enables Flat Field Correction

Hot Pixel Correction


During Black reference calibration and Flat field calibration hot or defective pixels might have been found.
These pixels can be set to a value based on their neighbor pixel values.

Circular acquisition (event triggering)


The Phantom cameras can be triggered by an event. This means that it is possible to start the acquisition
and the camera will then acquire in a circular fashion; when an external trigger is applied to the camera
it will start recording post images. When done it will stop the acquisition and the images can be browsed
from the "Acquired Data" window. The images are arranged so that pre images will have a negative index and
the post images will have a positive index (See "Examining Acquired Data after a Preview" on page 164).

157
To run the camera in circular mode just select it from the "Trigger mode" drop-down list in the camera prop-
erties.
The input used on the camera is "Trigger".
Running circularly is not possible when recording to removable image buffer.

Electronic Shutter
When Electronic Shutter is disabled, the electronic shutter in the camera will be disabled, which changes
two aspects of the camera behavior:

l Make the interframe time (the time between the two exposures in double frame mode) shorter.
The reason why the interframe time is shortened is that the camera does not spend time con-
trolling the electronic shutter between two pulses.
l The camera exposes quasi continuously.
This can be a disadvantage if acquisition is to be performed in areas with much ambient light,
especially in double frame mode with a low trigger rate. Here one frame will be brighter than the
other one.

This is illustrated in the following figures. Note that in those figures only the camera triggering is illus-
trated; for more specific details on synchronization see "Synchronization" on page 172.
The first figure represents the timing diagram of a camera running with "Electronic Shutter" Enabled. As
can be seen, both images have the same exposure time but the interframe time is quite long.

The second figure represents the timing of the same camera running with "Electronic Shutter" Disabled. As
can be seen, the interframe time has been reduced so that a shorter "Time between pulses" can be
reached. It can also be seen that the exposure time of the second frame is now longer than the first one.

To get to the minimum Time between pulses for the camera in Double frame mode "Electronic Shutter"
has to be Disabled.
The default timing parameters for the camera in Double frame mode (Delay to open) are for "Electronic
Shutter" Enabled. If the camera is set to run with "Electronic Shutter" Disabled, the timing parameters have to
be changed. As illustrated in the following figure, the "Delay to Open" has to be adjusted so that the laser
pulses flash into their corresponding frames.

The following will describe a way to find the correct value for Delay to open.

1. Set the system to Double frame mode.

2. Be sure that the Activation delay for the laser is correctly entered (Delay from rising edge of the Q-
Switch trigger signal to light out of the laser). The correct activation delay for the laser can be
found in the manual for the laser.
The activation delay for a flash-pumped laser is typically around 150-200 nanoseconds. For a
diode-pumped laser the activation delay can be several microseconds.
No cavity in a laser exactly matches any other cavity, therefore the activation delay for one cav-
ity will be different from the other one. (See "Synchronization" on page 172).

3. Set the trigger frequency of the system to the value at which you will do your acquisitions
4. Set Time between pulses to 10 microseconds.

5. Physically disconnect sync signal to cavity 2(and connect cavity 1.)


6. Do an acquisition of 10 images
7. Examine the acquired images. Check that the light from cavity 1 is only seen in the first frame.
If the light is not seen in the first frame, decrease Delay to open and go to step 6.

8. Physically disconnect sync signal to cavity 1(and connect cavity 2.)


9. Do an acquisition of 10 images
10. Examine the acquired images. Check that the light from cavity 2 is only seen in the second frame.
If the light is not seen in frame 2, increase Delay to open and go to step 9.

11. Decrease Time between pulses and go to step 5.

At one point it will no longer be possible to adjust Delay to open so that there will be light on both frames.
At this point you have found the minimum Time between pulses for the camera and also the correct Delay
to open value.

As you reduce Time between pulses, the adjustments made to Delay to Open should be in steps of 0.1 micro-
seconds.

The following table indicates an estimated value of the Delay to Open to enter in the software in order to
reach the minimum Time between pulses. Note that this value may be subject to change, depending on
the firmware of the camera, the actual values of the laser's activation delay, etc.

Model | Delay to Open, "Electronic Shutter" Enabled | Delay to Open, "Electronic Shutter" Disabled
M310  | 1.0 µs | 1.6 µs
M340  | 2.5 µs | 1.6 µs

Time between pulses


The reset time of the camera (time that it takes to get ready to take the next image) gets shorter if the
image size gets smaller. This means that it is possible to set a shorter time between pulses if the res-
olution is smaller.

Internal image buffer


The image buffer of SpeedSense cameras can be split up (formatted) into any desired number of partitions.
This is done by entering the corresponding value in the device properties. By using multiple partitions it is
possible to use the internal buffer memory as remote storage before retrieving the data to the PC through
the network.
In order to use this feature, the database handling options must be set accordingly by unchecking "Save
acquired data to main database" and checking "Do quick conversion" (in Tools/Options/Acquisition).
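As a minimal illustration of the partitioning (the capacity figure is hypothetical, not a camera specification), the maximum number of images per acquisition equals the camera's total image capacity divided by the number of partitions:

camera_capacity = 5400       # hypothetical total number of images the camera memory can hold
number_of_partitions = 2     # value entered in the device properties
images_per_run = camera_capacity // number_of_partitions   # 2700 images per acquisition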
Example of use:

l Enter the number of desired partitions (in this case 2)


The maximum number of images that can be acquired in one run is displayed in the device property.
It corresponds to the maximum number of images that the camera can store divided by the
number of partitions.
2 partitions appear in the properties, and the first one is indicated as "Active". This is the one that
is going to be used first for the acquisition.
The second partition is indicated as "Free".

l Acquire the first run

l Save in the database


The data is remotely stored as indicated by the blue globe on the data (See "Distributed Database" on page 231).
The first partition changes status and is now indicated as "In Use", meaning that it contains data.
The second partition takes over and becomes active.

l Acquire another run
As a portion of the memory is still free, it is possible to run another acquisition without having to
transfer the first acquisition to the PC.
Then both partitions are "In Use" and a pop-up message will announce that no more acquisitions can
be done before a memory partition is freed.

l Free some memory space


In order to acquire more data, a partition must be freed.
There are 5 ways to free partitions:

- The camera can be turned off and back on. All images stored in the camera are lost.

- Number Of Partitions is changed. All images stored in the camera are lost.

- Delete one of the quick converted image ensembles stored in the database.

- Move the partition contents to a Storage Device (CineMag or CineFlash) if it exists (See "CineMag,
CineFlash Storage devices" on page 163).

- Collect the remote content to the main database as indicated in the following figure.

This can be very useful to save transfer time when several experiments have to be repeated.

CineMag, CineFlash Storage devices


If a storage device, such as a CineMag or a CineFlash, is mounted on the camera, it will automatically be
recognized and appear in the Devices tree as in the following figure.

Highlighting the "SpeedSense Storage Device" will display its properties in the dedicated menu:

l Number of stored Recordings


This parameter describes how many recordings are stored on the Storage device.

The number of recordings can correspond to a number of ensembles in the active DynamicStudio
database. It can also represent ensembles linked to another database that is not currently in use.

l Storage Size
Specifies the size of the attached Storage device. In this case a 60GB CineFlash is attached.

l Storage Used
Specifies how much of the Storage device is used by the stored recordings.

l Protected
Some Storage devices have the possibility to make the storage device Read only, protecting the
data on the device. If the write protection is set, it will not be possible to write data to the storage
or to erase the storage contents.

l Erase storage
When the storage device is full and all data has been collected to the database(s) you will have to erase
the storage device to get rid of the stored recordings. This is done by clicking “Erase Storage…”
and then clicking OK in the dialog that appears.
It is not possible to erase individual recordings stored on the Storage device.

Removable image buffer


The removable image buffer is a separate image storage module that can be attached to some SpeedSense
cameras. This allows much longer acquisitions than with just the camera's internal memory.
To acquire images to the removable image buffer, change the 'Image buffer' property to 'Removable image
buffer'. The Image count will update to the approximate number of images that can be recorded.
An important note is that when recording images to the removable image buffer the maximum trigger rate
is lower than when recording to internal memory.
The maximum trigger rates at full resolution are as follows:

l SpeedSense 9060, 9070, 9072 and 9090: 700 fps.


l SpeedSense 9080: 175 fps.

The trigger rate can be increased if the resolution is lowered.


The removable image buffer can only hold one recording at a time. This means that on every start of acquisition
the memory will be cleared if necessary. It is possible to clear the memory beforehand using the
'Clear image buffer' property.

Images in the removable image buffer are always stored with a pixel depth of 10 bit no matter what pixel
depth was specified by the user.
Due to the way the camera records images to the removable image buffer it is not possible to make a normal
acquisition by pressing the Acquire button. It is only possible to do a free run and a preview. The images
shown during double frame preview are not sorted, meaning frame 2 can be shown as frame 1 and vice
versa, but the images are stored correctly and can be viewed correctly afterwards.

Examining Acquired Data after a Preview


When doing a preview the acquisition is synchronized to the laser. The camera is set to use the internal
image buffer as a circular buffer. When done, the Acquired Data will look like below:

Images are actually acquired, but the frame indexes will be negative. Below, the range -3500 to
-3000 is selected to be saved to the database:

Use camera Timestamp


It is possible to get image timestamp information from the camera instead of the synchronizer's timestamp.
Some synchronizers do not provide a hardware timestamp. In this case it is possible to acquire
accurate timestamp information from the camera, saved with each image/double image. This is done by
enabling “Use camera Timestamp” in the properties for the Image buffer.

Camera Timestamp adjust


Each camera has its own internal clock. If more cameras are used with “Use camera Timestamp” enabled,
the cameras' internal clocks have to run synchronized. This is done by choosing one camera as
"Master" and connecting the IRIG-out from this camera to each of the other cameras' IRIG-in. This has to be
done both physically and in the Synchronization cables diagram.

Depending on the cable length and the electronics in the camera, the IRIG signal from the "Master" can
be delayed some nanoseconds. It is possible to adjust for this by entering a value in the "Camera Timestamp
Adjustment" parameter found in the parameters for the Image buffer.

Troubleshooting

Image quality issues


Due to the electronic shutter, if the trigger rate period is much longer than the exposure time you may collect
a little bit of light during the non-exposure period, resulting in a brighter image than set by the exposure
time.

Problems finding camera 


In the case that the Phantom detector can't find the camera, it is most likely because the camera needs a
.stg file. This file is a setup file that holds the camera settings.
In the properties of the detector it is possible to set a path to this file. The path must be the execution path
of DynamicStudio meaning the location from where DynamicStudio was executed. In the case of Agents
the path must be the location from where the Agent was executed.

Wrong light pulse in double frame


After an acquisition in double frame mode, the images can be arranged so that light 1 is in frame 2 and light
2 is in frame 1. This can be corrected by shifting the images by 1 from the device properties of the image
buffer.
After changing Image shift, examine a new image set from the image buffer; the change of Image shift
only has an effect on new images examined from the buffer.

Corrupt images
The .stg structure in the camera can sometimes become corrupt. If this is the case the images might look
corrupt or the settings of the camera do not match the settings of DynamicStudio.
To fix this you must have the Phantom software installed and restore the nonvolatile memory settings of
the camera with the factory .stg (located on the CD that came with the camera).
With more than one camera extra care has to be taken to ensure that the correct .stg file is used to
restore the camera.
For further information on how to reset the camera to factory settings please see the Phantom documentation
supplied on the CD that came with the camera.

Light from laser 1 seen on both frames in Double frame mode


This issue is only seen on cameras that support burst mode.
The cause of this issue can be that "Delay to close" is set too low. When set too low, it will look like light
from cavity 1 of the laser shows in frame 2 as well.
What is happening is that DynamicStudio thinks that the camera is running in double frame mode and
therefore expects it to capture 2 images on each sync signal sent to the F-sync input, but in fact only one
image is captured.
To solve the issue you must increase the Delay to close. First try with a large Delay to close, e.g. 10 microseconds,
and verify that the camera is now really capturing two images on each sync signal. Then
decrease Delay to close until the issue is seen again; now Delay to close is at the limit. Just increase
Delay to close by 100 nanoseconds and the issue is solved.

13.5.18 VolumeSense 11M Camera


The VolumeSense 11M uses an 11 Mpixel interline CCD sensor. On top of the sensor a micro lens array is
placed. This modification, together with special software, makes it possible to do 3D reconstruction of the
original volume using only one camera.

The 3D reconstruction can be done online using the Online LightField method, or after the acquisition has
been saved in the database.

Note: The camera has a heat sink attached to the housing. A good airflow is highly recommended.
If this is not provided and the camera is operated in a high temperature environment, image distortions
will be observed. In the worst case the camera can be destroyed if you do not follow this instruction,
and the warranty will be void!

Frame grabber
Only two frame grabbers support the VolumeSense 11M: the PCIe-1430 and the PCIe-1433.

The PCIe-1430 is a dual port Camera Link Base frame grabber; either port 0 or port 1 can be used for
connecting the camera. This frame grabber supports two cameras at the same time.

The PCIe-1433 is a Full Configuration Camera Link frame grabber. Here only port 0 can be used for
connecting the camera.

The image below shows the back plane of the PCIe-1433 frame grabber.
(The back plane of the PCIe-1430 looks nearly identical to that of the PCIe-1433. The placement of the SMB
trigger is the same on both frame grabbers.)

Synchronization
The camera is triggered via the Frame grabber. On the frame grabber the SMB trigger connector must be
connected to the trigger output of the Synchronizer.

(The camera is delivered with a small SMB to BNC converter cable)

13.5.19 PCO Dimax cameras
The PCO dimax cameras are based on a high speed CMOS sensor.
The cameras support 12-bit resolution only.

ROI vs frame rate


ROI            Frame rate, Single frame    Frame rate, Double frame
2000 x 2000    2277                        1139
1400 x 1050    5475                        2743
1280 x 720     8219                        3975
1000 x 1000    7033                        3524
800 x 600      23823                       3435
640 x 480      17951                       9017
320 x 200      46523                       23413

Camera settings
The following settings can be adjusted for the camera.

Fast Timing mode


Sets the fast timing mode for a dimax camera. To increase the possible exposure time at high frame
rates it is possible to enable the 'Fast Timing' mode. This means that the maximum possible exposure
time can be longer than in normal mode, at the cost of stronger offset drops. In cases, especially in PIV
applications, where image quality is less important but exposure time is, this mode reduces the gap
between the end of one exposure and the start of the next to its minimum.

Noise filter
By enabling the Noise Filter the image data will be post-processed in order to reduce the noise. This is
done by the camera.

Image buffer settings

Number of images
When setting Number of images to something different from "Default" you can decide how many images
this camera is to acquire during the acquisition.

Use Camera Timestamp


When enabled the timestamp is collected from the camera instead of from the synchronizer.

13.6 Synchronizers

13.6.1 Scanning Light Sheet controller

Scanning Light Sheet internals
The internals of the scanning light sheet controller are built around a galvanometer. The galvanometer
rotates a mirror, thereby creating the light sheet scan as illustrated below:

The mirror is never still (except when the direction of the rotation is changed); it will always be rotating in
one or the other direction.
The rotation of the mirror can be seen as a sine wave.

Detection of the scanning Light Sheet Controller


DynamicStudio automatically detects the Scanning Light Sheet Controller. No extra steps are necessary.
A device representing the Scanning Light Sheet Controller will be created in the acquisition system Device tree.
The Scanning Light Sheet Controller in the Synchronization Cables diagram:

Connecting Sync cables for using the Scanning Light Sheet Controller

Cycle
Description: This signal is high during the scan period. The positive edge indicates scan period start and will
go high together with the first sheet pulse. The negative edge indicates that the previous sheet scan has ended.
Usage: This signal can be used to start the synchronization on the synchronizer.
Recommended connection: Should be connected to TimerBox In2 (Start/Enable).

Sheet
Description: This signal goes high at each sheet position. A burst of sheet pulses will be generated, one per
sheet. The timing offset of these pulses relative to the actual sheet position will be calculated so that the
timing fits the System(1).
Usage: This signal is used to trigger a System(1).
Recommended connection: Should be connected to TimerBox In1 (Trigger in).

Arm
Description: External signal that tells the Scanning Light Sheet Controller to send a Start signal on output
Start when ready.

Start
Description: A signal that can be used to trigger cameras to start their acquisition. Only one pulse is
generated. The pulse is placed in between two sheet bursts.
Recommended connection: Should be connected to the trigger input of the camera.

1. System: A system usually consists of a Synchronizer (preferably the TimerBox), a Camera and a Laser.

Properties for the Scanning Light Sheet Controller

Number of light sheets
Description: Specify the number of light sheets to generate.
Default: 1. Min: 1. Max: Depends on the connected System(1).

Volume width
Description: Specify the width of the volume to place the light sheets in.
Default: 40. Min: 1 mm. Max: 80 mm (depends also on the Scan Frequency).

Scan Frequency
Description: Specify the scan frequency. (One scan includes Number of light sheets.)
Default: 5. Min: 10 Hz. Max: 500 Hz (depends also on the Volume width).

Single sheet position
Description: In case only one light sheet is to be generated, you can specify the position in the volume to
place the sheet. 0 mm is in the middle of the volume. Valid values are (-Volume width)/2 to (Volume width)/2.
Default: 0. Min: -40 mm. Max: 40 mm.

Sheet Frequency
Description: Shows the calculated Light Sheet frequency (read only).

Start method
Description: Specifies how the Start signal is activated. (Some systems do not require this signal; in that
case just ignore this property.)
Possible values:
"Ignore" - No start signal will be generated (default).
"Time" - The system will generate the Start signal after the specified time.
"User Action" - After the acquisition has been initialized and is ready for acquisition the user will be
prompted. When the user clicks "OK" the Start signal will be generated.

Start time
Description: Specifies the time from when the System is ready for acquisition until the Start signal is
activated (if Start method is set to "Time").
Default: 5000 milliseconds. Min: 5000 milliseconds.

Start signal polarity
Description: Specifies the polarity of the start signal to generate.
Default: Positive. Possible values: Negative, Positive.

1. System: A system usually consists of a Synchronizer (preferably the TimerBox), a Camera and a Laser.

Setting up the System

1. Connect and turn on all devices connected to the PC.


2. Start up DynamicStudio and enter Acquisition mode.
3. Let the system detect all devices.
4. In the Synchronization cables diagram connect
-TimerBox In1 to Sheet Sync on the Scanning Light Sheet Controller
-TimerBox In2 to Period on the Scanning Light Sheet Controller
-Connect the Laser to the TimerBox
-Connect the Camera(s) to the TimerBox
Depending on the type of camera connect Start on the Scanning Light Sheet Controller to Trig in
on the camera(s) (Not sync in)
5. In properties for the TimerBox, set:
-Start on External trig
-Mode External
6. In properties for the Scanning Light Sheet Controller, set:
-Number of pulses
-Volume width
-Scan Frequency
-Start mode(Depending on the type set Start mode to Time, User action or External trig.)
7. Do a preview to see if images are acquired and focused.
8. Do acquisition

13.6.2 Synchronization
Synchronizing devices in an imaging system is not trivial. Below is shown a timing diagram that illustrates
a simple synchronization between one camera and a double cavity laser.

The diagram looks simple, but what has to be considered in order to generate the signals for this setup is
much more complex. The diagram below shows the trigger signals that are sent to the devices, and some
of the different time definitions that have to be taken into the calculation.

1. Camera exposure time
2. Delay to open (most often in the range from 1 to 10 microseconds)
3. Camera trigger pulse start time relative to T0
4. Camera trigger pulse width (most often in the range from 1 to 10 microseconds)
5. Flash 1 to Q-Switch 1 delay (most often in the range from 100 to 200 microseconds)
6. Activation delay Q-Switch 1 (most often in the range from 100 to 200 nanoseconds)
7. Flash 2 to Q-Switch 2 delay (most often in the range from 100 to 200 microseconds)
8. Activation delay Q-Switch 2 (most often in the range from 100 to 200 nanoseconds)

Every device that needs a sync signal has at least one parameter that must be defined in order to calculate
the right timing of the signal. As an example let's look at a gated light source.

In order to shape the light pulse in the right way, three timing figures have to be defined (a worked example follows the list):

1. The wanted light pulse time (Pulse width)


2. The time from when the trigger signal is sent to the light source until the light starts coming out of the
light source (Activation delay)
3. The time from when the trigger signal ends until the light stops coming out of the light source (Delay to
close)
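A worked example of how these three figures could translate into the trigger signal for the gated light source (illustrative values only, times in microseconds; a sketch of the calculation, not DynamicStudio code):

pulse_width      = 20.0   # 1. the wanted light pulse time
activation_delay = 1.5    # 2. trigger sent -> light starts coming out
delay_to_close   = 0.5    # 3. trigger ends -> light stops coming out

light_on      = 0.0                                        # light should start at T0
trigger_start = light_on - activation_delay                # -1.5: the trigger is sent early
trigger_end   = light_on + pulse_width - delay_to_close    # 19.5: the trigger ends early
trigger_width = trigger_end - trigger_start                # 21.0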

There is a DeviceHandler that represents each device in the Acquisition System. A DeviceHandler can
have multiple synchronization inputs and outputs.

Timing Calculation in DynamicStudio Acquisition System


The timing calculation in the DynamicStudio acquisition system is done in each of the device modules.
Only the device module that handles a specific device knows all the timing definitions that are needed to
calculate the right timing signal for the device. The calculation is done based on two definitions:

T0 : The time at which the first (or only) light pulse is fired and is at its highest intensity. This is the only
fixed time in the system. The value of T0 is “0.0”. (If no light source is part of the system, then think of this
time as just T0.) The time before T0 is negative and the time after T0 is positive.

T1 : The time at which the second light pulse is fired. T1 = T0 + "Time between pulses". ("Time between
pulses" can be changed in the "System Control" dialog of DynamicStudio, or at the synchronizers if more
synchronizers are connected to the system.)
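As an illustrative calculation (example values, not laser specifications), the trigger times for a double cavity laser follow from T0, T1 and the delays numbered in the list above:

# All times in microseconds.
time_between_pulses    = 50.0
flash_to_qswitch_delay = 150.0   # items 5 and 7 above
qswitch_activation     = 0.15    # items 6 and 8 above (about 150 ns)

T0 = 0.0
T1 = T0 + time_between_pulses                                   # 50.0

qswitch1_trigger = T0 - qswitch_activation                      # -0.15, so light 1 peaks at T0
flash1_trigger   = qswitch1_trigger - flash_to_qswitch_delay    # -150.15
qswitch2_trigger = T1 - qswitch_activation                      # 49.85, so light 2 peaks at T1
flash2_trigger   = qswitch2_trigger - flash_to_qswitch_delay    # -100.15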

In the synchronization diagram below T0 and T1 are inserted:

During start of acquisition the DeviceHandler is asked to calculate signal shape and timing for each of the
input connectors on the device. If the input connector is not connected to another device then no calculation
will be done.

If the Synchronization Between Devices is not Correct


Usually, if the synchronization between a camera and the laser is not correct, you should start out by checking
that the laser timing specification is correct. Cameras that are integrated in DynamicStudio are tested very
thoroughly with lasers that are very well defined when it comes to timing. When we integrate lasers we
usually rely on the manufacturer's specifications. Most often it is necessary to fine tune the laser settings.

Another thing that can influence the synchronization is noise. We have seen that if the laser power supply
is standing too close to the cameras, the magnetic emission from the laser at the time the flash lamp is
fired can trigger the camera. The camera then ignores the real sync signal sent to the camera from the
synchronizer since it has already started the exposure.
Cable length can also, in rare situations, have an influence on timing. Every 1 meter of cable introduces
approximately 5 nanoseconds of delay.
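For example, using this rule of thumb:

cable_length_m = 10
extra_delay_ns = cable_length_m * 5   # a 10 meter cable adds roughly 50 nanoseconds of delay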

13.6.3 USB Timing HUB


The USB Timing HUB is a plug and play device which will allow you to generate trigger pulses and thus
control your image acquisition system. The Timing HUB is equipped with 8 output ports and two input ports
and can thus be triggered both internally and externally. The Timing HUB runs on an internal clock of 50 MHz,
allowing for a pulse positioning accuracy of 20 nanoseconds.

Installing the USB Timing HUB


The USB Timing HUB driver is embedded in the DynamicStudio installation package. Thus it is not necessary
to install a separate driver. However, the first time you plug in the USB Timing HUB, you will need
to make sure that Windows selects the correct driver before you start DynamicStudio. Please follow the
steps in the figure below to guide the Windows Plug-And-Play installation system to assign the driver
correctly. Windows will at one stage (step 3) ask if you want to continue the installation. You must select
"continue anyway" at this stage. Please also note that the driver is assigned to a specific USB port. If you
install the USB Timing HUB on a different USB port you must repeat this procedure.

When the installation has completed correctly the timing HUB should be shown as displayed in the device
tree below:

The default properties of the HUB are shown below:

The HUB is set up to start acquisition automatically (when the user starts an acquisition from the "System
Control" window).
The mode that the HUB will use is Internal (the HUB will supply its own clock to generate pulses to
devices attached to it).
Trigger rate is set to Default, meaning that it will use the trigger rate set in the "System Control" window.

13.6.4 Start options


The HUB has three different ways of starting an acquisition:

l Automatically
l Start on user action
l Start after time

Automatically
The HUB will automatically start generating pulses when an acquisition is started from the "System Control"
window.

Start on user action


The HUB will start as if "Automatically" was selected except that a message box will appear and it will not
trigger the cameras connected to it until the button on the message box is pressed. All other devices will
be triggered.

Start after time
The HUB will start as if "Automatically" was selected except that it will not trigger the cameras connected
to it until the specified time "Start time" has elapsed.

13.6.5 Mode options


The HUB has three different ways of running when an acquisition has been started and is ongoing:

l Internal
l External
l External clock

Internal
The HUB will use its internal clock to trigger devices attached to it. The frequency at whish the HUB will
pulse devices is determined from the trigger rate set in the "System Control" window.
The HUB can also specify it own trigger rate rather than using the "System Control". To do so type in a dif-
ferent value than "Default" (Default means it uses the one specified in the "System Control" window). This
is only available when more than one synchronizer is detected.

External
The HUB will use an external signal to trigger devices attached to it.
When this mode has been selected, it is possible to specify on what input the external signal is received
(either A or B).

External clock
The HUB will use an external signal to trigger devices attached to it.
When this mode has been selected, it is possible to specify on which input the external signal is received
(either A or B).
It is also possible to skip pulses in this mode (the minimum number of pulses to skip is 4).
If 6 has been entered it means that every 6th external pulse will pass on to trigger the devices.
All other pulses will be suppressed by the HUB.
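Illustrative numbers: with an external clock of 3000 Hz and N = 6, the connected devices are triggered at 500 Hz:

external_clock_rate = 3000.0                        # Hz, rate of the external signal (example value)
N = 6                                               # every N'th pulse is passed on (minimum 4)
effective_trigger_rate = external_clock_rate / N    # 500 Hz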

13.6.6 Limitations
Some cameras, like Photron and SpeedSense 90XX, use hardware timestamps from the synchronizer in
certain modes. Since the HUB does not supply hardware timestamps, the timestamp on images from these
cameras can be incorrect.
This is the case in the following situations:

Running External mode and supplying an external trigger signal with a rate different from what is specified.
Running External mode and pausing or disconnecting the external trigger signal.
Running External clock mode.

These situations can be overcome by using the Timer Box (80N77) instead.

13.6.7 BNC 575


The BNC 575 Pulse/Delay generator can be used as a synchronizer in the DynamicStudio acquisition system.
The BNC 575 can be used to control a number of different devices. The BNC device can be connected
to the PC in two different ways: you can connect the BNC via an RS232 port, or via a USB port.

Detection
Once DynamicStudio has been started and Acquisition mode has been entered, DynamicStudio will find all
COM ports on the PC. Each COM port will be listed in the Device Tree of DynamicStudio. If the BNC 575
is connected via USB, a virtual COM port is established in the Windows operating system.
For each COM port detected, the system will try to detect if a device is connected.

Turn on the BNC device. After a while the BNC 575 is detected and the device is listed just below the
COM port on which it was detected.

If the BNC 575 is not detected, it might have a wrong Baud Rate set in the communication settings
of the device. To solve this, do the following:

1. On the front panel of the BNC 575 press "Func"(yellow button) and then "System"(numeric button 3).
Do this until the display looks like this:
[ ] System Config
Interface: RS232
Baud Rate: NNNNNN
Echo : Disabled

2. Press Next until the cursor is at the number specifying the current Baud Rate of the BNC 575.

3. Change the Baud Rate to 115200 and press "Next".

BNC 575 Trigger modes


The BNC 575 offers a number of different trigger modes, which are described in the following sections.

Trigger mode Internal


By using this setting (default) the system will run at the trigger rate specified in the "System Control"
panel. The system will start when the user presses the acquire button.

Trigger mode External


In this setting the system will run with a trigger rate, which is given by an external signal through input trig
on the BNC 575.

NOTE: THE LAST OUTPUT (Out H for an 8 channel BNC and Out D for a 4 channel BNC) and
"Gate" have to be physically connected using a BNC cable. The last output together with the Gate
functionality is used to generate "dead time" in which trigger pulses on the Trig input are inhibited.

Trigger mode External clock


This trigger mode is able to skip external trigger pulses. The parameter "Use every N'th pulses; N=" defines
how many pulses to skip before triggering an image capture.

NOTE: It is up to the user to set a high enough N value - a too small N value will cause unexpected
additional synchronization pulses to be sent to devices; this will result in additional laser pulse exposure
in Frame 2, especially with CCD cameras.
NOTE: THE LAST OUTPUT (Out H for an 8 channel BNC and Out D for a 4 channel BNC) is used
for counting incoming signals. This output can not be used for synchronizing devices.
NOTE: Trig Acq-devices and Trig Q-Switches at Trigger rate/N can not be used in trigger mode
External clock.

Trigger mode External frequency multiplication


This trigger mode is able to multiply the incoming frequency by an integer number. The system will then
trigger all devices using this new trigger rate.
The "Trigger input frequency" must be measured and entered by the user.
The integer number "Burst pulses" for the frequency multiplication is calculated based on the parameters
"Trigger input frequency" and "Trigger rate".
The "Trigger rate" is first entered by the user, then "Burst pulses" is calculated based on "Trigger input
frequency" and "Trigger rate". "Trigger rate" is then validated and corrected to the nearest valid frequency.

Trigger mode External burst trigger


This trigger mode will generate a number of trigger pulses on each trigger input pulse. This new 'trigger rate'
will be used to trigger all devices in the system.
During execution of the burst triggering, all incoming trigger signals will be ignored. After the execution of
the burst triggering the system is ready for a new burst trigger.
The integer number "Burst pulses" specifies how many pulses are generated on each trigger.
The "Trigger rate" defines the time between each burst pulse.

Start modes
To start an acquisition four different start modes are available :

l Automatically
The acquisition will start as soon as all devices have been configured and are ready.

l Start on user action


When all devices have been configured and are ready for acquisition, the laser will start flashing,
but the sync outputs for cameras and A/D boards will not be enabled. A message box will appear
asking the user to start the acquisition. When OK is clicked all sync outputs for cameras and A/D
boards are enabled.

l Hard start on user action (can be used in Trigger mode Internal only)
When all devices have been configured and are ready for acquisition, the system waits for a soft-
ware trigger. No devices are triggered. A message box will appear asking the user to start the
acquisition. When OK is clicked all sync outputs will start to pulse the connected devices.

l Start on external Trig(can be used in Trigger mode Internal only)


When all devices have been configured and are ready for acquisition, the system waits for an
external trigger pulse on Trig input. No devices are triggered. A message box will appear asking
the user to start the acquisition. When OK is clicked all sync outputs will start to pulse the con-
nected devices.

l Start after time.


When all devices have been configured and are ready for acquisition, the laser will start flashing,
but the sync outputs for cameras and A/D boards will not be enabled. Now the system is waiting
for the time "Start time" to expire. When the time has expired all sync outputs for cameras and
A/D boards are enabled.

The BNC 575 cannot enable more than one output at a time. Each enabling is done via a serial command
to the BNC device, and the execution of a command takes about 10 ms. This means that if several cameras
and analog inputs are connected to different outputs it can not be guaranteed that acquisition for each
device will start at the same time. One camera can be enabled just before a sequence starts and the next
camera just after, leading to different start times, since the second camera will not see the first trigger. This
issue applies to all Start modes except Automatically.

Parameter "Trig Acq-devices at 'Trigger rate'/N" and "Trig Q-Switch at 'Trigger rate'/N
When either of these parameters are enabled, the parameter 'N' is enabled. N specifies how often Q-
switches or Cameras are to be triggered relative to the selected trigger frequency.
NOTE: Trig Acq-device- and Trig Q-Switches - at Trigger rate/N can not be used in trigger mode
External clock

Laser warm up
It is possible to define how many times or for how long the laser flash lamp and laser Q-switches are to
be pulsed before cameras and A/D sampling are enabled. The parameter "Warm up unit" defines whether
to use 'Time' (in seconds) or 'Pulses' for the parameters "Flash lamp warm up" and "Q-Switch warm up".

Pulse Mode
Determines whether the BNC 575 should follow the default setting or an internal setting:

l Follow default pulse mode: Follows default setting as set in the “System Control” panel. If the
system control window is set to double frame, the timer box will deliver double pulses, i.e. pulse
both cavities in a laser.
l Single pulse: The BNC 575 will generate single pulses regardless of settings in the “System Con-
trol” panel.
l Double pulse: The BNC 575 will generate two pulses regardless of settings in the “System Con-
trol” panel.

Outputs
Each output on the BNC 575 can be defined to output either "TTL/CMOS" signals or a voltage from 2 to 20
volts as High. (Low will always be 0 volts.)

"Not able to generate required signals" issue


This is a known issue and usually it occurs because the system requires more timers than the BNC 575 can
offer. To solve the issue remove the connection to Out H. You are probably using trigger mode External or
External clock.

Time Stamping
There are two ways of starting the time measurement that is used for time stamping images acquired by
the system. This is controlled via the parameter "Start Timestamping on first image".
When the synchronizer is set to trigger mode External, the timer used to time-stamp images is started
when the system starts the acquisition. This will make the first image have a time-stamp depending on
how much time has passed since the acquisition started. If you select Yes, the first image will have time-
stamp 0.000 s.

13.6.8 Timer Box


One of the timing devices which can be used together with DynamicStudio is the Timer Box. The TimerBox
exists in three different versions: 80N75, 80N76 and 80N77. 80N77 is the most recent type and
replaces the two other devices.
Systems delivered before February 2007 will be equipped with 80N75, 80N76 or 80N48 (USB Timing
HUB). 80N77 has increased functionality, which will be described at the end of this document.

13.6.9 Installing the TimerBox


The timer box is driven by a National Instruments PCI-6601 or PCI-6602 board (the latter is for 80N77
only). It is very important that the correct National Instruments driver is installed before DynamicStudio is
installed. Please refer to the DynamicStudio release note to learn which driver to install.
Please install the National Instruments driver before installing the physical card in the PC. After the driver
has been installed, switch off the PC, install the card and restart the PC. Make sure that the new board
has been properly detected by opening National Instruments "Measurement & Automation Explorer" and
inspecting the list of devices in the "Devices and Interfaces\Traditional NI-DAQ (Legacy) devices" folder.
After the National Instruments driver has been installed, you can install DynamicStudio. Be sure to go
online with the system in order to check if DynamicStudio has properly detected the board. For more
information about device detection, please read the Getting Started guide.
Connect the Timer Box to the Timer board using the shielded cable. Make sure that the screws are fastened
securely. Check if DynamicStudio has detected the timer box properly.

13.6.10 Connecting the timer box
The timer box can be used to control a number of different devices, which all require a TTL input for
triggering. Additionally, the timer box can also be used for interlock control via the DSUB 9 plug on the
backside of the box. As the possibilities for connecting the box are numerous, only a few are described
here. Generally it is recommended to use the “Create Default Connections” function in the “Synchronization
Cables” panel of DynamicStudio. Please read the Getting Started guide for more information.

Connecting the 80N75 box


This TimerBox - also called the DC PIV timer box - is primarily intended to be used together with flash lamp
pumped lasers, such as the NewWave Solo PIV, and a CCD camera such as the HiSense MkII. The diagram
below shows an example of how to connect the devices to the box.

Connecting the 80N76 box


This timer box is the TR-PIV equivalent of the 80N75. It has been designed primarily to be used with TR-
PIV systems consisting of diode pumped pulsed lasers and CMOS cameras. It is however possible to
use it with standard PIV systems as well. The default cabling is slightly different from that of the 80N75.
Please see the example below.

Connecting the 80N77 box
This timer box is the newest timing device from Dantec Dynamics. As opposed to 80N75 and 80N76 it
has eight separate output channels and is thus more versatile. The cabling for this timer box can also be
chosen more freely. In addition to running in normal mode, this Timer Box has the following capabilities:

l True hardware time stamp


l External trigger
l External Synchronization
l External trigger enable signal
l Possibility of using double gated devices (Intensifiers and diode light sources)

A typical connection diagram for the 80N77 is shown below:

External triggering
In addition to a more flexible functionality, the TimerBox features some advanced trigger functions. These
are described briefly in the following.
All of the settings below can be accessed from the Device Properties panel. Click on the “Timer Box” icon
on the “Devices” panel and open the “Device Properties” panel to change settings.

13.6.11 Synchronization setup

Internal synchronization, internal trigger


By using this setting (default) the system will run at the trigger rate specified in the "System Control"
panel. The system will start when the user presses the acquire button.
Use the following settings in the “Device properties” panel, trigger mode setup:

l Start: automatically
l Mode: internal
l Use trigger enable input: NO

External synchronization, internal trigger
In this setting the system will run with a trigger rate which is given by an external synchronization device
through input port 1 on the timer box. It is necessary to add a pulse generator to the system to make
DynamicStudio recognize the pulses. This is done by right-clicking the “Acquisition Agent” icon in
the “Devices” panel and selecting “Add new device – Pulse generator”.
Use the following settings in the “Device properties” panel, trigger mode setup:

l Start: automatically
l Mode: external
l Use trigger enable input: NO

Internal synchronization, external trigger


Here the system starts when a trigger pulse is received on IN2 (for 80N75 and 80N76 use IN1). The system
runs at the frame rate specified in the “System Control” panel.
Use the following settings in the “Device properties” panel, trigger mode setup:

l Start: Start on external trig


l Mode: internal
l Use trigger enable input: NO

External synchronization, external start, gated synchronization (80N77 only)
The system starts when a trigger signal is received on input port 2 and runs at a rate given by the synchronization
pulses on input port 1. However, a signal high (+5 volts) must be present on input port 2 in
order for the synchronization pulses to have an effect. If this signal high (gating) is not present, the system
will stop running until it occurs again.
Use the following settings in the “Device properties” panel, trigger mode setup:

l Start: Start on external trig


l Mode: External
l Use trigger enable input: Yes

13.6.12 Synchronizing two TimerBoxes


To synchronize two Timerboxes, the first must be set to use 'Start on user action' or 'Start on external
trig'. The second Timerbox must be set to use external mode (external synchronization). This is done by
connecting Out 6 to In 1 as shown below. It is vital that it is Out 6 that is used and not any of the other
outputs.
When starting an acquisition, depending on the start mode, the user will either be prompted to start the
acquisition or will need to supply an external signal on In 2 of the first Timerbox.

13.6.13 Additional settings

Pulse Mode
Determines whether the TimerBox should follow the master setting or an internal setting:

l Follow default pulse mode: Follows the master setting as set in the “System Control” panel. If the
system control window is set to double frame, the timer box will deliver double pulses, i.e. pulse
both cavities in a laser.
l Single pulse: The TimerBox will always generate single pulses.
l Double pulse: The TimerBox will always generate two pulses.

Safety Switch Control


It is possible to connect a shutter or an interlock cable to a laser using the DSUB 9 plug on the backside of
the Timer Box. The drawing below shows how the cable should be connected. The Safety Switch Control
controls when the safety shutter is closed.

l On when online: Closes the switch when the system is online


l On when acquiring: Closes the switch when the system is acquiring

Trigger Rate
If the TimerBox is set to run on internal synchronization, the trigger rate can be chosen here. This is
equivalent to setting the trigger rate in the “System Control” panel.

13.6.14 Cyclic Synchronizer and Linear Synchronizer


The Cyclic Synchronizer is a device that on the input can connect to a Rotary Incremental Encoder (or
Rotary Incremental Shaft Encoder), and on the output side sends synchronization signals to an ordinary
Synchronizer, which will synchronize the measurement system.

The Linear Synchronizer is a device that on the input can connect to a Potentiometer, and on the output
side sends synchronization signals to an ordinary Synchronizer, which will synchronize the measurement
system.

Note: The Cyclic and Linear Synchronizers are not designed to synchronize lasers and cameras directly;
an additional ordinary synchronization device is needed, e.g., a TimerBox or BNC 575.

Input and Output Specification

Encoder input (on Cyclic Synchronizer)


The Cyclic Synchronizer supports any Rotary Incremental Encoder (or Shaft Incremental Encoder) that
provides either a TTL or differential output of the signals Index (one pulse per full rotation) and Angle
Increment (Angle Inc. for the rest of this manual), which is a fixed number of pulses uniformly distributed
over one full rotation, e.g., 3600. The maximum number of angle increment counts per full rotation is 32768.
The Cyclic Synchronizer works on the edges of the Index pulse and Angle Inc. signal pulses. It is possible
to select which of the edges to work on (either the positive or the negative going edge).
Note: The Cyclic Synchronizer is not able to tell in which direction the encoder is moved; therefore the
Cyclic Synchronizer does not support pendulum-moving systems.
Note: In order for the Cyclic Synchronizer to calculate the current encoder position, the encoder has to
move a full rotation, passing the Reset position on the encoder.
The maximum rotation speed of the encoder must be kept below 15000 RPM.
Below are the specifications of the Encoder Inputs & Outputs:

· BNC inputs Index and Angle Increment


Both are TTL (5V) inputs.

· Differential DB9 input


5V differential signals as described below:
Pin 1: + Angle Inc.
Pin 2: - Angle Inc.
Pin 3: + Index
Pin 4: - Index
Pin 5: 5 V DC (I max: 250 mA)
Pin 6: GND
Pin 7: GND
Pin 8: Nc.
Pin 9: Nc.
Max Differential Voltage: -5 to +5 V

Sensor input (on Linear Synchronizer)
The Linear Synchronizer is typically delivered with a Potentiometer, but in special cases it is only delivered
with a cable that fits the Sensor input of the Linear Synchronizer.
The Sensor Input cable for connecting the Potentiometer has three inner wires:

Black goes to 1
Brown goes to 2
Blue goes to the slider
After a calibration is done, the Linear Synchronizer will have made sure that if you move the slider to
position 2 you will measure around 4.09 V at the slider. If you move the slider to position 1 you will measure
0 V at the slider.

· Start Acq. and Trigger Input


These inputs are used in different ways, depending on the mode of operation, which will be introduced
later.
Both are TTL (5 V) inputs.

· Encoder Out 1 and Out 2 Outputs


These are the outputs where the synchronization signals for the connected synchronizers are generated.
Both are TTL (5 V) outputs.

· Specification TTL (5 V) Input and Outputs


Note: All specifications below use the following notation:
Output Low = OL
Output High = OH
Input Low = IL
Input High = IH
VIL 0.8 V
IIL 20 µA
VIH 3.2 V
IIH 55 µA
VOH 5 V
IOH 0.5 A
VOL 0.025 V
IOL 0 A

· USB port
For communication with the PC, the Cyclic Synchronizer is fitted with a USB port. In order to communicate
with the Cyclic Synchronizer a device driver has to be installed.
The device driver will create a Virtual Serial COM port in the operating system when the Cyclic Synchronizer
is connected via the USB port.

When a Virtual COM port is created it will be persistent and will be visible in the Device Manager even if the
Cyclic Synchronizer is not connected or powered on.
Moving the Cyclic Synchronizer to a new USB port on the PC will not create a new virtual COM port.

Detection of the C/L Synchronizer


When starting up DynamicStudio and entering Acquisition mode, DynamicStudio will detect and create a
Serial COM port device and add it to the Acquisition System Device tree in DynamicStudio.
After having detected the COM ports DynamicStudio will try to detect if a C/L Synchronizer is connected
to the PC. Having detected a C/L Synchronizer a new device representing the C/L Synchronizer is added
to the Acquisition System Device tree.
After detecting the C/L Synchronizer, the C/L Synchronizer has to be updated with information about the
encoder connected to the C/L Synchronizer. The detailed instructions on how to set up the encoder are
given below.

Setting up Encoder information Cyclic Synchronizer


The Encoder Setup must be done before any operation: The following information about the connected
Encoder must be updated: (make sure the calibration mode in Mode Setup of Device Properties is
selected in this step)
· Encoder Revolutions per Cycle
Specifies the number of full rotations the Encoder has to take for one full "flow cycle". For instance, if the
measurement object is a four stroke internal combustion engine, this number should be 2.
Note: This number should be a positive integer.
· Increment Pulses per Encoder Revolution
Specifies the number of Angle Inc. pulses sent out from the Encoder during one encoder revolution. This
number should refer to the properties of the encoder used, and is not to be changed unless the encoder's
own settings are changed.
· Full Cycle/Range
The functionality of the parameter is to describe the size of the Full Range for the Potentiometer or Full
cycle for a rotary encoder.
For a rotary encoder, full cycle would typically be 360 with the unit deg.
Another example is a potentiometer that has a circular movement of maximum 270 degrees. Then set “Full
Cycle/Range” to 270 and “Position Unit” to deg.
A third example is a linear Potentiometer which is 500 mm long from one end to the other; then enter 500
and set the parameter “Position Unit” to mm.
· Encoder Index Signal Polarity
Specifies which edge (positive or negative) of the Index pulse from the encoder the C/L Synchronizer is to
use.
· Angle Increment Signal Polarity
Describes the encoder angle Increment signal polarity: Positive or negative going edge.
Below are four situations most likely to be encountered when using an encoder:

For the two cases at the top of the figure above, the Encoder Index Signal Polarity should be Positive and the
Angle Inc. Signal Polarity should be Negative. For the two cases at the bottom of the figure above, the
Encoder Index Signal Polarity should be Negative and the Angle Increment Signal Polarity should be Negative.
· Zero Position Offset
Specifies the zero position offset of the cycle in the hardware, i.e., if the tester wants to define the top dead
center of an internal combustion engine as the start of the fluid cycle, this zero position offset is the offset
between the angle position of the encoder when the piston is at top dead center and the angle position
when the encoder sends out the Index pulse. This number can either be entered manually or systematically
determined in the calibration operation. For the systematic determination, please refer to the instructions
for Calibration Mode.
· Encoder Input Connector
Specifies which of the encoder input connectors is used.
o BNC: The Index and Angle Inc. signal is provided to the BNC Index And BNC Angle Inc. inputs.
o DB9: The Index and Angle Inc. signal must be provided via the Differential DB9 input connector.

Note: Please be aware of the definition of 1 cycle in this application. 1 cycle means the period in
which the whole cyclic phenomenon has happened, and it does not need to be 1 revolution (360 degrees) in
many cases. For instance, 1 cycle for a 4 stroke internal combustion engine covers two revolutions of the
engine crankshaft, which is 720 degrees. These parameters should be clarified in Encoder Setup -
Encoder Revolutions per Cycle and Encoder Setup - Full Cycle.
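A worked example of these definitions for a four stroke engine (numbers illustrative):

encoder_revolutions_per_cycle = 2      # the crankshaft turns twice per engine cycle
increments_per_revolution     = 3600   # property of the encoder used
full_cycle_deg                = 720.0  # Full Cycle/Range, with Position Unit "deg"

increments_per_cycle = encoder_revolutions_per_cycle * increments_per_revolution   # 7200
position_resolution  = full_cycle_deg / increments_per_cycle                       # 0.1 deg per increment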

Calibration Cyclic Synchronizer


After setting up the Encoder information, the Cyclic Synchronizer needs to know where the "Cycle Zero
Position" of the cyclic phenomenon is relative to the physical world.
Note: The encoder will give an INDEX pulse when it passes the encoder's inherent zero position. However,
this encoder inherent zero position might not be the "Cycle Zero Position" for the application.

This is done by selecting the mode Calibration. Using this mode the Synchronizer is brought into a state
where a physical button on the Cyclic Synchronizer is used to indicate where the "Cycle Zero Position" is
located.
Start by rotating the system one full rotation, then move the system to where the “Cycle Zero Position” is to
be defined, and then press the button. There should be a blink from the LED below the ZERO button, and the
number corresponding to the 'Zero Position Offset' in the Encoder Setup will be updated if you click your
mouse between different modes in Mode Setup.
Note: It is important not to move the system backwards, since the Cyclic Synchronizer is not able to
detect the direction of the movement, only that the system is moved.
Note: In order for the Cyclic Synchronizer to calculate the current encoder position, the encoder has to
move a full rotation, passing the Zero position of the Encoder.
Note: This procedure has to be done each time (and only if) the Cyclic Synchronizer has been power
cycled.
After defining the “Cycle Zero Position” using the button on the Cyclic Synchronizer, the zero position can
also be adjusted using the “Zero Position Offset” parameter entry in Encoder Setup.

Setting up Encoder information Linear Synchronizer

· Encoder Revolution per cycle


Specifies the number of full rotations the Encoder has to take for one full "flow cycle". For instance, if the
measurement object needs to go back and forth 2 times in order to perform a full cycle, this
number should be 2.
· Position Unit
Here it is possible to enter the unit in which the position is given; this can be any string. Here are
some examples: "deg", "mm" or "m2".
· Full Cycle/Range
The full range is a number that represents the distance between position (1) and position (2). You could
choose to enter 100 and set "Position Unit" to "%", or you could measure the distance in millimeter, enter
the number found, and set "Position Unit" to "mm"
· Zero Position Offset, End Position Offset and Range
In order for the Linear Synchronizer to work correctly two positions must be specified. The slider must
pass both these positions, first the Zero position and then the End position, and, going back the other
direction, first the End position and then the Zero position.
These positions must be within the "Full Cycle/Range" value.
These positions can also be set using the button on the Linear Synchronizer.
1. Move the slider to where the Zero position is to be set, and press the button on the Linear Synchronizer.
When the button click is recognized, the Linear Synchronizer will light the Zero Position LED
one time, indicating that the Zero position is set.
2. Move the slider to where the End position is to be set, and press the button again. When the button
click is recognized, the Linear Synchronizer will light the Zero Position LED two times, indicating that the
End position is set.
The area between the two positions is called the Working Range. The size of the range can be seen in
the parameter "Range".
· Calibrate
Is used to calibrate the potentiometer to be used with the Linear Synchronizer.

Calibration of the Linear Synchronizer


Before starting to set up anything else on the Linear Synchronizer, the Potentiometer has to be known to
the Linear Synchronizer. This is done by performing a calibration in 3 steps, following the instructions
presented during the calibration.
The two positions referred to during the calibration are the following:

(1) The end at which the Black wire is connected to.
(2) The end at which the Brown wire is connected to.

Note: If for some reason the calibration fails, try to move the slider to the other end than what was
expected.
Important: Measure the distance between the last two positions, and enter this value into the property
"Full Cycle/Range".

Connecting Synchronization cables


Note: There are two types of connection of synchronization cables, depending on which operation mode
is selected. Only the software connection is introduced below, but please also change the connection in
both software and hardware when you switch between the relevant operation modes.

Capture mode
In the Synchronization Cables connections diagram, make a connection from the Synchronizer to the C/L
Synchronizer “Trigger” input.
Preferably connect the C/L Synchronizer to the same output as Q-Switch 1 on the laser (or the output that
triggers the first light pulse in the PIV system). This will ensure that the position is captured at the
same time as light from the laser is emitted and the particles are illuminated.

Capture mode and Start mode


If the laser has a warm-up period in which the Q-switch signal is also triggered, it is preferable to use the
"Start Acq" input. This will ensure that these leading trigger signals do not cause capture of position
information, which would introduce misalignment between the position information and the image capture.
Connect the "Start Acq." input on the C/L Synchronizer to the same output on the Synchronizer as the
camera (or one of the cameras). Then, in Operation Setup, set Start Mode to “External” and the Start
Trigger polarity to the same as the camera trigger polarity.

Other modes Connections


In the Synchronization Cables diagram connect the “OUT 1” output of the C/L Synchronizer to the external
trigger input of the Synchronizer that handles synchronization of the camera and laser.
Note: In the setup for the Synchronizer that handles synchronization of the camera and laser, check that
the trigger mode is set to External.

Mode selection
The C/L Synchronizer has many different modes in which it can run. In the following, each mode will be
described in detail. (The mode Calibration will not be described here; see section Calibration for more
information.) In each mode, various parameters should be set up in the Operation Setup below Mode Setup.

Capture
In this mode the C/L Synchronizer receives sync signals from both an ordinary Synchronizer and the
encoder.

As long as the acquisition sync signal is received by the C/L Synchronizer from the ordinary Synchronizer,
the cycle count and position information is acquired from the encoder and saved. The position information
is added to the timestamp information of the acquired image, and when images (or analog data) are
presented the position at which the image was acquired can be monitored.
Note: For the Linear Synchronizer, all positions on the slider will be used and not only those within the
Working Range. The Zero position is automatically moved to Position (1), no matter what Zero position has
been entered.

Operation Setup - Trigger Polarity


Specifies which edge of the Angle Inc. signal to use to trigger the saving of the required information.

Continuous Fixed Position


In this mode the C/L Synchronizer is set to continuously generate a sync signal at a specific position on
each function cycle.

· Mode Setup - Start Cycle


Specifies the Cycle number at which the synchronization starts.
· Mode Setup - Skip Cycles
Specifies a number of Cycles to skip in between each sync signal sent to the Synchronizer that handles
the camera and laser.
· Mode Setup – Position
Specifies the Position at which to synchronize the system.

Burst Modes
The definition of a burst is a predefined number of sync signals sent from the C/L Synchronizer to the
connected Synchronizer. When the predefined number of signals has been sent a new burst can take place.
For all burst modes the following parameters can be used:
· Operation Setup – Trigger Mode
This parameter describes how each burst of sync signals is to be started.
· Internal
The Burst will start automatically after the C/L Synchronizer receives the first Index pulse from the
encoder.
· External
The Burst will start on the first pulse received on Trigger input. (Remember to Set Trigger polarity cor-
rectly).
· User Action

A prompt will be presented in DynamicStudio. Here it will be possible to start a burst or stop the acqui-
sition.
· Operation Setup – Burst Interval
Using this parameter it is possible to describe a minimum time between each start of a new burst in mil-
liseconds. If the time set is less than the duration of a full burst period, this setting will be ignored.
· Mode Setup - Number of Sync Signals per Burst
Specifies the number of sync signals per burst.

Burst within Cycle


In this mode a burst of sync signals is sent to the Synchronizer. All sync signals are within the same Cycle.
Whenever a burst is finished, the next burst can be started. The starting position in each burst/Cycle will always be the same, as will the following positions in the burst.

Note: Take care that the resulting trigger frequency does not exceed the maximum trigger frequency for the Synchronizer that handles synchronization of the camera and laser (see the small sketch after the parameter list below).
The generated trigger frequency can be monitored in Status - "Sync Output Frequency" - as shown below:

· Mode Setup - Start Cycle


Specifies the Cycle number at which the synchronization can start a burst.
· Mode Setup – Position
Specifies the starting Position at which to start the burst in the cycle.
· Mode Setup - Position Increment
Specifies the position increment that will be added to the previous position.
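To judge whether the note above is satisfied, the sync output frequency produced by a burst can be estimated from the rotation speed and the position increment. The small Python sketch below assumes positions are expressed in degrees of rotation; it is a back-of-the-envelope check only, not DynamicStudio code:

def burst_sync_frequency(rpm, position_increment_deg):
    deg_per_second = rpm * 360.0 / 60.0        # rotation speed converted to deg/s
    return deg_per_second / position_increment_deg

print(burst_sync_frequency(rpm=1200, position_increment_deg=2.0))  # 3600.0 Hz

If the resulting frequency exceeds the maximum trigger frequency of the connected Synchronizer, increase the position increment or reduce the number of sync signals per burst.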

Burst Fixed Position
This mode is very similar to Continuous Fixed Position; the difference is that here a burst of sync signals is sent to the connected synchronizer, one sync signal per rotation.

· Mode Setup - Start Cycle


Specifies the Cycle number at which the synchronization can start a burst.
· Mode Setup – Skip Cycles
Specifies the number of cycles to be skipped in between each sync signal sent to the Connected Syn-
chronizer.
· Mode Setup – Position
Specifies the Position at which each sync signal is sent to the connected Synchronizer.

Burst Position Increment per Sync


This is similar to Burst within Cycle, except that only one sync signal per full rotation is sent to the connected synchronizer.

· Mode Setup - Start Cycle
Specifies the Cycle number at which the synchronization can start a burst.
· Mode Setup – Skip Cycles
Specifies the number of cycles to skip in between each sync signal sent to the connected Synchronizer.
· Mode Setup – Position
Specifies the starting Position at which the first signal in the burst is triggered.
· Mode Setup - Position Increment
Specifies the position increment that will be added to the previous position for the next sync signal within a burst.

Burst Position Increment per Burst


In this mode the first burst of signals will be sent at the same position, one signal per Cycle. In the next burst the synchronization signal out of the C/L Synchronizer will be at the previous position plus the position increment.

· Mode Setup - Start Cycle


Specifies the Cycle number at which the synchronization can start a burst.
· Mode Setup – Skip Cycles
Specifies how many cycles (zero or more) are skipped in between each sync signal sent to the connected Synchronizer.
· Mode Setup – Position
Specifies the starting Position at which the first burst of signals will be created.
· Mode Setup - Position Increment
Specifies the position increment that will be added to the previous position for the next burst of signals.

Engine Multi Operation Sync.


This mode provides nearly full control of each synchronization pulse sent out from the C/L Synchronizer.
In this mode it is possible to define a number of Burst Periods for one full Burst Cycle. Each signal in a burst period can be set at an arbitrary position (at most one position per cycle). It is also possible to define on which output (OUT 1, OUT 2, or both) the sync signal is to be set.
It is particularly useful for measurements on an optical internal combustion engine. Normally an optical engine cannot be run with combustion for more than a few minutes, due to the thermal limitations of the optical glass. It therefore needs to run in motored mode (without combustion) after the fired mode to avoid over-heating of the optical engine. In order to continue the measurement during the motored operation, it is desirable to have different synchronization signals for the different operation modes of the engine.
Note: Although this mode is called "Engine Multi Operation Sync.", it is not restricted to engine measurements. It provides very large freedom to control each synchronization pulse sent out from the C/L Synchronizer, and can therefore also be used for other applications that require complex synchronization.

· Mode Setup - Start Cycle


Specifies the Cycle number at which the synchronization can start a burst.
· Mode Setup - Number of Burst Periods Per Full Cycle
Defines the number of “burst periods” in the one Burst Cycle.
· Mode Setup – Burst Period #x (x is the index of each burst period)
For each "Number of periods per full cycle" defined above, a Period entry will appear in the parameters for the C/L Synchronizer.
In each period it is possible to define the following:
· Period Name

Specifies the name of this period. The default name for Burst Period #1 is "Fired" and for Burst Period #2 it is "Motored".
· Cycles in Burst Period
Here it is defined how many cycles this Period covers.
· Sync pulses
Defines the number of Sync pulses to be created during the period.
· Skip cycles
For each sync pulse to be created, a number of Skip cycles can be defined in between each sync pulse. This defines how many cycles to skip before the next sync pulse is output.
· Positions
For each Sync pulse to be created, the position must be defined. This is done here.
· Enable Sync1 and Enable Sync2
These parameters define whether the sync pulses are to be created on one or both of the Sync outputs. If set to true the channel is enabled, and vice versa.
Example
Burst Period #1: 4 | 2 | 1; 1 | 90; 90 | True | False
Burst Period #2: 1 | 1 | 0 | 120 | False | True
In this setup there will be two burst periods in one burst cycle. The first one contains 4 cycles. Among these 4 cycles, a sync pulse will be generated in the first and the third one, because 1 cycle is set to be skipped after each sync pulse. Each sync pulse is sent out at position 90 in its cycle. The sync pulses are sent out on OUT 1, because only this channel is enabled.
The second burst period contains only 1 cycle, with one sync pulse and no skip cycles. This sync pulse is sent out at position 120 in its cycle, on OUT 2, because only this channel is enabled here.
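The following Python sketch unrolls the example above into a per-cycle schedule. The data structure and field names are purely illustrative and do not correspond to DynamicStudio's internal format:

periods = [
    dict(name="Fired",   cycles=4, pulses=[dict(pos=90, skip=1), dict(pos=90, skip=1)],
         out1=True,  out2=False),
    dict(name="Motored", cycles=1, pulses=[dict(pos=120, skip=0)],
         out1=False, out2=True),
]

cycle_offset = 0
for period in periods:
    local = 0                                   # cycle index inside this burst period
    for pulse in period["pulses"]:
        outs = [name for name, on in (("OUT 1", period["out1"]), ("OUT 2", period["out2"])) if on]
        print(f"cycle {cycle_offset + local + 1}: position {pulse['pos']} on {', '.join(outs)}")
        local += 1 + pulse["skip"]              # this cycle plus the skipped ones
    cycle_offset += period["cycles"]

Running it prints pulses in cycles 1 and 3 (position 90, OUT 1) and in cycle 5 (position 120, OUT 2), matching the description of the example.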

Fixed Position Window Triggering


In this mode the C/L Synchronizer sends signals at a predefined fixed rate on OUT 1. The C/L Synchronizer will continue to do this even when no rotation of the Encoder shaft takes place.
When the C/L Synchronizer starts to receive Encoder Reset and Increment signals, and the frequency and position start to match the predefined trigger signal, the C/L Synchronizer will try to phase-lock to the external signal, until eventually the encoder rotation drives the OUT 1 signal.
When phase-locked, the C/L Synchronizer starts to send out sync signals on OUT 2.
If for some reason the external signal disappears, the C/L Synchronizer will take over and again drive a steady sync frequency on OUT 1, and the OUT 2 signal will stop.
Note: This mode is designed to be used with devices that have a fixed trigger rate (some lasers) and a warm-up period of 10 minutes (or even longer).

· Mode Setup – Position
Specifies the starting Position at which the sync signals are created.
· Mode Setup – Sync Signal Frequency
Specifies the frequency at which the system will be driven.
· Mode Setup – Sync Signal Window Size
Specifies the window size (in time) within which the system can be triggered from the cyclic world. This should be derived from the specifications of the related devices. For example, if a laser's trigger rate has to be 10±0.4 Hz, the window size should be (1000/9.6 - 1000/10.4) ≈ 8 ms.
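The same window-size calculation can be written out as a small, self-contained check (plain Python, not part of DynamicStudio):

def window_size_ms(nominal_hz, tolerance_hz):
    # Spread in trigger period (ms) for a device specified as nominal_hz +/- tolerance_hz.
    return 1000.0 / (nominal_hz - tolerance_hz) - 1000.0 / (nominal_hz + tolerance_hz)

print(round(window_size_ms(10.0, 0.4), 1))  # 8.0 ms for the 10 +/- 0.4 Hz example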

Start Mode
The "Start Acq." input on the C/L Synchronizer can be used to trigger the starting point of when to start triggering the Synchronizer (or, in Capture mode, when to start the capture).
For all start modes, a valid RPM has to be measured before any operation can start.
There are three different ways to start:
· Internal
The operation will start automatically
· External
The operation will start on the first pulse received on the Start Acq. input. (Remember to set the Start trigger polarity correctly.)
· User Action
A prompt will be presented in DynamicStudio. Here it will be possible to start the operation or cancel the
acquisition

Valid RPM Range


Here it is possible to specify an RPM range within which the system will synchronize to the cyclic experiment. If the rotation speed is lower or higher than specified, no synchronization will take place.
The current RPM measured by the C/L Synchronizer can be monitored in Status, which is described below.

Status
The Status section of the properties for the C/L Synchronizer provides information about the current state
of the C/L Synchronizer.

Sync Output Frequency


When the C/L Synchronizer has been set up and is providing sync signals to a connected Synchronizer, the maximum frequency of the Sync output can be monitored here.
When setting up the system, make sure that the frequency shown here does not exceed the maximum trigger frequency of the attached Synchronizer.

Rotation Speed (rpm/Hz)


When the C/L Synchronizer starts to receive Index and Position Increment pulses the rotation speed can
be monitored here.

Cyclic repetition Speed (rpm/Hz)


This monitors the cyclic repetition speed, which is calculated from the rotation speed shown above.
Note: To update the two status values above, click between two devices in the Synchronization Cables panel.

Internal Temperature
The internal temperature of the C/L Synchronizer can be monitored here.

Triggering the Synchronizer


When triggering a synchronizer in PIV mode, the Laser is fired and an image (or double image) is acquired.
But the process of triggering a laser and camera takes time.
Below is a timing diagram of what happens when the synchronizer is triggered (See synchronization in gen-
eral for detailed information on this diagram):

As can be seen above, the very first thing that happens is the triggering of Flash lamp 1 in the laser. Usually a flash-pumped laser has to be pumped for around 200 microseconds before the Q-switch can be triggered and light is emitted from the laser. In this example this is the period that takes the longest time to prepare for an acquisition.
During these 200 microseconds the rotation continues, and therefore the actual acquisition will not take place at the position specified!
Example: To calculate the actual position at which the acquisition will take place:
Rotation speed is 1000 RPM = 16.666 RPS => position speed 6000 deg/s
Flash-to-Q-switch delay is 200 microseconds (which for this example is the time needed before acquisition can take place):
200e-6 s * 6000 deg/s = 1.2 deg
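The same correction can be computed for any rotation speed and delay; the snippet below is simply the arithmetic above written as Python, with illustrative names:

def position_offset_deg(rpm, delay_s):
    deg_per_second = rpm * 360.0 / 60.0      # 1000 RPM -> 6000 deg/s
    return deg_per_second * delay_s

print(position_offset_deg(rpm=1000, delay_s=200e-6))  # 1.2 degrees

If this offset matters for the experiment, a correspondingly earlier position can be requested so that the light pulse arrives closer to the intended angle.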
Note: Have a look at the timing diagram for the Synchronizer to see what the actual delay is from an external Trigger until the laser is fired.

13.6.15 Using Two Synchronizers


In some applications it is necessary to use more than one synchronizer. This could be because one synchronizer does not supply enough outputs to accommodate all the devices needed, or because some devices in a system need to run at a different speed than others.

For a description on how to setup a system using two synchronizers, refer to the help for the specified syn-
chronizer:

Timer Box

13.6.16 Pulse Receiver


In some applications DynamicStudio needs to synchronize external equipment. In this case a Pulse Receiver can be added to the Acquisition system to represent the external device. Using the Pulse Receiver you can define the synchronization pulses that the other equipment needs.

Adding the Pulse Receiver to the Acquisition system


The Pulse Receiver is added as any other device that is not automatically detected by the system:

1. Right click the Acquisition Agent under which the Pulse Receiver is to run
2. Select "Add new Device..."
3. From the list of devices select "Custom Synchronization -> Pulse Receiver", and the Pulse
receiver is added to the system

Using the Pulse Receiver


The following parameters are used to define a pulse train for the external equipment. The descriptions of the parameters refer to T0 and T1. T0 is defined to be the first light pulse from the light source in the system. For further details regarding T0 and T1 please refer to "Synchronization" on page 172.

Activation time
All devices in the system calculate their activation time relative to T0. If this device needs to be activated (triggered) before T0, a negative time can be entered.

Activation time relation
If the system is running in Double Frame mode, it is possible to set the specified Activation time relative to
T0 or T1 (first or second light pulse). If the system is running in Single frame mode the Activation time will
always be relative to T0.

Activation pulse train


Here a pulse train for the external device is defined: The pulse train must always contain one or more pairs
of timing values, where each pair consists of a pulse width and a dead (off) time following the pulse:
Pulse width 1; Dead time 1; Pulse width 2; Dead time 2;  ... ;Pulse width N; Dead time N.
To define one pulse with an active time of 10 micro seconds and a dead time of 100 micro seconds, enter
"10; 100".
To define a pulse train with first an active time of 10 micro seconds, then a dead time of 100 micro sec-
onds, then an active time of 20 micro seconds and a dead time of 10 micro seconds, enter "10; 100; 20;
10".
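A hedged sketch of how such a pulse-train string can be interpreted is shown below (plain Python; the parsing details are illustrative and not the exact behaviour of DynamicStudio):

def parse_pulse_train(text):
    values = [float(v) for v in text.split(";") if v.strip()]
    if len(values) % 2 != 0:
        raise ValueError("expected pairs of pulse width and dead time")
    return list(zip(values[0::2], values[1::2]))     # (pulse width, dead time) pairs

pairs = parse_pulse_train("10; 100; 20; 10")
print(pairs)                                         # [(10.0, 100.0), (20.0, 10.0)]
print(sum(width + dead for width, dead in pairs))    # 140.0 microseconds in total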

Activation signal polarity


Here the active time polarity is defined. If the active time is to be high, select Positive; otherwise select Negative.

Sync. with TBP


If two pulses have been defined in the Activation pulse train, then selecting Yes for this property will auto-
matically place the second activation time relative to T1 (the second light pulse). This ensures that with
later changes in Time Between Pulses, the pulse train will automatically be updated accordingly.

Sync at every Nth signal


Specifies how often the pulse is to be generated. If N = 1 it is generated every time. If N = 2 it is generated every second time, and so forth.

Start sync at Nth signal


Specifies when to start the generation of pulses. If N = 1, start right away. If N = 2, start at the second signal, and so forth.
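Taken together, the two parameters can be read as follows (illustrative Python only, based on the descriptions above):

def pulsed_signals(start_n, every_n, total_signals):
    # Signal numbers (1-based) that actually produce a pulse on the Pulse Receiver output.
    return [s for s in range(1, total_signals + 1)
            if s >= start_n and (s - start_n) % every_n == 0]

print(pulsed_signals(start_n=2, every_n=2, total_signals=8))  # [2, 4, 6, 8]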

Examples:

Ex.1
Given the following input:

Will result in the following output for the external device:

Ex.2
Given the following input:

Will result in the following output for the external device:

13.7 Illumination Systems


The DynamicStudio Imaging System supports many methods of producing stroboscopic light-sheets;
pulsed lasers, continuous wave lasers and electro-optical shutters are some typical examples.

Note
The DynamicStudio Imaging System has the capability of supporting pulsed laser or a shutter device com-
bined with a continuous laser system.

Please read about "Laser Safety" (on page 21).

13.7.1 Pulsed Lasers


The synchronization part of the DynamicStudio Imaging System controls the generation of trigger pulses.
Two trigger pulses are generated for each laser cavity to fire a light pulse.
Some synchronization units also feature a failsafe watchdog timer, which can cut the laser interlock to
shut it down in case of system failure.

It is also recommended that you use the following accessories when working with the Nd:YAG lasers

l Laser goggles
l Laser power meter

13.7.2 Dual Cavity pulsed lasers


When running Dual Cavity lasers in single frame mode you will in most cases have the possibility to use both cavities of the laser, making it possible to run the laser at twice the frequency compared to Double frame mode.
This functionality has to be enabled in the parameters for the laser. Below is an example of the parameters for the DualPower 132-15 Laser:

To use both cavities in single frame mode, enable the parameter "Use both cavities in single frame mode".

13.7.3 DualPower Lasers

Software Control
The flash-pumped DualPower lasers have the option to be controlled through the DynamicStudio software or a control pad. To control the laser via the software, the laser needs to have its serial cable (RS232) attached to the PC. If the PC doesn't have an RS232 connector it is possible to use a USB-to-Serial converter compatible with the operating system used.
To control a DualPower laser from DynamicStudio, just add the laser to the "Device tree" by right-clicking the "Acquisition Agent" icon and selecting "Add New Device...". When added, the device tree will look like this:

The DualPower device specifies the laser trigger properties. These properties are used by DynamicStudio to set up the correct timing for acquisition.
The Laser control device is for controlling the laser.
When the Laser has been added it will automatically try to establish a connection with the Laser control. If it can't be found on the default COM port, a message will be shown where it is possible to disable the Laser control. This would typically be the case if the laser is to be controlled by a control pad. The software control of the laser can also be disabled by right-clicking the Laser control device and selecting "Disable".

Normal View
In "Normal View" the only property accessible is the laser energy level (for the DualPower 10-1000,15-
1000 and 20-1000 this property is called "Laser diode current"). This is in percent of the maximum and con-
trols the energy of the laser light.

Advanced and Expert View


In "Advanced" & "Expert" view it is possible to change the communication port used to control the laser. It
is also possible to view the status of the laser (and see for example if water pump is on, if any interlocks
have been tripped, etc ...).

Tripped interlocks will in most cases prevent the laser from firing, so interlock status can be useful for trou-
bleshooting. For further information on the interlocks, please refer to the laser operators handbook.

13.7.4 Time resolved lasers


For the DualPower 10-1000, 15-1000, 20-1000, 23-1000 and 30-1000 there are some extra properties in
advanced view mode:

Laser Repetition Rate: Used if the Trig 1 and Trig 2 are set to use the internal trigger.

RF On: The RF switch turns on or off the Radio Frequency generator for the Q-switches. This must be on
for pulsed operation of the laser. It is recommended to leave the RF switch constant on (default) whenever
the pump diodes are on. If the RF is switched off the laser will emit continuously (CW) at unspecified power, and the Q-switch trigger will have no function.
Trig 1 Externally: If this is set to false the laser will be triggered with the internal repetition rate.
Trig 2 Externally: If this is set to false the laser will be triggered with the internal repetition rate.

13.7.5 Shutter Devices


Instead of a pulsed laser, a shutter device may be used. This shutter can be used together with a con-
tinuous wave laser and the trigger signals from the synchronization unit of the DynamicStudio Imaging
System can be employed to open the shutter (and close it again after a specified period of time).

13.7.6 Lee Lasers

Products described on this page are all registered trademarks of Lee Laser Inc. For more information visit
http://www.leelaser.com.

13.7.7 New Wave Lasers

Pegasus and Solo are trademark of New Wave™ Research, Inc. For more information please visit
http://www.new-wave.com.

13.7.8 Microstrobe
The microstrobe light source is a gated light source. By definition the gated light source is illuminating dur-
ing the period of time that it is fed a pulse. The image below shows the relation between the pulses fed to
the light source, and the lighting that it emits.

It is possible to configure the two pulses independently from one another, with the exception of the posi-
tion of the pulses relative to time zero (shown in the image above as the dotted red line) and the delay to
close, which are shared settings.

The Activation Delay is the time duration from when the light source receives a pulse till it illuminates.
The sum of the Pulse Width and the Delay to Close is the time duration that the light source will illuminate.

Example:
Activation delay: 1 us
Pulse width: 3 us
Delay to close: 1 us
Trigger polarity: positive
(the time bar unit is us)
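Written out (all times in microseconds, following the description above; this is just the arithmetic, not DynamicStudio code):

activation_delay, pulse_width, delay_to_close = 1.0, 3.0, 1.0

light_on  = activation_delay                                  # light starts 1 us after the trigger
light_off = activation_delay + pulse_width + delay_to_close   # and stops 5 us after the trigger
print(light_on, light_off, light_off - light_on)              # 1.0 5.0 4.0 -> 4 us of light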

13.7.9 Brilliant B laser


The Brilliant B laser is a powerful flash lamp pumped, Q-switched, single cavity Nd:YAG laser, well suited
for intensity based measurements. It can be used as a stand alone laser source using the second, third or
fourth harmonic for various applications such as soot diagnostics or tracer-LIF. Optionally it can also be
used as a pump laser source for a tunable dye laser such as the TDL+ dye laser models. This approach is
used for LIF measurements of e.g. combustion species.
To learn more about the control of the TDL+ laser, please see "TDL+ laser" on page 210.
The Brilliant B laser has the option of being controlled from either the DynamicStudio software or a control
pad. To control the laser from the software, the laser needs to be connected to the PC with a serial cable
(RS232) and the option has to be enabled from the control pad. [If the PC does not have an RS232 port it
is possible to use a USB-to-Serial converter compatible with the operating system].

After adding the Brilliant B laser to the Acquisition Agent, DynamicStudio will try to detect it. The image
below shows the detected laser in the device tree:

Even if the laser is not detected it is possible to fire the laser by connecting synchronization cables to a synchronizer and setting up the laser using its control pad.

When the laser is detected it is possible to monitor and control it from DynamicStudio using the property
editor.

The Q-Switch status can be controlled from the properties. This makes it easy to enable or disable the Q-
Switching during a preview while keeping the Flash lamp running.

When an interlock is tripped, a message box will pop up describing the problem. The problem can also be
seen in the properties of the Brilliant Control.
In the property help, at the bottom of the property editor, a description of each interlock can be seen.
For more information on the Brilliant B laser, see the manual that came with it.

13.7.10 TDL+ laser


The TDL+ laser is a tunable dye laser, capable of producing laser radiation within a very wide wavelength
range from ultra-violet through visible light and up to infra-red. This laser needs to be pumped by another
laser source in order to emit laser light, and cannot be used as a stand-alone laser. The most commonly
used pump laser source for the TDL+ is the Brilliant B Nd:YAG laser. This laser combination can be used
for LIF measurements of e.g. combustion species.
To learn more about the control of the Brilliant B laser, please see "Brilliant B laser" on page 209.
The TDL+ laser has the option of being controlled from either the DynamicStudio software or a control
pad. Both options will allow you to control the output laser wavelength and set a wavelength scanning
speed.
To control the laser from the software, the laser needs to be connected to the PC with a serial cable
(RS232). [If the PC does not have an RS232 port it is possible to use a USB-to-Serial converter com-
patible with the operating system].
The TDL+ laser is added to the device tree by right clicking the laser source that pumps it and selecting
'Add new device...'. When it has been added DynamicStudio will automatically try to detect the laser.

The TDL+ can be controlled either from its properties or from a dialog. The dialog is opened by clicking the
button (...) in the 'Wavelength control' parameter.

Parameter description

l Current wavelength:
Shows the current output wavelength of the TDL+ laser.
l Fundamental wavelength:
Shows the current fundamental wavelength using the selected UV extension scheme (see
below).
l UV extension scheme:
The dye laser will typically emit visible light. If necessary this can be converted into UV light in a
number of ways, selectable by the user.
It is possible to select between 4 different schemes:
Not used: λ_f = λ_out.
Frequency doubling: λ_f = 2 × λ_out.
Frequency mixing: λ_f = 1 / (1/λ_out - 1/λ_IR).
Mixing after doubling: λ_f = 2 / (1/λ_out - 1/λ_IR).
λ_f: The wavelength (in nanometers) of the fundamental dye laser beam.
λ_out: The wavelength (in nanometers) of the output laser beam.
λ_IR: 1064 nm.
(A small numeric sketch of these schemes follows the parameter list below.)
l Stop:
Stops any ongoing movement in wavelength.

l Go to wavelength:
Enter the wavelength that you want to go to as fast as possible (1 nm/s)
l Scan start:
Enter the wavelength that you want a scan to start from.
l Scan end:
Enter the wavelength that you want a scan to stop at.
l Scan speed:
Select the speed of the fundamental wavelength that you want to use during a scan.
l Start scan:
Starts a scan.
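As mentioned above, here is a small numeric sketch of the four UV extension schemes (plain Python; the scheme names follow the list above, everything else is illustrative):

LAMBDA_IR = 1064.0  # nm

def fundamental_wavelength(lambda_out, scheme):
    if scheme == "Not used":
        return lambda_out
    if scheme == "Frequency doubling":
        return 2.0 * lambda_out
    if scheme == "Frequency mixing":
        return 1.0 / (1.0 / lambda_out - 1.0 / LAMBDA_IR)
    if scheme == "Mixing after doubling":
        return 2.0 / (1.0 / lambda_out - 1.0 / LAMBDA_IR)
    raise ValueError(f"unknown scheme: {scheme}")

print(round(fundamental_wavelength(283.0, "Frequency doubling"), 1))  # 566.0 nm fundamental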

For more information on the TDL+ laser, see the manual that came with it.

Using the handheld controller while connected to DynamicStudio


If the TDL+ laser is operated via the handheld control pad while connected to DynamicStudio, the prop-
erties are not updated. To update the properties press the stop button.

13.8 Traverse systems

13.8.1 Traverse Control


The following traverse systems are supported by DynamicStudio:

l "Isel" (on page 213)


l "Tango" (on page 220)

To interface with other traverse systems please contact Dantec Dynamics.


For troubleshooting click here.

Setting up the hardware


See the manual that came with the traverse system.

Setting up the Traverse Agent


To control a traverse system you need to set up a Traverse Agent. This is done from the menu Tools->Co-
nfiguration Wizard. The wizard will guide you through the process. The next time you enter Acquisition
mode in DynamicStudio the Traverse Agent is started automatically and is visible from the Windows Sys-
tem Tray.

When entering Acquisition mode the Traverse Agent will initialize and by default load the Isel traverse
driver. A communication error message may appear if an Isel system is turned off, not connected or if you
have a different traverse system. Just ignore this error.

Loading the traverse driver


To load the driver right click the Traverse Agent icon and select the 'Show' item in the context menu.

This opens the Traverse Agent dialog. From the menu File->Load Driver... it is possible to load any trav-
erse driver supported.

Drivers are using the following naming convention: Traverse.Driver.[traverse name].dll

Isel
When the driver is loading error messages may appear. Typical reasons for error messages are:

l The traverse system is not connected and turned on.


l The traverse system is not connected to the default communication port of the driver (setting the right communication port is part of the driver configuration; see COM port below).

Configuration
Configuration of the Isel driver is done through the 'Configuration' menu item shown in the image above.
The different tabs in the configuration is described below.

COM port 
To be able to control the traverse, the driver needs to know which port the traverse system is connected
to.

Here it is possible to see all COM ports on the PC and their status. Select the one that the traverse sys-
tem is connected to.
Determining the COM port used can often be done by inspecting the connector on the PC (a number near
the connector).
It is possible to change the Baud rate. If the wrong Baud rate is used communication with the traverse sys-
tem will fail.

Axis Assignments
It is possible to assign and disable the physical axes.

Instead of rearranging the cables from the traverse controller to the axis it is possible to assign the axis via
software.
Example: If the axis assignment above causes the physical X-axis to move when the Y-coordinate is
changed in the software, then reassign 'Y pos' to axis #1. Apply similar tests for the other axes.

Speed and Calibration


It is necessary to specify the speed at which each axis should move and also the number of pulses/mm
(the latter can be read from the physical traverse axis).

In this case axis 1 is set to a speed of 40 mm/s and a calibration factor of 80 pulses/mm. This means that to move this axis 1 mm the traverse controller needs to give 80 pulses to the axis motor, and at 40 mm/s the motor receives 3200 pulses/s (see the small calculation below).
If the calibration factor is set wrong, each movement will be wrong.
Ramp, specifying the acceleration ramp of the traverse start and stop movements, is only supported on a
few controllers. (For example the iCM-S8 controller)
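The calculation referred to above, written out in plain Python (the values are the ones from the example, not driver defaults):

speed_mm_per_s = 40.0    # axis speed configured in the driver
pulses_per_mm  = 80.0    # calibration factor read from the physical axis

pulses_for_one_mm = pulses_per_mm                   # 80 pulses move the axis 1 mm
pulse_rate_hz     = speed_mm_per_s * pulses_per_mm  # 3200 pulses/s at 40 mm/s
print(pulses_for_one_mm, pulse_rate_hz)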

Software limits
It is possible to predefine a set of limits for each of the axes on the traverse system.

A minimum and a maximum position can be entered to ensure that the traverse keeps within the specified
bounds. It is also possible to disable this feature by placing a checkmark in 'Disable limits'.

General 
In both the Traverse Agent and the Traverse Control dialogs there is a reset button. The operation of this
button depends on the Reset mode.

The following options are available:

l Reset reference at current position.


The software will set the current physical traverse position to (x, y, z)=(0, 0, 0).
l Move to home position and reset reference.
The software will move the traverse to its home position (home position is a physical switch on
each axis) and then set the position to (x, y, z)=(0, 0, 0).
l Simulate reference movement.
The software will simulate a movement to the home position without actually moving the traverse,
and then set the position to (x, y, z)=(0, 0, 0). This option is only available for the iMC-S8 ISEL
controller.

Programming
Some Isel controllers have the ability to be programmed. If a program is loaded into the Isel Controller, the program will be executed when an Acquisition is started.

The program is saved in a file with the extension ".out". This is a text file holding the instructions that are executed when the program is running.
Below is an example on such a program with some description:

Code Description

p0,128,0 Turn off LED's

71 move to reference position (home)

m10000,5000 move to start position in steps, and with a speed in Hz

n1 reset position to 0

j300 set start/stop frequency to 300Hz

J80 acceleration forward in Hz/ms

p0,128,1 Turn on LED 1 turn off LED 2

m16021,12000 move forward to end position

p0,128,2 Turn off LED 1 and turn on LED 2

m0,23535 move backward to reset position

3 20,-4 loop the last 4 lines 20 times

p0,128,0 Turn off LED's

9 end programming

For details on the programming of the Isel controller please refer to the user manual for the Isel controller.

To load the program, enter the full path to the program, or click Browse and select the file to load.
Click Load and Start Program to load the program in to the Traverse controller.

NOTE: If a valid program is loaded into the controller, the program will be executed when starting an acqui-
sition in DynamicStudio.

To make sure that no program is loaded into the Traverse controller, click "Unload Program".

Tango
To setup the Tango traverse driver see the manual on the CD that came with the system.

Note
For microscopic measurements, when pressing the home button, the motor used for focusing will run for a
while because there is no stop switch on that axis and by definition the home position is the switches on
the axis.

Controlling the traverse


There are three ways to control any supported traverse system.

l Controlling directly from the Traverse Agent.


l Controlling from DynamicStudio Traverse Control.
l Controlling from DynamicStudio Acquisition Manager.

Controlling directly from the Traverse Agent


Controlling the traverse system from the Traverse Agent is usually done just to verify that the connection is working and everything is set up and configured correctly.

l Read button
Reads the current position of the traverse. If an axis has been moved manually, without the use of the software, the position will not be correct.
l Move button
Moves the traverse to the position specified in the number fields on the left of the dialog.
l Stop button
Stops an ongoing movement of the traverse.
l Reset button
Resets the control and traverse system. The functionality of this button depends on the Traverse system used and the way its driver is configured (see the Reset mode of the Isel driver as an example).
l Home button
Moves the traverse to its home position which is defined to be the position of the physical stop
switches on each axis.

Controlling from DynamicStudio Traverse Control


To control the traverse system from the Traverse Control in DynamicStudio you need to set up DynamicStudio to automatically load the Traverse Agent, see Setting up the software. After DynamicStudio has been set up and Acquisition mode has been entered, the Traverse Agent can be seen in the Device tree.

From the properties of the Traverse Agent it is possible to specify the traverse driver either by selecting
from a list or by entering the path and name to the driver.
The Traverse Control can be found in the menu Run->Traverse Control.

Number fields and buttons correspond to the ones in the Traverse Agent, see Controlling directly from the
Traverse Agent.

Controlling from DynamicStudio Acquisition Manager
The Acquisition Manager can be found in the menu Run->Acquisition Manager. See Acquisition Manager
for more info.

Troubleshooting

Problem: When loading the driver an error message comes up, or the traverse system does not respond to any commands from the Traverse Agent.
Action:
1. Check that the traverse system is turned on and connected to the PC.
2. Check that the right communication port has been selected in the configuration of the driver. For the Isel driver see COM port. For the Tango driver see the manual that came on the CD with the system.

Problem: Using the Tango traverse, pressing the Home button seems to freeze the software.
Action: The motor used for focusing will run for a while because there is no stop switch on that axis, and by definition the home position is the switches on the axis. Wait for the motor to stop.

Problem: When pressing the Move button the traverse does not move.
Action:
1. Check that none of the axes on the traverse have overshot the home switches. If so, manually move the position to the other side of the home switch.
2. When using the iMC-S8 ISEL Controller, remember to move to home before moving the axis. Press the Reset button and ensure that the Reset Mode is set to Home or Simulate Home.

Problem: Axis movement does not correspond to the entered value.
Action: From the configuration of the traverse driver make sure that the right calibration constants have been entered.

Problem: The traverse system does not respond to any commands from DynamicStudio.
Action:
1. Make sure that the Traverse Agent and the right traverse driver are loaded. See Setting up the software for more info.
2. Leave Acquisition mode, exit the Traverse Agent from its context menu and enter Acquisition mode again. This will start the agent again.
3. Power the traverse system off and back on.
4. Restart DynamicStudio.

Problem: The wrong axis is moving when pressing the Move button.
Action:
1. In the configuration of the traverse driver check that the axis assignment is correct. For the Isel driver see Axis Assignments.
2. Rearrange the physical connectors.
14 Analysis
After images and related data have been acquired, an Analysis can be applied by selecting Analyze... from the menu. DynamicStudio includes a wide range of common analyses for image and vector processing, LIF, particle sizing, spray diagnostics etc.

Note
The different analysis methods available depend on the add-ons enabled and installed on your system.

14.1 Analysis Sequence


An Analysis Sequence is a combination of different analysis steps. The sequence can be applied to an
ensemble just like a normal individual analysis.
All the individual steps in the sequence are executed until the end result is achieved.

14.1.1 "Pickup" an Analysis Sequence


Before creating an Analysis Sequence all the individual analysis steps must be added and executed manually. When the settings of each individual analysis are adjusted to give the expected end result, the analysis sequence can be picked up for later use. This is done in the following way: Select all results in the sequence you just created (hold down the 'Ctrl' key and click the results). Right click one of the steps in the sequence and select "Add Analysis Sequence to Sequence Library" from the context menu.

When you select "Add Analysis Sequence to Sequence Library" the dialog "Add Analysis Sequence" will
appear. Here the sequence is displayed as well as all external dependencies. External dependencies are
other analysis results needed for the sequence to be executed. All internal dependencies are handled by
the Analysis Sequence itself. You can view the dependencies of any single analysis step by clicking on the analysis step. You can also see which analysis step requires which external dependency by clicking on any of the external dependencies.

14.2 Analysis Sequence Library


When an Analysis Sequence is picked up it is placed in the Sequence Library. This library holds all pre-
viously defined Analysis Sequences. The sequence library is global and is available from all databases.
When you want to apply an Analysis Sequence to your data, select Apply Analysis Sequence from the
Context menu, and select the sequence from the library. Please note that you can only apply a sequence if the parent data matches, meaning e.g. that if you have defined a sequence starting with a single frame, you will only be able to apply it to single frames. From the library you are able to manage your sequences: each sequence can be given a name and a description, and you can also remove unneeded sequences.

Duplicate an Analysis Sequence


It is possible to duplicate an analysis sequence. This is done by selecting the sequence that you want to duplicate and clicking the Duplicate button; alternatively press Ctrl+D, or right-click the sequence that you want to duplicate and select "Duplicate" in the context menu that appears.
The name of the duplicated sequence will be "copy of " followed by the original name.

Import/export an Analysis Sequence


The analysis sequence can be exported to and imported from a .sequence file.

Rename an Analysis Sequence
It is possible to rename an analysis sequence. This is done by selecting the sequence that you want to rename and clicking the Rename button, pressing F2, or right-clicking the analysis sequence and selecting "Rename". The sequence name will then be editable.

Changing the description of an Analysis Sequence


You can change the description of an analysis sequence. This is done by clicking in the Description text
box and editing the contents. Once the content is changed, press the Tab button, or click elsewhere in the
dialog. You will be asked if you really want to change the contents or not, and if you click yes the changes
are saved.

Change recipe setting for a method in an Analysis Sequence


For many Analysis Methods in an Analysis Sequence it is possible to change recipe settings.
To change the recipe for a method in a sequence, select the method in the sequence tree. If recipe settings can be changed, the button "Show Recipe" will be enabled; click it to see and possibly modify the recipe settings of the analysis method. When done, click "OK" and you will be asked whether you really want to save the changes to the analysis sequence.

14.3 Using Analysis Sequences


When an Analysis Sequence is to be applied, select the ensemble from which the analysis is to start. This ensemble is referred to as the "root node" of the sequence. If the sequence has multiple root nodes, select these as Input to Analysis. All external dependencies must be selected as Input to Analysis before you right click and choose "Apply Analysis Sequence...".

At that point you will have the dialog "Analysis Sequence Library" displayed. Here you can choose the
Analysis sequence that you want to apply. When you select a sequence the sequence will be displayed in
the Sequence view and all the external dependencies for the sequence will be displayed in the external
dependencies view.

If some external dependencies are missing, they will be marked with ">>> MISSING <<<" and it will not
be possible to start the execution of the sequence. You will have to leave the sequence library and make
sure that all external dependencies are selected.

You can examine which dependencies an Analysis step requires by clicking on the Analysis step. Clicking on any of the external dependencies highlights the analysis step that requires it. External dependencies and the root nodes will also be highlighted in the database view.
By clicking any of the root nodes in the sequence you can see where in the database the analysis will
start.

In some cases it is necessary to change the default association of external dependencies or root nodes
with the Input for Analysis. This is done by right clicking the root node or the external dependency. A list
will be shown from where you can choose the right input for analysis.

14.4 Predefined Analysis Sequences


A number of Predefined Analysis Sequences exist. The only difference between predefined and user-defined analysis sequences is that the predefined sequences cannot be deleted.
The Predefined Analysis Sequences are marked with "(predefined)" at the end of the sequence name.

14.4.1 Use of Analysis Sequences
Analysis Sequences can be applied in many useful situations, for example:

Applying the same analyses to many ensembles (Batch processing)


You have made a lot of measurements, and saved the data in multiple ensembles. Use the first ensemble
of data to experiment with your analysis settings until you get the result you need. Then save the
sequence of analyses and re-apply this to all the remaining measurement ensembles.

Batch processing is however not possible if the Sequence has multiple root nodes. Instead if you have the
Add-on "Distributed Analysis" you can add one sequence at a time to the job queue and continue to the
next.

Frequently used results


If you repeatedly look for the same results in your data, you can save a sequence of analyses that specifically finds your answer. Since the sequence can be built up using any combination of existing analyses, you can adapt your sequence to return precisely the result you seek.

14.5 Context menu


A context menu is a menu that pops up when right clicking an object, or when an object is selected and the Application key on the keyboard is pressed. The menu offers a list of options which vary depending on the context of the action and the item selected.

14.6 Distributed Analysis


Distributed Analysis is a way to reduce the time spent analyzing data by utilizing a network of PCs. Each PC will receive a dataset and calculate a result. This means that if 4 PCs are used for analysis, then in theory all datasets of an ensemble will be calculated 4 times faster than if only one PC were used. There is, of course, some overhead involved in transferring datasets over the network.
Distributed analysis has a close relation to the distributed database capabilities of DynamicStudio since
by default results are saved locally where they have been calculated.

14.6.1 Installing Analysis Agent software


A PC that is to be used during distributed analysis must have DynamicStudio installed.
It is not necessary to install a dongle on the remote agent, the system will always use the dongle installed
on the DynamicStudio PC.

14.6.2 Configuring Distributed Analysis.


Before distributed analysis can be used the system has to know which PCs to use during analysis.
To tell DynamicStudio which PCs to use, start up Agent Host as described in "Installing Analysis Agent software" on each of the PCs. Then start DynamicStudio and from the Tools menu select "Configuration Wizard".
The Configuration Wizard will guide you through the selection of Analysis Agents to be used. Note that it
is not possible to disable the local Analysis Agent.

Options for Distributed Analysis


Options for Distributed Analysis can be found in the Options dialog. To have the Options dialog displayed, select Options from the Tools menu.
The Analysis tab of the Options dialog holds the following checkboxes related to Distributed Analysis:

l Use Distributed Analysis (when checked enables distributed analysis)
As described in the dialog this enables you to queue up analysis jobs. If you remove the check-
mark the analysis will be carried out immediately using the traditional analysis engine.
Distributed Analysis can also be enabled/disabled in the Select Analysis Method dialog.
l Close Pending Analysis Jobs dialog on all jobs done (default checked)
This lets you decide if the 'Pending Analysis Jobs' dialog is to be closed automatically when all jobs have been done.
l Start Analysis jobs Automatically (default checked)
This lets you decide if the execution of the job queue is to be started automatically.
If you uncheck this option you must manually start the execution of the analysis jobs queue in the
Pending Analysis jobs Dialog.
l Local Agent Only (default unchecked)
This lets you decide if analysis is to be assigned to the local agent only.
l Save Analyzed data in Main Database(default unchecked)
By default all results are saved on the PC where analysis took place. This setting lets you decide
if you want all remote agents to save results in the main database.
(Note: It is possible to specify for each Analysis Agent if results are to be saved distributed or in
the main database. This is done in the Pending Analysis Jobs dialog.)

14.6.3 Analyzing using Distributed Analysis


When distributed analysis is enabled all analysis is added to a queue of analysis jobs. This job queue is
accessed via the "Pending Analysis Jobs" dialog. The "Pending Analysis Jobs" can be opened from the
Analysis menu by selecting "Analysis Jobs..."

Pending Analysis Jobs


The dialog is divided in to three tabs:

l Analysis Jobs
This tab displays the content of the job queue. It is possible to start/stop the execution of the job
queue and it is also possible to remove individual jobs as well as clearing the entire job queue.
(Note: it is not possible to remove a job that is being analyzed).
After having started the execution of the jobs in the queue, and after the first job is finished the
system tries to give an estimate of how long it will take to do the rest of the jobs in the queue. This estimate is presented just below the job content.
l Job progress
This tab displays the overall progress of the current job being analyzed. Also, after the very first result is saved, the system tries to estimate the remaining time.
The job is split into several Tasks. Each Task represents the analysis of one dataset in the ensemble. The progress of each Task is also displayed.
Some methods are implemented to utilize all CPU cores found on the PC, and some are not. In case a method is not implemented to utilize all CPU cores, Distributed Analysis will instantiate one analysis Task per CPU core, and these Tasks run in parallel.

Below are two examples of Distributed Analysis. The first one (on the left) shows a single task running on several CPU cores. This task can run on several cores thanks to a specific implementation in DynamicStudio. The second example (on the right) shows a job split into multiple tasks that are running in parallel.

l Analysis Agents
This tab holds information about the Analysis Agent connected. Each Analysis Agent has a row in

the table shown. There are four columns :
l Host

The host name of the agent's PC.


l Status
The connection status of the agent. If for whatever reason an agent disconnects
you can attempt to reconnect by clicking on 'Connect Agent'.
l Save remote
A checkmark that controls whether analysis results are saved remotely. If checked, the Analysis Agent will save the result on the PC where it is running. If not checked, the result will be saved on the main PC.

14.7 Distributed Database


The DynamicStudio Distributed Database has the ability to manage remotely stored data. A distributed database is a database where part of its contents is not saved on one single PC but is spread over a number of PCs.
There are two ways a database can end up holding remote data:

l a Distributed Acquisition agent saves acquired data locally


l a Distributed Analysis agent saves analysis results locally

If a database holds remote data it is visualized in the database tree. The example below shows a part of a
database tree where one ensemble has remote content. Ensembles with remote data are marked with a
globe.

The globe indicates that all or parts of the ensemble is stored remotely.

If for some reason it is not possible to get access to parts of the ensemble the globe will change color:

l Yellow indicates that one or more remote PCs are not reachable.
l Red indicates that none of the remote PCs are reachable.

The example below shows a database where none of the remote PCs that hold part of an ensemble are reachable.

To get an overview of the remote PCs involved in a distributed database, select the root icon of the database tree and examine the record properties in the section Distributed information. In the example below the remote PC 'tst_xp' is involved. This is indicated in the 'Remote Agents' property. If one or more PCs are not reachable they are listed under 'Unreachable Agents'.

14.7.1 Collecting remotely stored data
Remote contents can be moved to the main database. Hereby we can ensure that all data is stored in one
location. Collecting remote data is done from the context menu of the root, a branch or a single ensemble
in the database tree by selecting "Collect remote contents to main database".
For example, right clicking on a project of the database tree and selecting "Collect remote contents to
main database" will collect all remotely stored data, that is included in that specific project. "Collect
remote contents to main database (Recursive)" will also collect all child records of the remotely stored
data.
All remote data will be copied to the main database and then deleted at the remote PC. When all remote
data has been collected, the globe disappears from the database view.

14.7.2 Troubleshooting Distributed Database


There can be a number of reasons why it is not possible to get in contact with a remote PC. Here are some guidelines on how to solve some of these issues:

l Check that AgentHost is started on the remote PC (if AgentHost is running it has a notify icon in
the system tray). AgentHost can be started by clicking Start->All programs->Dantec Dynamics-
>Remote Agent for DynamicStudio.
l Network changes. Check that the PCs involved are located on the same subnet.
l Firewall and Firewall settings on the PCs involved. During installation a number of firewall exceptions are added to the Windows Firewall exception list. If these exceptions have been deleted or another Firewall tool is used, it might not be possible to reach remote PCs. Experienced users and administrators can add the exceptions manually. You can also choose to reinstall DynamicStudio. Finally you can choose to disable the firewall (not recommended).

14.8 Custom Properties


Every record in the database (projects, runs, ensembles and datasets) contains properties. The Record Properties can be displayed in the Record Properties view by pressing Alt+1. The properties are a collection of fixed properties defined and maintained by the system, and custom properties which can be added dynamically.
Custom Properties can be added manually by selecting Custom Properties... in the context menu of the record. Datasets automatically inherit the custom properties of the parent ensemble, but the values of the properties can be changed individually.

14.8.1 Example
Custom properties are used in a number of situations. During calibration of LIF images custom properties
can be used to store condition values like temperature or pressure. Another example is the Sort option
which stores the sort value as a custom property.

14.9 Timestamp
Timestamp or acquisition time is an indication of when data was acquired. DynamicStudio provides a very precise way of determining exactly when images and analog data are acquired, using hardware time stamping. The timestamp is assigned to every acquired dataset and is, if possible, maintained and transferred through the analyses.

14.10 Selection (Input to Analysis)
Selection are indicated in the database with a check icon .

A Selection is a way to mark one or more external ensembles as additional inputs to an Analysis. Some analysis methods require that not only the parent data is available, but also a reference to an external dataset.
A Selection in the database tree is made by selecting "Select Toggle" in the context menu, or by pressing
the Space bar on the ensemble.

Un-selection is done similarly to select , but you also have the possibility to click Unselect in the tool
bar.

14.10.1 Example
One example is applying a mask to a series of images. The mask is created using the Define Mask anal-
ysis method, and then later applied to the images using the Masking Image analysis method. In this case
the ensemble containing the mask must be selected as user selection for the masking routine to work.

14.11 Fixed Selection (Input to Analysis)


Fixed Selection are indicated in the database with a check icon .
Fixed Selection works like a normal Selection (see above). The difference is how the selection and un-selection are performed.

A Selection in the database tree is made by selecting "Select Fixed Toggle" in the context menu, or by
holding down the CTRL key and pressing the Space bar on the ensemble.

Normally, clicking Unselect will unselect all selected records, but not the Fixed Selected records. To unselect Fixed Selected records by clicking Unselect, you must hold down the Ctrl key.

The advantage of using Fixed Selection is, as the name suggests, that the normal way of clearing selections does not influence the Fixed Selected records. This means that if a database holds records that are used frequently you can choose to Fix Select these records, and they will stay selected until they are manually unselected.

15 Analysis methods
After images and related data have been acquired, an Analysis can be applied by selecting Analyze... from the menu. DynamicStudio includes a wide range of common analyses for image and vector processing, LIF, particle sizing, spray diagnostics etc.

You can access online analysis help and documentation from DynamicStudio.

1. Select a raw or processed data set or ensemble in the database tree


2. From the context menu select analysis
3. Press the help icon in the Recipe dialog

Note
The number of analysis methods available depend on the add-ons enabled and installed on your system.

15.1 2D Least squares matching (LSM)

2D Least Squares matching (LSM) is a method for determining 2D velocity fields in highly seeded flows in
water and air. The input data consists of double-frame images or of time-resolved single-frame images and
the output data are equally spaced vector fields. Interrogation areas from within the image are analyzed to
determine local affine transformations.

15.1.1 Background information


The fundamental theorem of Helmholtz states that every infinitesimal motion of a fluid element can be
decomposed in translation, rotation and deformation. In the last decades several investigations have been
performed to experimentally describe these fluid motions. In classical PIV based on correlation tech-
niques, 2D cross-correlation is most frequently applied to extract the zero order translational velocity com-
ponents neglecting the higher order terms of rotation and deformation. The assumption is that the flow field
is smooth and not significantly influenced by rotational or shear displacements, thus yielding the zero-
order translational displacement field with an additional measurement uncertainty due to neglecting the
higher-order terms. The measurement uncertainty can be reduced by window deformation techniques,
which require manipulation of the raw particle images. The higher-order motion terms are then indirectly
estimated on discrete grids by finite difference schemes. The assumption is that the higher order fluid
motion of an element is only affected by the translational velocity components of the neighboring ele-
ments.
In contrast to correlation based techniques, LSM shifts, rotates and stretches a fluid element. For this pur-
pose, the LSM algorithm iteratively compares gray value information of an interrogation area in the first
time step with the gray value information in the second time step. This is an iterative least squares pro-
cedure applying a proper transformation on the interrogation areas. In 2D this results in six transformation
parameters. The advantage of LSM is that whilst calculating the zero order translational velocities, the
first order terms of motion are simultaneously optimized increasing the accuracy of the velocity field. The
resulting displacement gradient tensor includes parameters like rotation, shear and strain of the inter-
rogation area resulting from the particle displacement within the area.
2D LSM performs a geometric and radiometric transformation between two successive states of the same
system. In the case of a gray value filled interrogation area its state is a gray value distribution of the pixel
elements. For this purpose, the transformation is optimized such that the gray value differences between
a template area and a search area reach a minimum. Compared with conventional 2D cross-correlation,
LSM considers the deformation of a fluid element for the calculation of the displacements. This results in a
more accurate calculation of the velocity field. The velocity gradient tensor, and as a result the rotation and deformation rate tensors, can be calculated without applying central difference schemes. In LSM all this is
done without manipulation of the raw particle images.
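
To make the principle concrete, the following is a minimal Python sketch of least squares matching for a single interrogation area. It is purely illustrative: the optimizer, the bilinear interpolation and the parameterization are assumptions, not the DynamicStudio implementation. Six affine parameters are iteratively optimized so the gray values of an interrogation area at t0 match the warped gray values at t0 + dt.

    # Minimal sketch of 2D Least Squares Matching for one interrogation area
    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import least_squares

    def lsm_affine(frame1, frame2, center, half_size=16):
        """Return affine parameters p = (dx, dy, a11, a12, a21, a22)."""
        cy, cx = center
        y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
        template = map_coordinates(frame1, [cy + y, cx + x], order=1)

        def residuals(p):
            dx, dy, a11, a12, a21, a22 = p
            # Affine mapping of the local IA coordinates into frame 2
            xs = cx + dx + (1 + a11) * x + a12 * y
            ys = cy + dy + a21 * x + (1 + a22) * y
            warped = map_coordinates(frame2, [ys, xs], order=1)
            return (warped - template).ravel()   # gray value differences

        p0 = np.zeros(6)                       # start from zero displacement/deformation
        sol = least_squares(residuals, p0)     # iterative least squares optimization
        return sol.x                           # translation plus the four gradient terms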

15.1.2 Usage
The use of 2D LSM is very similar to that of standard cross-correlation. The recipe dialog is shown below.
You can define a Threshold level such that only areas of the image which show a variation in intensity above the chosen threshold are used. The Interrogation area size can be chosen independently in the x- and y-direction, and any odd number may be chosen. Also the spacing (Shift) between the vectors (i.e. the overlapping of the interrogation areas) can be chosen independently in both directions, and any even number of pixels is allowed. The start position (Grid Start) can be chosen to be different from the lower left corner. The size of the vector field can be specified with the parameter Grid Width Height. These last two parameters allow computing the vector field on a smaller area than the whole image, if desired.
The Search factor defines the size of the second interrogation area, which the first interrogation area is
matched to. The iterative procedure is stopped when all parameters have converged or when the number
of Maximum iterations has been reached. In the latter case these vectors may be rejected by checking
the corresponding check box. A significance test can be applied to check if the calculated shear, rotation
and scale parameters are significant. Please note, that in case they are not significant they will be
replaced with zero-entries in the data set.
As an iterative method LSM needs starting values for the affine parameters that are to be determined. If no
foreknowledge exists about the velocity field, all parameters are initialized with Zero. To speed-up the proc-
essing and reach better convergence behavior the LSM process can be initialized with the results of a pre-
ceding cross-correlation step (Use cross-correlation as input). To avoid false initialization by outliers, a
Universal Outlier Detection validation can be enabled. Its parameters (Neighborhood size and Nor-
malized Residual) can be adjusted. An alternative method to provide initialization is the Pyramid
approach. It is usually faster to use the cross-correlation as an input, and it is recommended to use this
initialization to ensure the reliability of the calculated vector field.


15.1.4 References
[1] J.Kitzhofer, G. Ergin, V. Jaunet, "2D Least Squares Matching applied to PIV Challenge data", 16th Int
Symp on Applications of Laser Techniques to Fluid Mechanics Lisbon, Portugal, 09-12 July, 2012
[2] V. Jaunet, J. Kitzhofer, T. I. Nonn, B. B. Watz, P. Dupont, and J.-F. Debiève, "2D Least Square Match-
ing: an alternative to the cross-correlation technique for PIV applications", European Fluid Mechanics Con-
ference, Rome 2012.
[3] V. Jaunet, J. Kitzhofer, T. I. Nonn, B. B. Watz, P. Dupont, and J.-F. Debiève, "2D Least Squares
Matching, une alternative à la corrélation pour la vélocimétrie par images de particules", 13ième Congrès
Francophone de Techniques Laser, CFTL 2012 - ROUEN, 18 – 21 Septembre 2012
[4] Jerry Westerweel and Fulvio Scarano. Universal outlier detection for PIV data. Experiments in Fluids
Volume 39, Number 6, 1096-1100, DOI: 10.1007/s00348-005-0016-6

15.2 Adaptive Correlation


The adaptive correlation method calculates velocity vectors starting with an initial interrogation area (IA) larger than the final IA (the IA size is halved in each of the N refinement steps) and uses the intermediary results as information for the next, smaller IA, until the final IA size is reached.
The Adaptive correlation can be used with the High Accuracy Module and Window deformation.

Additionally, local validation can be added to the adaptive correlation so that, viewed over the whole calculation process, fewer 'bad' vectors are generated. To compensate for the loss of vector field resolution during the processing, overlap of IAs is often used, with a typical value of 25 %. (Post-processing the resulting vector map by re-sampling it with an oversampling factor greater than unity can be done as well to enhance the spatial resolution - see help on the method 'Resampling of Vector Map'.)

Example: (Left) Cross-correlation with (16x16) and 25% overlap, (Right) Adaptive correlation (16x16) and 25% overlap with 3 refinement steps. Note that fewer bad vectors are generated on the map to the right and that local refinement ("green vectors") is a realistic correction of the flow field.

15.2.1 Interrogation areas


When selecting N=3 and a final interrogation area of (16x16), the initial IA size is (128x128).

The parameter "Overlap - Horizontal/Vertical" defines a relative overlap among neighboring interrogation
areas, as illustrated in the figure below for (H-50 %, V-50 %). It can be set independently for the horizontal
and vertical, offering total freedom to increase vector map resolution in any direction. In the example, 5
vectors maps are created instead of 1 when H = 0 % and V = 0 %.
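
As a small illustration (assumed, simplified relations; not DynamicStudio code) of how the number of refinement steps and the overlap translate into IA sizes and vector counts:

    # Illustrative sketch of IA sizes per refinement step and vector counts with overlap
    def ia_sizes(final_size, n_steps):
        """Initial IA is final_size * 2**n_steps, halved at every refinement step."""
        return [final_size * 2 ** k for k in range(n_steps, -1, -1)]

    def vector_count(image_size, ia_size, overlap):
        """Approximate number of vectors along one axis for a relative overlap (0..1)."""
        step = int(ia_size * (1.0 - overlap))          # distance between IA centres
        return (image_size - ia_size) // step + 1

    print(ia_sizes(16, 3))                 # [128, 64, 32, 16]
    print(vector_count(1024, 16, 0.25))    # vector count along a 1024 pixel axis, 25 % overlap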

Setting the various parameters of the adaptive correlation calculations is briefly explained in the following.
More technical information can be found in the PIV User's Manual.
More information on...

l Window and Filter functions


l Validation methods available with adaptive correlation
l About Interrogation area offset

15.2.2 Window and Filter


With the development of advanced algorithms, filters are rarely used nowadays; preference is instead given to setting the interrogation areas and validation methods adequately. Basically, the "Window/Filter" options apply a-priori functions to the signal processing.

See more in the help file regarding Windows and Filters.

15.2.3 Validation methods


Several validation parameters are available for the adaptive correlation method, and they can be used in combination to fine-tune the processing and, when needed, to remove spurious vectors.
In the "Peak validation" section, the user can set values for the minimum and the maximum peak widths
as well as the minimum peak height ratio (between 1st and 2nd peak) and thereby put more stringent con-
ditions on peak identification for the subsequent determination of vectors.
Peak validation can help identify invalid vectors, but is unable to produce an estimate of what the correct
vector might be. Consequently the invalidated vector will simply be substituted with zero, which in many
cases can be quite far from the truth. You are therefore strongly advised not to use peak validation alone,
but always combine it with a local neighborhood validation, which based on neighboring vectors is capable
of making a realistic estimate of what the spurious vector should have been.

With "Local neighborhood validation", individual vectors are compared to the local vectors in the neigh-
borhood vector area, which size (MxM) is set by the user. If a spurious vector is detected, it is removed
and replaced by a vector, which is calculated by local interpolation of the vectors present in the (MxM)
area. Interpolation is performed using median or moving average methodology (with n iterations).
Spurious vectors are identified via the value given to the "Acceptance factor". This factor effectively
allows a given degree of freedom on velocity vector gradient inside the (MxM) area and if the calculated
gradient is larger than set, the central vector is removed. The larger this factor is, the less the velocity vec-
tor map is spatially corrected. On the other hand, with low factor values, the vector map is smoothed at a
level that removed all

Example of validation settings for PIV analysis using adaptive correlation methodology.
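
The principle can be sketched in a few lines of Python (illustrative only; the acceptance test below is a simplification of the 'Acceptance factor' criterion, not the exact DynamicStudio implementation):

    # Minimal sketch of local neighborhood validation with median substitution
    import numpy as np

    def median_validate(u, v, m=3, acceptance=0.1):
        """Replace vectors deviating too much from the local (m x m) median."""
        u_out, v_out = u.copy(), v.copy()
        r = m // 2
        for i in range(r, u.shape[0] - r):
            for j in range(r, u.shape[1] - r):
                nb_u = u[i - r:i + r + 1, j - r:j + r + 1]
                nb_v = v[i - r:i + r + 1, j - r:j + r + 1]
                mu, mv = np.median(nb_u), np.median(nb_v)     # local reference vector
                scale = max(np.hypot(mu, mv), 1e-9)
                if np.hypot(u[i, j] - mu, v[i, j] - mv) > acceptance * scale:
                    u_out[i, j], v_out[i, j] = mu, mv         # substitute by the median
        return u_out, v_out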

15.2.4 Interrogation area offset


Velocity vectors are estimated from the mean particle displacement inside interrogation areas (IA). Mathematically, the main formula used to calculate velocity vectors is

\vec{u} = \frac{\mathrm{d}\vec{D}}{\mathrm{d}t}

where \vec{D} is the displacement and \vec{u} is the velocity. This formula is transformed into an algebraic equation either using a Central Difference Scheme or a Forward Difference Scheme.

The Central Difference Scheme is equivalent to a three-point symmetric algorithm for the evaluation of \mathrm{d}\vec{D}/\mathrm{d}t, with a reference 'point' created at the time t_0 + \Delta t/2. The Forward Difference Scheme, on the other hand, considers the temporal reference t_0.

Note: The Central Difference Scheme is mathematically the most accurate methodology and should therefore be preferred for PIV measurements. When processing more advanced measurements such as PIV/LIF, the Forward Difference Scheme should be used, because the LIF image will then get the same temporal reference (t0) as the velocity vector map, which will not be the case with the Central Difference Scheme.
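
A trivial sketch of the difference in temporal reference (illustrative Python, invented function names):

    # Same displacement D over dt gives the same velocity, but the temporal
    # reference differs between the two difference schemes.
    def forward_difference(D, dt, t0):
        return D / dt, t0             # velocity referenced at t0

    def central_difference(D, dt, t0):
        return D / dt, t0 + dt / 2    # velocity referenced midway between the pulses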

15.2.5 High Accuracy and Deforming Windows


The idea behind the High Accuracy and deforming windows PIV algorithm is to:

l Use a signal analysis approach without image interpolation


l Optimize the signal strength by window off-set
l Optimize signal strength by capturing particle drop-out due to velocity gradients
l Achieve bias free measurements through improved sub-pixel interpolation
l Achieve high sub-pixel accuracy independent of correlation peak shape
l Minimize displacement estimate errors by use of adaptive deforming windows

Establishment of high accuracy sub-pixel interpolation


The benefit of Gaussian and parabolic fitting of the correlation peaks is the fast computing speed. However, both methods have inherent limits, simply because of the pre-assumption of the correlation peak shape. The result is a bias error, which is often described as peak locking. Furthermore, when particle images convoluted with the velocity distribution (the latter being the most important) produce non-ideal correlation peaks, the result is basically incorrect.
The high accuracy sub-pixel algorithm used is independent of particle image shape and correlation peak
shape. The method works on individual correlation peaks. The high accuracy is achieved by using the full
information in the correlation function and not just the nine highest values in the correlation plane.
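
For comparison, the conventional 3-point Gaussian estimator that the high accuracy method improves upon can be sketched as follows (illustrative Python, assuming a strictly positive correlation peak; not the high accuracy algorithm itself):

    # Conventional 3-point Gaussian sub-pixel fit along one axis
    import numpy as np

    def gaussian_subpixel(c_minus, c_peak, c_plus):
        """Sub-pixel offset of the correlation peak (in pixels, -0.5..0.5)."""
        ln_m, ln_0, ln_p = np.log(c_minus), np.log(c_peak), np.log(c_plus)
        return (ln_m - ln_p) / (2.0 * (ln_m - 2.0 * ln_0 + ln_p))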

Adaptive and deforming windows


As in any signal analysis, windowing and zero-padding of discrete data is required in order to avoid aliasing etc. In standard Adaptive correlation the situation is as shown below:

A typical choice for PIV is a round Hanning window. However, the windowing does not take into account that there are velocity gradients in the flow. Hence, ideally the cross-correlation should be between windows which follow the flow gradients.
This can be adapted in an iterative loop, where interrogation area size and shape are chosen to suit the velocity gradients. This procedure gradually builds up the signal strength and results in a successful reduction of the interrogation spot to an absolute minimum.

When the adaptive deforming window is applied in non-integer steps, the iterative capture of the two interrogation spots further ensures that particle images on the border of the interrogation regions are equally weighted by the window function. This is particularly important when reducing the size of the interrogation spots to a minimum, because unequally weighted border particles will slightly bias the measured displacement.

Flow diagram
In the standard Adaptive correlation, the two interrogation windows are offset in discrete steps based on the previous displacement estimates with the same or larger interrogation regions.
Using the adaptive deforming windows, the displacement estimates must have sub-pixel accuracy. The Gaussian sub-pixel estimate is used for these iterations in order to increase computing speed, and the high accuracy method is applied at the end in order to obtain a bias-free estimate.

Example, picture from the PIV Challenge 2003


Purple vectors show results from a previous pass in the iterative method. The dotted green squares show
nominal positions of the interrogation areas. Blue and orange squares show offsets on frames 1 and 2
respectively. Colored ellipses show the deformed interrogation spots, with each line representing 10, 50 and 90 % weighting values. Please note that windows are not necessarily centered within the square areas; this allows for non-integer interrogation spot offsets.

Interrogation spots are overlapped showing only the 10 % window limits. Particles from frame 1 are
shades of blue, particles from frame 2 shades of orange. Overlapping particle images become shades of
gray. The resulting correlation plane: on account of the interrogation area offset and non-centered windows, a peak near the center is expected. Peaks outside the dotted gray line are considered outliers.

Achieving high accuracy in practice
With the combined algorithm of adaptive deforming windows and high accuracy sub-pixel interpolation,
there is improved signal strength, hence opening the possibility to decrease the interrogation spots.
Decreasing the interrogation spot and having high accuracy allows for small particle displacements, which
combined will result in increased spatial resolution.
It is however evident that these new algorithms push the results so far that the fundamental bandwidth limitations of the PIV signal are challenged, and other errors in the signal formation need very careful attention and craftsmanship.

This need to investigate the accuracy can be illustrated with the following experiment: a micrometer stage rotates the target plate by means of a leverage arm. The arm is 1867 mm long and the vertical displacement is 2.5 mm.
Mid.: Enlargement of the particle images, simulated by sandpaper. The F-numbers were 2.8, 8 and 11. Left: Resulting displacement map.

Resulting PDF of the U displacement for #F2.8, #F8 and #F11 for the above setup. The displacement is mechanically imposed and known (above figure). Top: 9-point Gaussian sub-pixel fit (cyan). Bottom: High accuracy method (green). The single PDF on the right hand side (blue) is made from the curve-fitted data and expresses the expected PDF.

Read more: Westergaard, Madsen, Marassi and Tomasini, Accuracy of PIV signals in theory and prac-
tice, 5th International Symposium on Particle Image Velocimetry, Busan, Korea, September 22-24, 2003,
paper 3301.

15.3 Adaptive PIV


The Adaptive PIV method is an automatic and adaptive method for calculating velocity vectors based on
particle images. The method will iteratively adjust the size and shape of the individual interrogation areas
(IA) in order to adapt to local seeding densities and flow gradients.
The method also includes options to apply window functions, frequency filtering as well as validation in
the form of Universal Outlier Detection.
The picture below shows the recipe dialog for the Adaptive PIV method.

The dialog has two main parts: the image view and the recipe settings seen in the upper section of the dialog. In the following, the available settings will be described.

15.3.1 Interrogation areas


On the ‘Interrogation areas’ tab it is possible to adjust the layout of the IA sampling grid.
The overall area in which calculation will be performed is determined by the checkbox ‘Use full image’. If
the checkbox is checked the entire image will be used, otherwise the area is determined by adjusting a rec-
tangle inside the image display.
The number of interrogation areas (IA) and the spacing between their center positions are determined by the parameter ‘Grid Step Size’. The grid step is specified as the number of pixels from one IA to its neighbor. If the grid step is small, the IAs will be packed closer, thereby resulting in more IAs inside the calculation area. The grid step size is specified for both the horizontal and vertical direction.
The Adaptive PIV method can automatically determine an appropriate IA size to use for each individual IA, but the specified minimum and/or maximum IA sizes limit the range.
The first iteration will always use the largest IA size allowed, while subsequent iterations are allowed to reduce IA sizes where the particle density is high enough to justify it.
Minimum IA size is also used to determine the location of vectors; both horizontally and vertically there will be as many vectors as possible within the area covered (full image or ROI). Grid Step Size determines the distance between neighboring vectors, while Minimum IA Size determines how close to the borders a vector may be located. When centered around the vector position, the minimum sized IA is guaranteed to be completely inside the image area processed, while the maximum IA size centered around the same location may in fact extend beyond the image (or ROI) borders.
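
As a purely hypothetical illustration of how these two parameters could interact (not the actual DynamicStudio layout code):

    # Illustrative sketch: vector positions spaced by the grid step, kept at
    # least half the minimum IA size away from the borders.
    def vector_positions(image_size, grid_step, min_ia_size):
        margin = min_ia_size // 2
        first, last = margin, image_size - 1 - margin
        return list(range(first, last + 1, grid_step))

    print(vector_positions(256, 16, 32))   # centre positions 16, 32, ..., 224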

15.3.2 Windowing and Filtering


The tab ‘Window/Filter’ is used to apply a spatial windowing and/or frequency filtering function as part of
the FFT based cross-correlation.

The conventional Window and Filter Functions are the same as for other correlation methods (See "Correlation option Window/Filter" on page 617), but the checkmark 'Use Wall Windowing' is unique to Adaptive PIV:
Wall Windowing can be used when a mask is available to identify the location of walls. The Mask must be preselected before entering the Adaptive PIV Recipe (See "Selection (Input to Analysis)" on page 233). You can use either a regular 'Mask' (See "Define Mask" on page 284) or an ordinary image (typically derived from acquired images through Image Processing). If you wish to use an ordinary image you need to enable its Custom Property 'Use as Mask' in order for it to be recognized/accepted as a mask (See "Custom Properties" on page 232). Zero-valued pixels in the Mask image will be treated as 'Wall', while nonzero pixels will be interpreted as 'Fluid'.
The purpose of Wall windowing is to mitigate wall bias; Correlation measures the average dis-
placement/velocity of particles within the interrogation area (IA). There are (normally) no particles inside
walls, so when an IA extends into a wall resulting displacements/velocities may be biased by particles far
from the wall, that generally move faster than particles close to the wall. Wall windowing attempts to mit-
igate this effect by masking also the particles far from the wall, so remaining particles are symmetrically
distributed around the center of the IA. Figures below show an Interrogation Area (frame 1 only) and result-
ing Correlation map with and without wall windowing applied:

Top left shows an Interrogation Area where the lower left extends into a wall (shown in blue). Top right
shows the result from correlating two such IA's.

Bottom left shows the same IA with Wall Windowing applied; the zero values in the lower left have been rotated 180 degrees around the IA center to mask out the particles in the upper right as well. Remaining pixels are now symmetrically distributed around the IA center, and bottom right shows the resulting correlation map.
With Wall Windowing applied there is a risk that no particles remain from which to compute a correlation, in which case there will be no velocity estimate in this location, but if a vector is found it will be unbiased.
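
A minimal Python sketch of this symmetrization (illustrative only; 'ia' and 'mask' are equally sized crops around the IA center, and mask == 0 means wall):

    # Keep a pixel only if its 180-degree rotated counterpart is fluid too, so
    # the remaining particles are symmetric around the IA centre.
    import numpy as np

    def wall_window(ia, mask):
        fluid = (mask != 0)
        symmetric = fluid & fluid[::-1, ::-1]   # 180-degree rotation of the fluid mask
        return ia * symmetric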

15.3.3 Validation

The validation is used to prevent outliers from disturbing the iterations and thus the velocity meas-
urements. The validation is done by first applying peak validation on the image cross-correlation and sec-
ondly by comparing each vector to its neighbors using the Universal outlier detection algorithm.
Three peak validation schemes are proposed in order to invalidate vectors based on the image correlation
peaks:

o Peak Height
If Peak Height validation is enabled, only correlation peaks above the specified value will be retained as valid.

o Peak Height Ratio


If Peak Height Ratio validation is enabled, the ratio between the two highest correlation peaks is calculated. This ratio must be higher than the specified value in order to validate the calculated displacement. A typical value for the Peak Height ratio is 1.2.

o S/N-Ratio
If S/N-Ratio validation is enabled, the noise level in the correlation plane is first evaluated as the Root Mean Square of the negative correlation values. If the ratio between the correlation peak and the noise level is above the specified value, the calculated displacement is considered valid.

We recommend using the Peak Height or S/N-Ratio validation criterion. Indeed, if the Interrogation Area only contains noise, the ratio between the two highest peaks may still be quite high.
If any peak validation fails, the corresponding vector will be rejected. Later, when the Universal Outlier Detection (See "Universal Outlier Detection" on page 568) is performed and substitution is enabled, the rejected vector may be replaced with the median of valid neighbor vectors.
After the first and intermediate iterations, validation and substitution are mandatory, but after the last iteration the user may choose not to validate at all, to validate but not substitute rejected vectors, or to both validate and substitute.
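
The criteria above can be summarized in a small Python sketch (illustrative only; the second-peak search and parameter names are simplifications, not the DynamicStudio implementation):

    # Peak Height, Peak Height Ratio and S/N-Ratio checks on a correlation plane
    import numpy as np

    def validate_peak(corr, min_height=None, min_ratio=None, min_snr=None):
        corr = np.asarray(corr, dtype=float)
        peak1 = corr.max()
        # Crude second peak: highest value outside a 3x3 region around the first peak
        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
        masked = corr.copy()
        masked[max(iy - 1, 0):iy + 2, max(ix - 1, 0):ix + 2] = -np.inf
        peak2 = masked.max()
        # Noise level: Root Mean Square of the negative correlation values
        neg = corr[corr < 0]
        noise = np.sqrt(np.mean(neg ** 2)) if neg.size else 0.0

        valid = True
        if min_height is not None:
            valid &= peak1 > min_height
        if min_ratio is not None:
            valid &= peak2 > 0 and peak1 / peak2 > min_ratio
        if min_snr is not None:
            valid &= noise > 0 and peak1 / noise > min_snr
        return bool(valid)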

15.3.4 Adaptivity
The ‘Adaptivity’ tab contains settings that will affect the adaptive adjustment that is iteratively applied to
each IA.

Adapt IA Size to Particle Density
It is possible to enable/disable adaptivity of the size of the interrogation area based on Particle Density.
If adaptivity to particle density is switched off the first iteration will use the maximum IA size allowed (as
normal), while in each of the following iterations the IA size is divided by two until the specified minimum
IA size is reached. If adaptivity to particle density is switched on the initial correlation will still use the max-
imum IA size allowed, while in each of the following iterations the IA size is determined from an estimated
particle density.
Two parameters adjust how the particle density adaptivity works:

Particle detection limit:


Determines how a particle is detected. A gray scale peak must rise this many times above the noise floor
to be counted as a particle.

Desired # of particles/IA:
Will affect the size of the interrogation areas by specifying how many particles an IA should nominally con-
tain. Regardless of particle density IA Size will always be in the Minimum - Maximum range specified on
the 'Interrogation areas' Tab.

Adapt IA Shape to Velocity Gradients


It is possible to enable/disable adaptivity of the interrogation area shape to velocity gradients.
To ensure that the shape of the interrogation area is not changed excessively, the user can set two different limits: first, the absolute magnitude of each of the four gradients can be limited; second, the combined effect of all four gradients can be limited as well.

Iteration control
Convergence limit (pixel):
A stop criterion for the adaptive iteration. When the translational part of the IA shape correction is less than
the specified convergence limit, the iteration is stopped for the given IA. It may continue for other inter-
rogation areas.

Max # of iterations:
Specifies the maximum number of iterations to perform. The analysis will stop after the specified number of iterations, whether or not the analysis of the IA has converged.

15.3.5 Diagnostic
The top right part of the recipe contains a 'Diagnostic' tool, allowing the user to single-step through the iterative procedure, inspecting the interrogation areas, correlations and vectors after each iteration in the adaptive process.

By pressing the ‘Step’ button, a single step in the iterative process will be calculated, and the result will be
shown.
Below we zoom in on a 5x5 neighborhood of interrogation areas and corresponding vectors:

The left part of the figure above shows results after the first iteration and the right part shows the same area after the second iteration.
The blue rectangles illustrate the interrogation areas, possibly scaled down to prevent them from over-
lapping in the display. Comparing the left and right image, the IA size was clearly reduced from the first to
the second iteration. When an iteration has converged the blue rectangle will turn green. The actual size,
shape and offset of the interrogation areas are shown when the mouse hovers over a specific vector. In the left image above the red rectangle shows the actual IA size, and in the right image the IA translation and distortion can be seen; the IA on frame 1 is shown in red and the IA on frame 2 in yellow.
Right clicking an IA will open a context menu, from where the correlation image or the transformed IA can
be viewed by selecting the menu items ‘Show correlation image’ or ‘Show interrogation areas’ respec-
tively.

By default the particle image will be shown behind the interrogation areas and vectors, but the Inter-
rogation Areas can be hidden by removing the checkmark in 'Show Int Areas', and the particle image can
be replaced by the Mask (if selected), by a map of estimated particle density or removed completely by
changing 'Image to show'.

The image above shows the estimated particle density as an 8-bit image, where each pixel value can be
interpreted as the average number of particles within a 64x64 interrogation area centered around the cur-
rent location.
To detect particles a background estimate is first made and subtracted. From this the local noise level is
estimated and the image is divided by this noise to get an SNR-image. The SNR-image is binarized with a
user defined threshold (The parameter 'Particle detection limit' in the 'Adaptivity' tab). Peaks in the result-
ing image are interpreted as particles and they are finally counted within local neighborhoods and spatial
smoothing applied to get a density map such as the one above.

The parameter 'Particle detection limit' in the 'Adaptivity' tab is the only recipe parameter that affects the
density map.

Since a grayscale peak is by definition brighter than all neighbors in a 3x3 neighborhood, at most 25% of all pixels can be peaks (/particles). The max value of 255 could thus theoretically be found within a 32x32 neighborhood, but this is very unlikely in practice. For low particle densities this resolution is rather coarse, so the count is scaled up by a factor of 4 and the particle count is estimated in 64x64 neighborhoods, even if this may theoretically lead to saturation (particle counts above 255 are truncated).
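
As an illustration of the density estimate described above, a rough Python sketch could look as follows (the filter sizes and function layout are assumptions for illustration, not the actual implementation):

    # Background removal, SNR thresholding, peak detection and local counting
    import numpy as np
    from scipy import ndimage

    def particle_density_map(img, detection_limit=5.0, box=64):
        img = img.astype(float)
        background = ndimage.uniform_filter(img, size=32)
        residual = img - background
        noise = ndimage.uniform_filter(np.abs(residual), size=32) + 1e-9
        snr = residual / noise
        # local maxima above the detection limit are counted as particles
        peaks = (snr == ndimage.maximum_filter(snr, size=3)) & (snr > detection_limit)
        # average particle count inside a box x box neighbourhood around each pixel
        return ndimage.uniform_filter(peaks.astype(float), size=box) * box * box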

A change of any of the recipe settings will make the iteration start over when 'Step' is next pressed.

15.3.6 Reference Vector Map
If/when approximate or expected particle displacements are known, Interrogation Areas on frame 1 & 2
are offset relative to one another. From the 2nd iteration and onward these offsets are naturally based on
results from the previous iteration, but in the first iteration particle displacements are typically not known
and no offset can be applied. It is however possible to include a displacement predictor in the form of a ref-
erence vector map, from which offsets in the first iteration can be determined. Such a reference vector
map could e.g. be the result of an ensemble correlation on the same image ensemble, providing a (tem-
poral) average flow field. The reference vector map must be preselected before entering the Adaptive
PIV recipe (See "Selection (Input to Analysis)" on page 233). The reference vector map must have the
same number of vectors in the same locations as the ones generated by Adaptive PIV itself, otherwise it
will be ignored.

15.4 Calibrate Analog inputs


This method is used to calibrate the analog input stamps recorded with raw images (i.e. images labeled with the corresponding icons), thereby facilitating automated LIF processing. The analog stamp can be the signal received from the Energy Pulse Monitor (e.g. for shot-to-shot laser energy fluctuation compensation), from other transducers such as pressure or temperature sensors, or a CTA signal with asynchronous recordings.

15.4.1 How to set calibration values?


To set calibration values for a single raw image, right-click the image of interest and select the 'Set properties' function. Complete the dialog window with the text A1 = x.xx; A2 = x.xx; etc. (with the symbol ; between each input), where x.xx is a floating point value (e.g. average energy from the laser, temperature 1, temperature 2 and pressure) corresponding to the analog channels A1, A2, etc. Save the new settings.
To set multiple input images with the same (A1, ... A4) values at once, multi-select the images, right-click the first image and select the menu 'Set the Log Text (Properties) for selected Records...'. Complete the dialog window with the text A1 = x.xx; etc. as shown in the example below.

Calibrated analog input values are set in the Log Property of raw image.

15.4.2 Calibration
Calibration is done according to a user-defined polynomial function, whose order can vary between 1 and 5, thereby offering extended flexibility for fitting complex calibration curves (for instance, linear and log functions would be approximated using a 1st- and a 5th-order polynomial function, respectively).
To calibrate analog inputs, select the raw images of interest (whose Log Properties are properly set). Call the 'Analog calibration' method and complete the dialog box according to requirements, i.e.

l Analog channel to consider...


l and corresponding variable A1.. A4 to consider

l Polynomial order

Press the 'Apply' / 'Display' buttons to preview the results, modify the polynomial order if necessary and then press the 'OK' button to accept the final results. A new file, labeled with a dedicated icon, is then created in the database. This file contains all the calibration information necessary to re-sample analog inputs before automated LIF processing (see the related help file).
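
The underlying idea can be illustrated with a small Python sketch based on a polynomial fit (the numbers and names below are purely illustrative, not DynamicStudio code or data):

    # Fit a polynomial mapping recorded analog stamps to the calibrated values
    # typed into the Log Properties (here for channel A1).
    import numpy as np

    analog_volts = np.array([0.52, 1.03, 1.55, 2.08, 2.61])   # recorded A1 stamps
    calibrated   = np.array([20.1, 40.0, 60.2, 80.1, 99.8])   # A1 values from Log Properties

    order = 1                                    # polynomial order, 1..5 in the recipe
    coeffs = np.polyfit(analog_volts, calibrated, order)
    print(np.polyval(coeffs, 1.8))               # calibrated value for a new analog reading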

Analog inputs are calibrated according to user-defined polynomial functions.

Analog calibration data are labeled with a dedicated icon for quick identification in the database.

15.5 When to use Average Correlation?


When there are only few particles in the flow, one can either move to PTV (Particle Tracking) or compute the average PIV flow field through Average correlation. In Average correlation, the correlation functions of the interrogation areas are averaged at each location over all the images.

This increases the signal-to-noise ratio significantly and generates a clear correlation peak. Just think of 20 PIV pictures. Let's assume that in the first 15 there are no particles in a certain interrogation region. The average correlation there is nearly null, because these interrogation regions only contain random noise. In the last 5 there are a few particles in the region; these form a correlation, and when added together they form a nice correlation peak.
See more about Average Correlation and schematic of how to obtain average velocities.
The evaluation of the peak is done as usual, and it represents the average velocity within the interrogation region over the number of images recorded. This entails that you must record a sufficient number of pictures, so that the total number of particles in each region is reasonable. When you have a reasonable number of images, the result is a nice vector map without spurious vectors.

Typical applications are in micro-fluidics, where it can often be difficult to apply a sufficient amount of particles, but the method can be useful in any application where it is difficult to obtain a result for one reason or another.

15.5.1 Using Average Correlation to look at average PIV signal conditions


The average correlation can also be useful if one wants to plot the average proportions of the correlation peaks, such as signal height or peak widths.

From left: recorded instantaneous image of bubbles and seeding, scalar map of the peak widths from average correlation, and histogram of the peak widths.
The distribution of the bubbles and seeding is clearly seen in both the map of the peak widths and the histogram.
This feature was, in combination with Peak validation, used to separate the two flow phases. (For further info on this feature and the example, read the help file on Peak Validation.)

15.5.2 Using an offset vector map to improve the results


By default, the Average Correlation cannot operate with any window shifting, which may pose a problem when the velocity is high relative to the interrogation region, say for example 20 pixels for a 32 pixel interrogation region.

In order to improve the result, one should make one run with a smaller time between pulses, corresponding to say 5 pixels maximum displacement. Maybe fewer image recordings can be used. Validate the result and filter strongly. Then run the image recording again with the 16 pixel displacement.
For processing, use the 5 pixel recording as the "Vector map for offset of IA". This shifts the interrogation spots corresponding to the 16 pixels and ensures a correct result. This procedure also minimizes the so-called velocity bias, which is automatically dealt with in Adaptive Correlation.

The " Vector map for offset of IA" can be selected (y double clicking) before calling the Average Cor-
relation.

In order to ensure the most correct result, the Average Correlation function should actually always be run twice.

15.5.3 Using preconditioning to minimize influence from out of focus particles
Particularly in micro-fluidics there are particles out of focus which can contribute dominantly to the correlation and hence result in the wrong velocity being measured, assuming the out-of-focus particles have another velocity. This effect is particularly strong when using back illumination in micro-fluidics. Other effects in ordinary PIV can be shadows from objects in the background.

Objects or particles out of focus often have a significantly different level of gray values, so a simple way to reject these is to apply a gray value rejection.
Under the tab "Precondition" one can select which gray values to conserve, e.g. if the particle images are predominantly in the range from 120 to 233. Values outside this range are set to 0, or to another user-defined value.
Use the display LUT to find the values you would like to use (check the color-coding).

Please be aware that there are many other methods which can be very useful for removing unwanted background, e.g. subtracting an average image before correlation or using some of the many filter functions in the IPL library.

15.5.4 Schematic of Average Correlation


When there are only few particles in the flow, adding the correlation functions together builds up the cor-
relation signal peak step by step.

The process can schematically be shown:

Interrogation areas from image frame A (pulse 1) and frame A (pulse 2) are correlated. The correlation functions are added up and the peaks are finally evaluated.
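
A compact Python sketch of the principle (illustrative only; windowing, validation and sub-pixel interpolation are omitted):

    # Average correlation over all image pairs for one interrogation area
    import numpy as np

    def average_correlation(ias_frame1, ias_frame2):
        """ias_frame1/2: lists of equally sized IA crops, one pair per recording."""
        acc = None
        for a, b in zip(ias_frame1, ias_frame2):
            a = a - a.mean()
            b = b - b.mean()
            corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
            acc = corr if acc is None else acc + corr       # build up the signal peak
        acc = np.fft.fftshift(acc)                          # zero displacement at the centre
        dy, dx = np.unravel_index(np.argmax(acc), acc.shape)
        centre = np.array(acc.shape) // 2
        return dx - centre[1], dy - centre[0]               # average displacement in pixels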

Two other ways of obtaining the average velocity
It is also possible to take the average of all the images. DynamicStudio does this simply by 'Select Similar' of the images and then choosing Mean Pixel Value from the Statistics group. This averaged image can be cross-correlated just like individual images. The process can schematically be shown:

It works well if there is little or no background noise, but conceptually it is not a very good way to obtain the results.

The classical way is shown for the complete overview. DynamicStudio does this simply by 'Select Similar' of the images and then choosing Cross-Correlate or Adaptive Correlation. Then 'Select Similar' of the vector maps and choose Vector Statistics from the Statistics group.
The process can schematically be shown:


15.6 Average Filter


This method is used to filter vector maps by arithmetic averaging over vector neighbors. The size of the (MxN) averaging area, inside which individual vectors are smoothed out by the average vector, is defined by the user with no maximum value. (Note that M and N can be set independently.)

Tip: Typical averaging area sizes are isotropic (3x3) or (5x5), but non-isotropic averaging can be set as well, and works very well for situations like channel flows.

Example of results: (Left) Instantaneous velocity vector map calculated by Adaptive correlation methodology (16x16, 25% overlap) and (Right) averaged vector results.

15.7 Bi-Orthogonal Decomposition (BOD Analysis)


Bi-Orthogonal Decomposition was proposed in 1991 by Aubry et al [1] and is related to Proper Orthogonal
Decomposition (POD), but goes a step further, generating not only spatial modes ('Topos'), but also tem-
poral modes ('Chronos').
Given an ensemble of vector maps BOD subtracts the temporal mean and performs a full spatio-temporal
decomposition of the fluctuating part, such that instantaneous vectors u (x,t) can be reconstructed from a
series of K modes:

u(x, t) = \sum_{k=1}^{K} \lambda_k \, \psi_k(x) \, \varphi_k(t)
λk = Global amplitude of the k'th mode
ψk = k'th Spatial mode, 'Topos'

ϕk = k'th Temporal mode, 'Chronos'

By design both Topos and Chronos are orthonormal:

\langle \psi_i, \psi_j \rangle = \langle \varphi_i, \varphi_j \rangle = \delta_{ij}

The square of the modal amplitudes corresponds to modal energies from POD.
The Topos will normally correspond to the spatial modes from POD.
The Chronos describe how each of the modes contribute to the reconstruction as a function of time.

As in POD the modes are sorted by amplitude (/energy), such that the first mode is the one contributing
the largest fraction of the total energy and subsequent modes contribute smaller and smaller fractions
thereof. The first modes typically describe large scale coherent structures, while the last modes typically
describe only noise.
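
To make the decomposition concrete, here is a minimal Python sketch (illustrative only, not the DynamicStudio implementation) computing amplitudes, Topos and Chronos from a snapshot matrix using the singular value decomposition, which by construction yields orthonormal spatial and temporal modes:

    # BOD of a snapshot matrix via SVD of the fluctuations
    import numpy as np

    def bod(snapshots):
        """snapshots: array of shape (n_space, n_time), one flattened dataset per column."""
        mean = snapshots.mean(axis=1, keepdims=True)      # temporal mean ('Mode 0')
        fluct = snapshots - mean
        psi, amp, phi_t = np.linalg.svd(fluct, full_matrices=False)
        topos = psi                                       # orthonormal spatial modes
        chronos = phi_t.T                                 # orthonormal temporal modes
        return mean, amp, topos, chronos

    # Reconstruction check: fluct ~= sum_k amp[k] * outer(topos[:, k], chronos[:, k])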

15.7.1 Supported input


Most of the following examples assume that u represents velocity vectors from a vector map, but it could just as well represent scalar values from a scalar map or pixel values from an image.
The Topos will inherit the type of the parent data: if the input data are vector maps, the Topos will be vector maps also; if scalar maps are provided as input, the Topos will become scalar maps; and if the input is images, the Topos will become images as well (double-frame images are not supported).

15.7.2 Handling the mean


Prior to the actual BOD Analysis the system computes the temporal mean and subtracts it from the input
data, such that the analysis works on the fluctuating parts only. When the BOD Analysis is complete the
mean is appended to the results as 'Mode 0'. This DC-Mode is split in three parts, Amplitude, Topos &
Chronos, and also scaled the same way as all the BOD-Modes, but Mode 0 is not orthogonal to the other
modes and thus not really a BOD-Mode like the others. Including the mean will however facilitate later
reconstructions and its presence does not generate any problems as long as you are aware that 'Mode 0'
is special and differs in nature from all the other modes.

15.7.3 Mode Count


BOD Analysis generates modes that are stored as separate datasets in an ensemble. Normally the
number of BOD modes will correspond to the number of input datasets, but in rare cases it may be lower.

15.7.4 Step by Step example


Having selected an input ensemble of (single-frame) images, scalar or vector maps, the BOD Analysis is
found in the analysis category 'Vector & Derivatives':

The recipe allows you to choose how resulting modes should be scaled:

Mode Normalization / Scaling
As explained above, both Topos and Chronos are by default orthonormal, meaning that not only are the
individual modes normal to one another, they also have unit length.
For the Topos this means that by default the Root (of the) Sum (of the) Squares of all velocity components
from all vectors will equal one. For the Chronos we find similarly that the Root (of the) Sum (of the)
Squares of all contributions should also equal one. From a mathematical point of view this is a perfectly
normal approach and corresponds to the default Mode Normalization 'Root Sum Square' in the recipe
above.
This approach however means also that the magnitude of individual vectors in each Topos will depend on
the total number of vectors and similarly the Chronos-values will depend on the number of timesteps (/sna-
pshots) included in the analysis. This makes it difficult to compare results from different analyses, so you
may set Mode Normalization to 'Root Mean Square' instead. This causes both Topos and Chronos to be
scaled up so the Root (of the) Mean (of the) Squares of all elements equals one. In effect this means that
the average Topos vector has a length of one no matter how many vectors there are and similarly the Chro-
nos values will have an average magnitude of one no matter how many snapshots/timesteps were
included in the analysis.
To make sure the fundamental decomposition remain valid, the modal amplitudes are scaled down, when
corresponding Topos and Chronos are scaled up. Examples are shown below.

Mode 0 will always be the mean of the input datasets followed by the actual BOD modes sorted by
descending amplitude/energy.
The default display of a BOD mode is the Topos, inheriting the datatype of the parent datasets; When
input is a series of vector maps, all Topos will be vector maps also.

The Topos above are identical apart from the normalization. The first figure shows RSS-Scaled Topos, the
second figure RMS-Scaled Topos. The two are topologically identical, but note the different numerical
values in the colorbar at the bottom.

The following examples show what Topos may look like when BOD is applied to Scalar and Image maps
(both are RMS-scaled):

The BOD dataset also contains Amplitude & Chronos, and to see those graphically you must click 'Open as XY-Plot' in the toolbar, or select it in the File Menu:

This will open an X/Y-plot with a number of different curves. Double-click it to get a list of the data available and choose which to show on the x-axis, which on the y-axis and which (if any) to hide:

This lists all the information a BOD mode contains beyond the Topos. The example above has been set up
to plot Chronos as a function of Time and you should get a plot like this:

Again the first figure shows the result when using the (default) 'Root Sum Square' normalization, while the
second is the same data, but using 'Root Mean Square' normalization instead. As for the Topos the two
are identical except for scale (note the values on the y-axes). For clarity the plots above are zoomed in on
the first ~50 Chronos values instead of showing the full set.

With the XY Display Setup you can in principle plot any two parameters as functions of one another; it is your own responsibility whether the resulting curves make any sense. Chronos is for example the only parameter that makes sense as a function of time, while all other parameters should be shown as a function of the Mode Number. The Mode Amplitude distribution is one such plot:

As before the first figure shows BOD results when the default 'Root Sum Square' normalization is used,
while the second figure shows the same data when modes are normalized using 'Root Mean Square'. As
explained above switching from RSS- to RMS-Normalization will increase Topos- and Chronos-values
and decrease modal Amplitudes. This is clearly seen when comparing values on the y-axes of the two
plots above. For clarity both plots are zoomed in on the first 40 modes instead of showing the full set.

From the modal amplitude distribution we can directly derive the energy distribution in various forms:

l The Energy Fraction is proportional to the square of the Amplitude, but scaled so Sum(Energy
Fractions)=1.0.
l The Accumulated Energy is the sum of Energy Fractions up to and including the current mode.
l The Residual Energy equals 1.0 - Accumulated Energy and describes the amount of energy NOT
covered by modes 1..N.

From each of the Chronos and Topos you can furthermore derive Lag-1 AutoCorrelations, quantifying the
degree of similarity between temporal and spatial neighbors.
From each Chronos you can thus derive a temporal Lag-1 AutoCorrelation as follows:
AC_1(\varphi_k) = \frac{2 \sum_{t=2}^{N} \varphi_k(t)\,\varphi_k(t-1)}{\sum_{t=2}^{N} \left( \varphi_k^2(t) + \varphi_k^2(t-1) \right)}
This somewhat unusual formula is in fact the intraclass correlation, exploiting the knowledge that both values in each pair come from one and the same distribution, which has a mean value of zero.
The Lag-1 AutoCorrelation of a Chronos quantifies the degree of similarity (~coherence) between consecutive Chronos-values. For the DC-mode, where all Chronos-values are equal, we get a Lag-1 AutoCorrelation of 1.0, while temporally well-resolved modes will produce AC1-values that are "large", but smaller than one. If the Chronos evaluated is a pure, noise-free sine or cosine, the AC1-value equals the cosine of the phase angle between consecutive samples. If for example there are 8 samples/cycle, the phase angle between consecutive samples will be 360/8=45 degrees and the Lag-1 AutoCorrelation will become AC1=cos(45)=0.71. At the Nyquist limit of 2 samples/cycle there will be 360/2=180 degrees between consecutive samples and the Lag-1 AutoCorrelation will become AC1=cos(180)=-1. This is very unlikely in practice, but negative AC1-values are not unusual among the higher order modes. In a temporally well resolved experiment the first few modes should however produce AC1-values in the range 0.5-1.0.
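
The formula can be evaluated directly; a small Python sketch (illustrative only):

    # Temporal Lag-1 autocorrelation of one Chronos time series
    import numpy as np

    def lag1_autocorrelation(chronos):
        a, b = chronos[1:], chronos[:-1]                   # phi(t) and phi(t-1)
        return 2.0 * np.sum(a * b) / np.sum(a ** 2 + b ** 2)

    print(lag1_autocorrelation(np.cos(np.arange(64) * 2 * np.pi / 8)))   # ~cos(45 deg) = 0.71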

The Lag-1 AutoCorrelation of each Topos is also an intraclass correlation, but including spatial neighbors
instead of temporal. It includes neighbors in all directions (i.e. horizontal and vertical for 2D vector maps).
This way it quantifies the degree of similarity (~coherence) between spatial neighbors. Please note that
the analysis weighs all neighbors equally even if horizontal and vertical distance to neighbors is not the
same. As for the temporal AC1, the theoretical maximum value of 1.0 will be achieved only for a mode
describing a totally homogeneous flow, where all vectors are identical. This is very unlikely even for the
DC-mode, except perhaps if the input data is synthetically generated. As for the temporal AC1 the theo-
retical minimum value is -1.0, but this would require a flow field where all neighbor vectors were equally
long, but pointing in opposite directions. In practice spatially well-resolved modes (corresponding to coher-
ent structures) should produce AC1-values in the range 0.7-1.0, while pure (uncorrelated) noise should
produce small (positive or negative) AC1-values. As seen from the example below even high order modes
may show quite high AC1-values. In the example used here, parent vector maps were computed with 50%
overlap between neighboring interrogation areas, explaining why neighbors appear so similar even for the
high order modes. A closer look at such 'noisy' modes will often reveal small clusters of vectors, that are
indeed similar even if the overall Topos appear random.

High and low Lag-1 AutoCorrelations can for example originate from a set of Topos and Chronos such as
these:

Top: Topos. Bottom: Chronos. Left: High AC1-values. Right: Low AC1-values.

The last parameter you can plot as a function of Mode number is the Kurtosis, which is derived from the
Chronos of each mode. The Kurtosis is the 4th order standardized moment of the Chronos:
\mathrm{Kurtosis} = \frac{1}{N} \sum_{n=1}^{N} \left( \frac{x_n - \mu}{\sigma} \right)^4
The Kurtosis can tell us if the Chronos-values have reasonably constant magnitude over time or if they are
mostly very small with a few large exceptions.
If Chronos is constant over time (as in the DC-Mode for example), Kurtosis will become 1.0.
If Chronos describes a clean, noise-free sine wave, Kurtosis will become 1.5.

If Chronos describes white noise with a Gaussian distribution, Kurtosis will be around 3.0.
Some textbooks subtract 3 from the formula above to get 'Excess Kurtosis'; this will be zero for a normal distribution, positive for distributions that are more 'peaked' than the normal distribution and negative for distributions that are more 'flat'.
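
A direct Python sketch of this definition (illustrative only):

    # Kurtosis of a Chronos time series
    import numpy as np

    def kurtosis(chronos):
        mu, sigma = chronos.mean(), chronos.std()
        return np.mean(((chronos - mu) / sigma) ** 4)

    t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    print(kurtosis(np.sin(t)))                          # ~1.5 for a clean sine
    print(kurtosis(np.random.standard_normal(10 ** 5))) # ~3.0 for Gaussian noise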

Modes with Kurtosis-values significantly above 3 may be caused by and describe only (undetected) out-
liers in the input data. Such modes should of course be excluded from further analysis and the Kurtosis is
intended as a tool to aid the user in identifying them. Please note however that being a fourth order
moment, Kurtosis is a statistical quantity that may depend heavily on the number of samples (i.e. the
number of input datasets). In the figure above it appears for example that Kurtosis-values in the range 2-4
is perfectly normal, so you may question whether the single Kurtosis-value exceeding 5 is 'Significantly'
bigger than 3. If in doubt please take a look at the corresponding Chronos and/or Topos to see whether it
contains evenly distributed noise or a (few) dominant outlier(s).

Apart from graphical displays, the BOD modes can of course be displayed numerically as well:

The first 10 columns are unique to the BOD dataset, describing Chronos as a function of Time as well as
Amplitudes, Energies, Lag-1 AutoCorrelations and Kurtosis as a function of Mode Number. The remaining
columns describe the Topos in the same way normal image, scalar or vector maps are displayed numer-
ically.
If you browse through BOD modes one by one you will see that apart from Topos, Chronos-values are the
only ones changing. The remaining parameters are common for all modes.

15.7.5 Input requirements


BOD Analysis can in principle be applied to any image, scalar or vectormap ensemble, but meaningful
Topos require input that is spatially well resolved. Similarly meaningful Chronos require input that is tem-
porally well resolved. If the flow being measured contains frequencies above half the sampling rate of the
measuring system, aliasing is to be expected and will affect BOD results also.

15.7.6 References
[1] N. Aubry, R. Guyonnet & R. Lima (1991): "Spatiotemporal Analysis of Complex Signals: Theory and Applications", Journal of Statistical Physics, vol. 64, Nos. 2/3, 1991, pp. 683-739.

15.8 Calibration refinement


Calibration Refinement improves the accuracy of an existing stereo calibration by using particle images
acquired simultaneously from both cameras.
Each of the original Imaging Model Fits (IMF's) refer to a coordinate system defined by the calibration tar-
get used. When using the imaging models for later analyses, it is generally assumed that the X/Y-plane
where Z=0 corresponds to the center of the lightsheet, but in practice this assumption may not hold since
it can be very difficult to properly align the calibration target with the light sheet.
Provided the calibration target was reasonably aligned with the light sheet it is however possible to adjust
the imaging model fits by analyzing a series of particle images acquired simultaneously by each of the two
cameras. This adjustment is referred to as Calibration Refinement and changes the coordinate system
used so Z=0 does indeed correspond to the center of the lightsheet as assumed by subsequent analyses
using the camera calibrations (IMF's).

15.8.1 Required input


To perform Calibration refinement you need 4 inputs:

l Two camera calibrations, one for each camera.


l Two ensembles with multiple particle images acquired simultaneously by each of the two cam-
eras.

The particle images can be from actual measurements; they need not be acquired specifically for the purpose of calibration refinement.
You may benefit from preprocessing the particle images, e.g. to remove the background, in which case the processed images can be chosen as input for the calibration refinement. Likewise, preprocessing of the calibration images may be applied to get the best possible initial camera calibrations.

To initiate Calibration Refinement select the required inputs as shown above, right-click either of the orig-
inal IMF's and select 'Calibrate...'. Then pick 'Calibration Refinement' in the list of possible calibrations:

You will get a recipe similar to the one below (you may have to right-click the display area and select 'Fit to
image and graphics zoom' in order to see something there):

The red and blue polygons illustrate the part of the lightsheet visible from each of the cameras according to
the present calibrations. Analysis is only possible in the overlap area visible from both cameras, so in the
example above the cameras' fields of view could have been aligned better.

15.8.2 Refinement Area


The green rectangle illustrates the area in which you wish to perform the calibration refinement. At the top
of the recipe you can select between different refinement areas:

Full Field of View: Encloses the full field of view of both cameras, including areas with data from only one of them.

Common Field of View: The largest possible rectangle within the area of overlap between the two cameras' fields of view.

Common Calibration area: The area where both cameras found calibration markers in the calibration images.

Manual adjust: User defined by dragging the rectangle corners or typing values in a property dialog accessible from the context menu.

Using a very large refinement area is likely to include regions where data is not available from both cam-
eras, but avoiding this by means of a smaller refinement area will probably exclude parts of the overlap
area, where data from both cameras is in fact available.
The smallest of the predefined refinement areas, 'Common Calibration area', is mainly intended for use
with polynomial imaging models which may behave strangely if/when you extrapolate beyond the area
actually covered by the calibration.

15.8.3 Frame
Calibration refinement aims at changing the coordinate system so the X/Y-plane at Z=0 corresponds to
the center of the light sheet. Most PIV-systems use a double-cavity laser in order to facilitate a freely
selectable delay between two consecutive particle images. This means however that it is misleading to
talk about "the lightsheet", since there are in fact two, one from each of the laser cavities. Careful align-
ment of the two laser beams aim at making the two light sheets coincide, but in practice you have to
choose which of the two lasers/light sheets you wish the refined calibrations to refer to. Provided the input
particle images are double-frames this is accomplished simply by choosing whether to refine on Frame 1
or Frame 2 of the images. If input particle images are single-frame, you have no choice but to refine on the
basis of the laser cavity used when acquiring the images.

15.8.4 Interrogation Area Size


When a refinement area and a frame (light sheet) have been chosen, particle images are dewarped (i.e.
projected back to the light sheet using current calibrations and assuming that everything the cameras can
see is in Z=0). If the light sheet is infinitely thin and the current calibrations are correct (i.e. Z=0 does
indeed correspond to the light sheet), the dewarped images from camera 1 & 2 should ideally be identical
since they are acquired simultaneously and thus contain the exact same particles in the exact same phys-
ical positions in space. In practice the images are of course not identical, but for a good set of camera cal-
ibrations the deviations should be small and random. If the calibration target was not properly aligned with
the lightsheet the assumption Z=0 does not hold and there will be systematic deviations between
dewarped particle images from camera 1 & 2. Particles will appear to shift and that can be detected by
using a normal cross-correlation between the dewarped images. In practice we use average correlation
over a series of image pairs, since cross-correlation between a single pair of dewarped images normally
has quite poor S/N-ratio.
For the average correlation you can choose interrogation area sizes of 64x64, 128x128 or 256x256; a fixed
overlap of 50% is used. In general the largest interrogation areas produce the best S/N-ratio and also
have the best chance of recovering large alignment errors. Large interrogation areas are however also more
sensitive to gradients and with angular misalignment between light sheet and calibration target the dis-
tance between target and light sheet varies across the cameras' field of view, so gradients will be present.
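As an illustration of the average correlation idea described above, the following sketch (Python/numpy;
function and variable names are ours, not part of DynamicStudio) sums FFT-based cross-correlation
planes over an ensemble of corresponding, dewarped interrogation-area pairs before locating the peak:

    import numpy as np

    def average_correlation(ia_pairs):
        """Average FFT-based cross-correlation over an ensemble of
        interrogation-area pairs (illustrative sketch only)."""
        acc = None
        for ia1, ia2 in ia_pairs:                      # dewarped IAs from camera 1 & 2
            a = ia1 - ia1.mean()                       # remove mean intensity
            b = ia2 - ia2.mean()
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            acc = corr if acc is None else acc + corr  # sum the correlation planes
        acc = np.fft.fftshift(acc / len(ia_pairs))     # average, zero shift at centre
        peak = np.unravel_index(np.argmax(acc), acc.shape)
        # peak offset from the centre is the disparity (sign depends on input order)
        dy, dx = peak[0] - acc.shape[0] // 2, peak[1] - acc.shape[1] // 2
        return dx, dy, acc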

15.8.5 Disparity Map


The resulting vector map represents the misalignment between the light sheet and the calibration target
and is referred to as a 'Disparity Map'.
To see the Disparity Map, press 'Apply' and wait for the system to dewarp and correlate the series of par-
ticle images:

No matter the size of your refinement area, disparity vectors will only be calculated where data is available
from both cameras.
Furthermore each vector must be distinct in the sense that the correlation peak must rise significantly
above the noise floor. Vectors that do not fulfill this SNR criterion will be shown in red and excluded from further
calculations. If fewer than 10 valid disparity vectors are found you will receive an error message saying
that "Too few valid disparity vectors are found." and no further analysis will take place (i.e. Calibration
refinement is given up).

15.8.6 Dewarped Particle Images


You can switch on a display of the dewarped particle images (the first image in the ensemble is shown):

Using the context menu or keyboard shortcuts A, B & T you can toggle between the dewarped particle
images from camera A and B respectively.

15.8.7 Average Correlation Map


You can also switch on a display of the Average Correlation Map, which will follow the mouse around
showing the correlation map behind each of the disparity vectors:

In the example above we've zoomed in on a particular interrogation area by using the mouse: Either click-
and-drag to draw a rectangle around the area of interest or use the scroll wheel while holding the Ctrl-key
on the keyboard (zooms in and out centered around current position of the mouse cursor).

15.8.8 Interpreting correlation maps


If your disparity vectors do not look as nice as in the example above, inspection of the correlation map can
be very helpful in understanding what the problem is.
It is quite normal for the correlation peak to be elongated; this is a consequence of the light sheet having a
finite thickness and the cameras looking at it from different angles.
If the peak appears to be fragmented (multiple peaks along a common ridge) it is probably because too few
particle images went into the calculation. This can be simply because there were not enough images or
because you're at or near the edge of the light sheet where light intensity is low and only a few of the
images contain particles big/bright enough to be detected by both cameras.
If the correlation map contains no peaks at all it is possible that the misalignment between light sheet and
calibration target was simply too big for the calibration refinement to recover the true position of the light
sheet. In that case the peak we're looking for will be outside the interrogation area and you may try using a
bigger one. Alternatively, check whether you are by mistake working on the basis of particle images that were
not acquired simultaneously, in which case they will of course not correlate at all.
Finally you may consider the possibility that the cameras and/or the light sheet moved during acquisition;
If for example mirrors are involved in bringing the light sheet to the intended measuring area small mechan-
ical vibrations can cause the light sheet to move several mm. Similarly mechanical vibrations may cause
the cameras to move, which will of course also have serious impact on the calibration and subsequent
analysis of images.

15.8.9 Interpreting Disparity Vectors


Each disparity vector can be interpreted as a measure of the distance from the nominal Z=0 to where the
lightsheet really is. The vectors will in general point in one direction if the light sheet is closer to the cam-
eras than assumed and in the opposite direction if it is further away.
Assuming the lightsheet is plane we can make a least squares fit of a plane through all of the disparity vec-
tors and use the fitted plane as an estimate of where the light sheet really is. We can then define a trans-
formation (rotation and translation) that will be able to move points back and forth between the original
(target) coordinate system and a modified (lightsheet) coordinate system where Z=0 corresponds to the
center. Applying this transformation to the calibration markers from the original calibration images we
obtain a new list of corresponding image and object coordinates which can then be fed into the normal cam-
era calibration routines to generate modified calibrations.
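A minimal sketch of the plane fit described above is given below (Python/numpy; names are ours, and the
per-point Z-offset dz is assumed to have been derived already, since converting a disparity vector to a
Z-offset additionally requires the viewing directions of both cameras):

    import numpy as np

    def fit_lightsheet_plane(x, y, dz):
        """Least-squares fit of the plane z = a*x + b*y + c through the
        per-point Z-offsets derived from the disparity vectors (sketch)."""
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, dz, rcond=None)
        normal = np.array([-a, -b, 1.0])
        normal /= np.linalg.norm(normal)   # unit normal of the fitted light-sheet plane
        return a, b, c, normal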

Press OK to store the refined camera calibrations in the database. The refined calibrations will use the
same imaging model as the original camera calibrations (Pinhole camera model in the example used here).

To verify that the modified camera calibrations match the position of the light sheet better, you can try to
perform yet another Calibration Refinement, this time using the modified calibrations as input data along
with the same particle images as before. Resulting disparity vectors should be smaller than before:

A third set of camera calibrations will be generated and stored if you press OK again, and you can of
course continue with even more iterations if you wish.
Resulting camera calibrations will be stored in the database as a chain of imaging model fits derived from
one another:

To visually see how the iteration moves to fit the lightsheet you can also overlay each of the imaging
model fits on an image:

-and zoomed in on the upper left corner:

Red is the original calibration, yellow is the first iteration, green the second (a third iteration was tried, but
didn't make any visible difference in this example).
Note: Overlaying refined calibration grids on calibration images will not align very well with calibration
markers and cannot be expected to. Think of it as an indication of where the calibration markers should
have been if they had been aligned with the center of the light sheet.

15.8.10 Change of Coordinate System


Measurements will always take place where the light sheet is no matter where the calibration target was
when calibration images were acquired. Therefore calibration refinement by design changes the coor-
dinate system from one aligned with the calibration target to another aligned with the light sheet. In order to
change it as little as possible the coordinate system is first rotated around (0,0,0) to make the X/Y-plane
parallel to the light sheet (i.e. without moving the origin). Then the coordinate system is translated along
the (newly rotated) Z-axis to make Z=0 correspond to the center of the light sheet.
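The following sketch illustrates how such a transformation could be constructed from a fitted plane normal
(cf. the plane-fit sketch earlier) and a point on the plane (illustrative Python only, not the exact
DynamicStudio routine):

    import numpy as np

    def target_to_lightsheet_transform(normal, plane_point):
        """Rotate about (0,0,0) so the fitted plane normal becomes the new
        Z-axis, then translate along the new Z-axis so that Z=0 lies on the
        plane. Assumes the normal points roughly along +Z."""
        z = np.array([0.0, 0.0, 1.0])
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        v = np.cross(n, z)                            # rotation axis (unnormalised)
        s, c = np.linalg.norm(v), np.dot(n, z)
        V = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        R = np.eye(3) if s < 1e-12 else np.eye(3) + V + V @ V * ((1 - c) / s**2)
        t = np.array([0.0, 0.0, -(R @ np.asarray(plane_point, dtype=float))[2]])
        return R, t                                   # target point p maps to R @ p + t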

15.9 Coherence Filter
Feature trackers and cross-correlation may occasionally yield completely wrong velocity vectors. To
enhance the result of measurement, coherence based post-processing is applied to the 'raw' velocity field
obtained. The coherence filter modifies a velocity vector if it is inconsistent with the dominant surrounding
vectors. The solution we use is a modified version of the vector median filter. The procedure operates as
follows:

Given a feature point Pc with the velocity vector vc, consider all features Pi, i = 1, 2, ..., p, lying within a
distance S from Pc, including Pc itself. Let their velocities be vi. Due to coherent motion, these vectors are
assumed to form a cluster in the velocity space. Introduce the mean cumulative difference between a
vector vi and all other vectors vj:

    di = (1/p) Σj |vi − vj|

The median vector vmed is the vector that minimizes this cumulative difference. Its index is

    m = argmin i (di)

dm, the mean cumulative difference of the median velocity, characterizes the spread of the velocity
cluster. The standard median filter substitutes vc by the median vmed. In our implementation, vc is
substituted by vmed only if the difference between vc and vmed is significant compared to dm, i.e. if
|vc − vmed| exceeds a threshold proportional to dm.

The standard median filter tends to modify most of the measurements and introduce an additional error.
The conditional median filter only modifies the vectors that are likely to be imprecise or erroneous meas-
urements.
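A compact sketch of this conditional vector median filter is given below (Python/numpy; the significance
factor 'alpha' is a hypothetical name for the internal threshold, not a DynamicStudio parameter):

    import numpy as np

    def coherence_filter(points, vectors, radius, alpha=1.0):
        """Conditional vector-median filter over all vectors within 'radius'
        of each point (illustrative sketch)."""
        out = vectors.copy()
        for c, (pc, vc) in enumerate(zip(points, vectors)):
            nbr = np.linalg.norm(points - pc, axis=1) <= radius   # neighbours incl. centre
            vs = vectors[nbr]
            # mean cumulative difference of each neighbour to all the others
            d = np.array([np.linalg.norm(vs - vi, axis=1).mean() for vi in vs])
            m = np.argmin(d)                                      # index of the median vector
            if np.linalg.norm(vc - vs[m]) > alpha * d[m]:         # significant deviation?
                out[c] = vs[m]                                    # substitute by the median
        return out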

Radius: the distance from the vector of interest. All vectors within this radius are used for validation.
The effect of the filter is shown below:

A. No filtering

B. Coherence filter (radius = 30 pixels)

15.10 Combustion LIF processing


This method is used to calculate density number maps of a given combustion species such as OH, CH
and others inside a user-defined region of interest of a raw single-frame LIF image. The result is returned in
molecules/m3 (on frame_1) with the instantaneous error levels (on frame_2). Note that the error is relative
to the global energy absorption; i.e. the map should be multiplied by Tr to obtain the relative error in % of
[Nx ] (See the application manual for the notation used.)
Content:

l Image analysis using 1 energy pulse monitor (Processing based on emitted energy level) or 2
energy pulse monitors (Processing based on transmitted energy level)
l Interpretation of the results

15.10.1 Image analysis


When calling the 'LIF Processing' method, the dialog window shown below pops up:

Combustion LIF processing dialog window.

1. First, in the 'Camera/ analog setup' tab the operator must specify whether 1 or 2 energy pulse
monitors are used during the experiments. Select the 'Emission' channel and the 'Transmission'
channel, which is set to "N/A" when only 1 energy pulse monitor is used. Set the parameters cor-
responding to the imaging characteristics (i.e. intensifier QE, gain, camera filter transmission,
etc.): This will be used to define the so-called ' light collection efficiency' coefficient (Refer to the
Combustion_LIF manual for transmission curves, etc.) When using 2 energy pulse monitors, note
that these parameters are not required for data processing, as analysis is based on the trans-
mitted light; i.e. Energy (Before flame) - Energy (After the flame), i.e. reference is made to the
total light budget and total signal budget on the light propagation path (on which energy absorption
is estimated).
2. Make sure the light propagation is not done at a 0-angle and use the 'Rotate image' option available
in the Image Processing Library to correct the view if necessary (or simply make sure the
camera is placed correctly before the recording of LIF data). Also, make sure that the analogue
input(s) is/are re-scaled to the proper dimension and are not in Volts. (Use the 'Calibration analog
input' and 'Rescale analog input' method to this effect.)
3. Define the light sheet characteristics by setting the height (in mm), the center position (Yc in
pixel), light sheet type for local energy correction, etc. Using the mouse, set the region of interest
(right-click, drag and un-click) inside which processing should take place. If needed, manually
refine the area by entering the (X, Y) pixel positions of the upper/left and lower/right corners of this
area. Note that if this area is larger than the light sheet height, the algorithm will automatically set
0-values in the outer parts of the illumination.
4. Select the combustion radical imaged (this will set values for several constants such as the emission
line) and complete the absorption cross-section box with the dimension x10-20 cm-2.
Note: With version 1.00 of the Combustion-LIF software, only OH is available. The option
'User-defined' is enabled but a couple of spectroscopic constants will not be set properly.
Although energy profile correction, energy budget, etc. will be implemented, the absolute values
will not be correct.

Once ready, press the 'Apply' and 'Display' buttons to preview the results and then 'OK' to accept it. The

resulting image is then labeled with the icon for quick identification inside the database.

15.10.2 Interpretation of the results


The result is provided as a double-frame map with density number [Nx ] (in molecules/cm3) on frame_1 and
an error map on frame_2 (only when 2 energy pulse monitors are used) to help assess the quality of the
instant LIF result. This error map is based on energy absorption (Tr) on a portion of the light sheet and
therefore features a gradient in amplitude with the light propagation direction (Ideally, the value of Tr is
between 0 and 5 but sometimes it can be up to 15). To estimate the effective error on the instant density
number map, the user should multiply frame_2 by the absolute error done on Tr (which value can be
derived when calibrating the analogue input channels for emission and transmission light, respectively).

Example number density [Nx ] map with rescaled analog inputs 1 and 2 to mJ/pulse (Processing using 2
energy pulse monitors).

15.11 Cross-Correlation
It is possible to use both double images and single images as input.

15.11.1 Cross-correlating single images


It is possible to correlate single frame images in many different ways. The order of selection is also impor-
tant.
Please be aware that DynamicStudio uses the Time between Pulses stored with the Setup as a reference
when computing the velocity in m/s, so if the Time between Pulses does not correspond to the actual time
between two image recordings, the displayed velocity is scaled incorrectly.
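The scaling involved can be summarized in a short sketch (illustrative Python; names are ours):

    def displacement_to_velocity(dx_pixels, scale_m_per_pixel, time_between_pulses_s):
        # Velocity in m/s as computed from a pixel displacement.
        return dx_pixels * scale_m_per_pixel / time_between_pulses_s

    # If the setup stores dt_stored but the actual time between the two image
    # recordings was dt_actual, the displayed velocity can be corrected by:
    #     v_true = v_displayed * dt_stored / dt_actual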

15.12 Curve fit processing


This method is used to fit experimental data such as XY-plots and probability (distribution) plots to
non-linear functions. To use this method, select the file(s) of interest and call the method "Curve fit processing"
located in the 'LIF Signal' category.
Content:

l Curve fitting procedure
l Open data fit as numeric
l Overview on non-linear fit models

15.12.1 Curve fitting procedure


To fit a set of (X, Y)-data to a given function:

1. Select the X- and Y-data columns to be used


2. Specify the model to use and if not in the list of "most-commonly used functions", press the
'User-defined fit' checkbox and write down the equation to fit as shown in the example below.
3. (When necessary, set initial values to the tolerance and max. iteration coefficients)
4. Press the 'Apply' button to check the output and modify the fit function type if necessary.

Dialog window for data fit.


To help assess the goodness of fit, press the 'Display' button to edit parameters of the resulting figure and
select/unselect the 'Raw data', 'Fitted data' and 'Error (%)' (e.g. weighted residuals) checkboxes for quan-
titative assessment.

Dialog window for control over quality of data fit and figure display.

Example of curve fit output including raw and calculated data.


Click on 'OK' to accept the final results. The data are then stored in the database and labeled with the icon

to facilitate their location among other types of calculations.



15.12.2 Open data fit as numeric


Once the fit data are stored in the database, direct access to the coefficients, errors (%) etc. is made

using the shortcut . Selecting the 'Copy to clipboard' or 'Export All as file…' options (click the right mouse
button), the data are easily retrieved.

Typical output of a 'curve fit' file when opened as numeric.

15.12.3 Overview on non-linear fit models


Polynomial function:             y = Σ (n=0..5) a_n x^n
Gauss function:                  y = a0 exp( -1/2 ((x - a1)/a2)^2 )
Exponential function:            y = a0 exp(a1 x)
Exponential Expansion function:  y = a0 + a1 exp(-x) + a2 x exp(-x)
Beta function:                   y = a0 x^a1 (1 - x)^a2
User-defined:                    User-defined function
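For reference, the sketch below shows how the Gauss model from the table could be fitted outside
DynamicStudio using SciPy; the data arrays and initial guesses are assumptions made for the example:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a0, a1, a2):
        # Gauss model from the table above: y = a0 * exp(-0.5*((x - a1)/a2)^2)
        return a0 * np.exp(-0.5 * ((x - a1) / a2) ** 2)

    # x, y would typically be the two columns selected from the XY-plot dataset
    x = np.linspace(-5, 5, 101)
    y = gauss(x, 2.0, 0.5, 1.2) + 0.05 * np.random.randn(x.size)   # synthetic example data

    coeffs, cov = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0])        # p0 = initial guesses
    residuals = 100.0 * (y - gauss(x, *coeffs)) / np.maximum(np.abs(y), 1e-12)  # error (%)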


15.13 Define Mask


This method is used to define a mask that can subsequently be used by See "Image Masking" on page
313 or See "Vector Masking" on page 579, to mark regions of specific interest.
To define a mask, select an image ensemble and bring up the analysis selection dialog as shown below.

Selecting the Define Mask option will display the dialog below:

The view and appearance of the image can be adjusted by using the color map (for details see "Using
the display from within an analysis method" on page 633).

15.13.1 Adding shapes to the mask

The mask can be composed of three types of shapes (rectangles , polygons and ellipses ). To add
a shape to the mask, click on the desired mask shape in the toolbar, and select the appropriate mask type
(reject, outside, disable or transparent).

After the shape has been selected, click on the image to start specifying the shape location and size.
Once the shape has been created its appearance can be further adjusted as described in See "Adjusting
the rectangle" on page 638, See "Adjusting the polygon" on page 636 or See "Adjusting the ellipse" on
page 635.
To change the mask type of an existing shape (or a group of shapes) select the shape and specify a new
type in the selection box.

15.13.2 Deleting shapes


To delete a shape, simply select the shape(s) and press the <Delete> key.

15.13.3 Selecting multiple shapes
Multiple shapes can be selected by holding down the <Shift> key while dragging a rectangle that contains
the shapes to be selected, or by selecting the shapes individually while holding down the <Ctrl> key.

15.13.4 Ordering shapes


The final mask will be composed of all the specified shapes, if shapes are overlapping the topmost shape
will determine which mask type will be used.
The ordering of shapes can be adjusted by right-clicking the individual shapes and selecting "Bring to
Front"/"Send to Back" from the context menu.

15.13.5 Preview the final mask


If the "Preview Mask" checkbox, located in the tool bar, is selected, a preview of the final mask will be
superimposed onto the image. From this preview it is easily verified that the correct ordering of shapes is
met.

15.13.6 Vector map Overlay


If a vector map is selected in the database, prior to entering the define mask dialog, the vector map will be
displayed as an overlay to the image and the mask. The vector map display can be toggled on/off by the
check box "Show Vectors", and the scaling of the vectors can be adjusted by the slider "Vector Scaling".

15.14 Diameter Statistics


The histogram display refers directly to the underlying histogram dataset, which is a subset of the IPI data-
set or Shadow dataset.

When selecting Shadow Histogram, the following window appears.

Histogram setup
Type the minimum, maximum diameter and number of bins to be used.

Process
In double frame mode, select which Image to process: A for the first frame, B for the second frame or both.

Region
Check "use entire area" to process the entire image or type the coordinates (in pixels) of the region of interest.

Click on Apply or OK to execute processing of the histogram. The software will run through all the
selected datasets and then display the final result in a plot window

The histogram display refers directly to the underlying histogram dataset, which is a subset of the first
dataset in the IPI dataset series. The data can be displayed in either tabulated or graphical form.
The graphical display of the histogram shows one of three possible histograms:

l Diameter histogram
l Area histogram
l Volume histogram

For a description on how to change the graphical setup of the display, please refer to "XY Display" (on
page 644).

Below the graph (in the info box), are shown the diameter statistics:

l D10: diameter mean.


l Counts: the number of valid particles used in the histogram.
l D StdDev: standard deviation of diameters.
l D20: area mean

l D30: volume mean
l D32: Sauter mean.
l D43: De Brouckere mean

The Dpq diameter statistics mentioned above are defined as follows:

    Dpq = ( Σ Di^p / Σ Di^q )^(1/(p − q))

where the sums run over the diameters Di of all valid particles.
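A small numerical sketch of these mean diameters (Python/numpy; function name is ours):

    import numpy as np

    def dpq(diameters, p, q):
        # General mean diameter D_pq = (sum(D^p)/sum(D^q))**(1/(p-q))
        d = np.asarray(diameters, dtype=float)
        return (np.sum(d**p) / np.sum(d**q)) ** (1.0 / (p - q))

    # D10 = dpq(d, 1, 0), D20 = dpq(d, 2, 0), D30 = dpq(d, 3, 0),
    # D32 (Sauter mean) = dpq(d, 3, 2), D43 (De Brouckere mean) = dpq(d, 4, 3)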

15.15 Extract
The Extract analysis method (formerly known as Extract to XY line plot) makes an extract of multiple data-
sets in selected positions (i, j). This is particularly useful in connection with time resolved data, but can
also be used in other connections.
The result from an extract can either be opened as Numeric (exported or copied via the clipboard to e.g.
Excel) or it can be viewed in DynamicStudio with Open as XY plot (shortcut Ctrl+X).

l Example: making a time series and phase plot


l Extracting gray values from a series of images

15.15.1 Examples

Making a Time Series and a Phase Plot


Time resolved data are extracted from 250 vector maps in 5 points. The points are indicated in the flow
with circles. The flow is a flow over a cylinder generated by a slightly larger jet below the cylinder.

The points are selected by the position tab with index numbers and the tools given here.

In this example, we have extracted the V-velocity component as indicated in the Quantity window. This
gives us a plot of the five points as a function of the vector map index.
We have additionally indicated that we would like to use the elapsed time (index multiplied with recording
time), by checking this in the X candidate.
Finally, to generate a phase plot of the V-velocities, we press "Add" and select V. This will give us the V
component, which can then be used as an X axis candidate.

The time series plot, showing 3 of the points (right click on the plot to select other point, un-check, change
the plot format)

By right clicking on the plot, it is also possible to select one of the V-components along the X-axis. Here
the V from (38, 22) is plotted along the X-axis and four of the points are plotted along the Y-axis. The one
from (38, 22) obviously provides a straight line.

Local Grayscale Variations
Use this Extract analysis method to investigate local variations in the grayscale values of images in multiple
positions. If the image is a double image, grayscale values are extracted from both frames. (Formerly a
resampling was necessary for extracting grayscale values from image maps; this is no longer required.)

Linearity of LIF Measurements


If you are making LIF measurements using the analog input to sample the laser energy, an extract of the
LIF signal from the images plotted as a function of the analog input, could show the linearity of the two sig-
nals.

15.16 Feature Tracking


Feature Tracking is an optical flow analysis tool, where flow structures are tracked from frame-to-frame.
Feature tracking is analogous to correlation techniques in that the user specifies areas of interest and iter-
atively solves for the result. However, tracking techniques do not search the same place on each image,
rather, the structure found on a previous image is tracked and if found its new position is recorded. In this
way a structure can be followed for many frames, perhaps the entire length of the recording sequence.
The KLT tracker works as follows: good features are located by examining the minimum eigenvalue of
each 2 by 2 gradient matrix, and features are tracked using a Newton-Raphson method of minimizing the
difference between the two windows. Multi-resolution tracking allows for relatively large displacements
between images. Each feature being tracked is monitored to determine if its current appearance is still sim-
ilar, under affine distortion, to the initial appearance observed in the first frame. When the dissimilarity
exceeds a predefined threshold, the feature is considered to have been lost and will no longer be tracked.
At least two frames are needed for the operation of the algorithm.
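The principle can be illustrated with OpenCV's pyramidal KLT tracker, which uses the same feature
selection and tracking ideas (this is not the DynamicStudio implementation; file names and parameter
values below are assumptions made for the example):

    import cv2
    import numpy as np

    img0 = cv2.imread("frame000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
    img1 = cv2.imread("frame001.png", cv2.IMREAD_GRAYSCALE)

    # Select features by the minimum eigenvalue of the local gradient matrix
    p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500, qualityLevel=0.01,
                                 minDistance=5, useHarrisDetector=False)

    # Track them with the pyramidal (multi-resolution) Newton-Raphson KLT step
    p1, status, err = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None,
                                               winSize=(21, 21), maxLevel=3)

    ok = status.ravel() == 1                                   # successfully tracked features
    displacements = p1[ok] - p0[ok]                            # per-feature frame-to-frame shift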

Required input: an ensemble of single frame images.
Number of features to find: the maximum number of features to find per image.
Test: this will determine the maximum number of features to find by analyzing several images in the input.
Window size: the size of the feature window.
Maximum iterations: the number of iterations that the tracking algorithm will use to find a feature. A feature
is considered lost if it is not found within this criterion.
Minimum eigenvalue: the minimum allowable eigenvalue for new features to be selected.
Maximum residue: the maximum residue, averaged per pixel, when tracking.
Minimum displacement: the minimum displacement, in pixels, necessary to stop the iterative tracker and
declare tracking successful.
Search range:
Minimum distance: the minimum acceptable distance between each feature.
Image smoothing: apply image smoothing.
Auto-contrast: Normalize images for gain and bias.
Border x,y: size of the border in pixels that is not analyzed. This is necessary due to the nature of a
Gaussian convolution (much of the image is unknown). Set to –1 for automatic setup.
Calculate: This will calculate a recommended border size on basis of the setup.
The data can be displayed in several ways:

A. Display all tracks for all frames:

B. Partial tracks:

Set an interval of features to display and tracks with histories equal to or longer than the specified range
will be displayed. Use ‘N’ as next and ‘P’ as previous keys to navigate.

C. Individual features:

To see individual features frame-by-frame, set the interval start and end to the same value. Use ‘N’ as
next and ‘P’ as previous to navigate up and down the dataset.

D. Evolution of a track:

The evolution or progressive history of all features can be displayed by setting “Show sequence”. Use the
keys ‘M’ to move forward in time and ‘R’ to reset to the beginning.

15.17 FeaturePIV
The Feature PIV module takes as input the feature table from the feature-tracking module and uses the
positions found to provide the points of interest for cross-correlation vector analysis. The advantage of
exploiting feature tracking is that regions in an image that contain useful information are used to seed the
cross-correlation, rather than specifying a regular 2D mesh.
There are three forms of output:

l The direct displacements from the feature table are converted to vectors.
l The positions specified in the table are used to provide interrogation positions for cross-
correlation analysis.
l Same as above but using the adaptive cross-correlation.

A) Direct feature processing of vectors:
Process only feature displacements: vectors are determined directly from the feature translations
frame-by-frame.
Apply coherence filtering: additional filtering of vectors derived from feature displacements.
Radius: coherence filtering radius. Coherence filtering takes a vector within a local neighborhood that
best represents the local flow.

B) Cross-correlation processing of images. The positions of features are used to locate


the origin of interrogation areas used for cross-correlation.
Select correlation: select the cross-correlation algorithm: cross- or adaptive.
Interrogation areas: the size of the interrogation areas.
Moving average validation: apply moving average validation of the resulting vectors using the supplied
Iterations and Factor.
Peak height validation: validation based on height ratio between the first two dominant peaks of the
cross-correlation.
Additional options:
Include non-tracked features: this will include new tracks that do not have a corresponding particle in a
previous image.
Since the positions are located as specified by the feature tracking and not in any regular format, the Flex-
PIV engine is used to process the data. The advantage of converting the data in this way is that the data
can then be compared to other methods and processed by other analysis modules found in DynamicStudio.
A vector result is shown below:

15.18 FlexPIV processing
FlexPIV processing is a very flexible way of performing PIV analysis. Conventional PIV produces velocity
vectors positioned in a rectangular grid, but FlexPIV will allow you to calculate vectors in more or less arbi-
trary positions, and thus adapt to the flow-field at hand. Furthermore, conventional PIV normally applies the
same analysis to all points, while FlexPIV will allow you to use different analysis settings in different
regions of the flow-field.
In practice FlexPIV comprises two elements:

l FlexGrid is used to define grid points and determine analysis settings


l FlexPIV Processing is used to perform the actual analysis.

To define a grid, select an image ensemble and bring up the analysis selection dialog as shown below.

The definition of both vector grid and analysis methods is too complex for a simple recipe, so FlexGrid is
launched as a separate program, exchanging information with DynamicStudio.
From DynamicStudio v3.20 the grid object is saved in the database, but the traditional way of handling
grid objects, via so-called grid-files with extension '.grd', is still supported.

Select the grid object saved in the database as input for the FlexPIV method. If you do so you will not, as
described below, have the possibility to edit the grid object from within the recipe, but will have to make
changes to the grid object via the Define Flex grid method. This is done by selecting Show recipe on the
grid object saved in the database.

From the DynamicStudio recipe you have the option to either create a new grid file or to load and/or edit an
existing one:

Pressing 'Apply' or 'OK' you can of course also activate FlexPIV Processing to perform PIV analysis
according to specifications in the grid file.
Please remember to save the grid file whenever you've made changes from inside the FlexGrid software.
If you fail to update (save) the grid file DynamicStudio will not be aware of your changes.
You can store the grid file anywhere the PC can get access to it, but it is recommended to store it together
with the data you intend to process. If you plan to use the same grid file for processing of data from dif-
ferent databases you might consider local copies of the grid file instead of one global file. If a grid file has
been used for processing, later changes to the file will cause loss of traceability for the old PIV results.
With local copies of the grid file this is less likely to happen.
In the following you will find a brief introduction to the fundamental concepts in FlexPIV. For detailed expla-
nations please refer to the written manual supplied with your system. A full and complete documentation
of all features in FlexPIV is beyond the scope of this help-file.

15.18.1 Defining grid points


In FlexGrid points are defined on the basis of grid objects. There are three fundamental objects available;

Rectangles ( ), Ellipses ( ) and Polygons ( ).


Grid objects can be drawn directly on the screen by using the mouse; For rectangles and ellipses you
simply click and drag the mouse, for polygons you left-click once for every vertex point and double-click
on the last vertex point to finish it. Subsequently you can modify the objects using the mouse or the prop-
erty editor in the right-hand side of the FlexGrid working screen.

Each object can be one of four possible types; Grid ( ), Hole ( ), False hole ( ) or Wall ( ).
The default type for new objects is 'Grid', meaning that grid points will be generated inside the object
boundaries. Objects can however be changed easily to any of the other object types. A 'Hole' is a region
where you do not wish to calculate velocity vectors, whereas a 'False hole' is an area where you wish to
change local settings for vector distribution and calculation. Holes and false holes have to be linked to a
grid object in order to have any effect (the 'hole' has to be in something). The link is created by selecting

the grid and the hole object and then clicking the 'Group'-button ( ).
Within grid objects grid points are automatically generated according to one of four principles. The grid can

be Cartesian ( ), Polar ( ), Elliptic ( ) or Delaunay triangulation ( ). The last two are only avail-
able when you switch the user interface to 'Expert mode'. For each of the grid types point spacing and sim-
ilar are controlled from the property editor. By default the grid origin is in the lower left corner of the object,

but clicking in the toolbar you can move it to the center of the object. This is useful for the polar and ellip-
tic grid types.

15.18.2 Defining vector analysis
Apart from defining where you want velocity vectors, you also need to specify how to calculate them. All
grid points within a grid object share a common set of processing parameters, but using multiple grid
objects you can have different processing options in different regions of your flow-field. For vector proc-
essing you can choose either conventional cross-correlation or adaptive correlation. For both of these you
must choose interrogation area size (the IA will be centered around each of the grid points), and you can
also choose various other processing parameters including validation and sub-pixel interpolation
schemes:

15.19 Grid Interpolation


Grid interpolation reconstructs an irregular or regular grid of scalar or vector data onto a user specified reg-
ular grid. The method of Thin Plate Splines (TPS) is used to interpolate data. TPS is an algorithm for inter-
polating and/or fitting 2D data. As the name implies, TPS essentially takes as input 2D data and "bends" a
flat plate until all the points pass through it. In the event of too much noise and the possibility of sin-
gularities, the user can apply "relaxation". Zero relaxation forces the plate (surface) to pass through all the
input points, a large value reduces the result to a least squares approximation.

Automatic scaling implies that the resulting scalar or vector map adopts the limits of its parent. Alter-
natively the user can enter X- and Y- min/max limits.
Interpolation is controlled by the following two parameters:

1. Radius: the distance in pixels about a point of interest when collecting data points for an inter-
polation.
2. Relaxation: the degree of relaxation, that is, requirement, that all points pass through the resulting
surface created during interpolation.
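A minimal sketch of such a thin-plate-spline interpolation onto a regular grid, using SciPy (names are ours,
and the mapping of 'Relaxation' onto SciPy's smoothing term is an assumption made for the example):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def interpolate_to_regular_grid(points, values, xlim, ylim, nx, ny, relaxation=0.0):
        """Thin-plate-spline interpolation of scattered (x, y) data onto a
        regular nx-by-ny grid; relaxation=0 forces the surface through every
        input point, larger values relax towards a least squares fit."""
        rbf = RBFInterpolator(points, values,
                              kernel='thin_plate_spline', smoothing=relaxation)
        xs = np.linspace(xlim[0], xlim[1], nx)
        ys = np.linspace(ylim[0], ylim[1], ny)
        X, Y = np.meshgrid(xs, ys)
        grid = rbf(np.column_stack([X.ravel(), Y.ravel()])).reshape(ny, nx)
        return X, Y, grid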

The irregular grid above (created by FlexPIV) can be interpolated onto a regular grid and produce the fol-
lowing result.

15.20 Histogram
The Histogram method is used to extract statistical information about vector and scalar datasets. The
result from the method is a histogram showing the number of counts of a selected variable of the dataset. It is
possible to define a Region of Interest and a temporal window length on which the Histogram information
is to be calculated.
The recipe is able to compute a histogram based on the following inputs:
• 2D-2C Vector Maps, calculated from any 2D PIV method
• Stereo-PIV 2D-3C Vector Maps
• Volumetric Velocimetry 3D-3C, obtained through the 3D LSM technique
• Scalar Maps, gradients or vorticity for example.

15.20.1 The Recipe dialog

Selecting component
The top area of the recipe dialog is for selecting the component. All available components will be listed
and it will then be possible to select a specific component. Please ensure that one variable is selected
prior to starting the computation.

Include vectors
It is possible to select whether all, only valid, or only valid and non-substituted vectors will be included as
input for the calculation.

Histogram Settings
Minimum and Maximum values specify the limits for the resulting histogram. If a value is outside the
valid range between Minimum and Maximum, this value is out of range. "Out of range" values will be added
to "out of range bins", which can be included in the resulting histogram by checking "Add out of range bin's".
"Number of bin's" specifies how many bins the range between Minimum and Maximum will be divided into.
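The binning described above can be sketched as follows (Python/numpy, illustrative names):

    import numpy as np

    def histogram_with_out_of_range(values, vmin, vmax, nbins, add_out_of_range=True):
        """Histogram between vmin and vmax with 'nbins' bins; values outside
        the range are optionally collected in two extra out-of-range bins."""
        v = np.asarray(values)
        counts, edges = np.histogram(v, bins=nbins, range=(vmin, vmax))
        if add_out_of_range:
            below = np.count_nonzero(v < vmin)
            above = np.count_nonzero(v > vmax)
            counts = np.concatenate([[below], counts, [above]])
        return counts, edges

    # Displaying each bin as a percentage of all counts:
    #     percent = 100.0 * counts / counts.sum()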

ROI settings (specified in grid index)
It is possible to have the result based on the full vector map (by checking "use full vector map"), or to specify
a Region of Interest (ROI) in the vector map. Enter a starting point (i,j) and the width and height of the
desired ROI, specified in grid index. The grid indexes are shown at the bottom left of the data display and
correspond to the location of the mouse (see the following figure). If a ROI is specified only vectors
inside the ROI will be included in the calculation.
The recipe dialog above shows how the recipe will look if the input vector map is a 2D vector map. If the
input vector map is a 3D vector map an input for 'k' and 'Depth' will be added to the recipe.
"Temporal window length" determines how many datasets are included in the calculation. "Include all" will
set the contents of Temporal window length to "Auto". This means that all datasets in the parent will be
included in the calculation and only 1 histogram will be computed (N-1 output). If more than 1 is selected
then the number of results will be equal to the "Number of datasets" in the parent ensemble, and each Histogram
will be computed using a number of snapshots equal to the specified "Temporal window length" (N-N output).

Results from Histogram


An example of a result from Histogram can be seen below:

By default the histogram displays each bin as a percentage of all counts. The user can choose to have the
result displayed with counts instead using the display options menu available by right click -> "Display
options". More general information on the display can be found here.
The statistical information below the histogram is based on the vectors included in the range specified by
the recipe entries Minimum and Maximum value. The RMS value is computed on the mean-subtracted
data.

15.21 Image Arithmetic


As indicated by the name, the method enables arithmetic on pixel values. Operations can be performed on
any type of images (for example 8-, 10- or 12-bit images as well as floating point images), and can be
applied to both single- and double-frame images.

Four types of operations can be performed:

l Addition and subtraction


l Multiplication and division

And two operand types are available:

l With an image as operand


l With a constant value as operand

It is possible to combine the two operands so you can for example subtract another image and then add a
constant value as shown in the example below.
Finally you have the option to perform data clamping on the result before returning it. This is useful to limit
the output to a certain range.

15.21.1 Image arithmetic with another image


The operand image has to be pre-selected, i.e. selected before you enter the Image Arithmetic recipe. To
learn more about selecting data for analysis, please refer to the section User Selected Data in the help doc-
ument 'Working with the Database'. Having selected the operand image enter the Image Arithmetic recipe
and the selected image will be listed as operand image in the top section of the recipe. Choose whether to
subtract, add, multiply or divide by the operand image and then click 'Apply' to pre-view the result or
simply click 'OK' to calculate and accept the result.

Please note that the option 'Divide by' involves the risk of division by zero in case the operand image
includes pixels with a value of zero. For these pixels the analysis will produce output pixel values of zero.

15.21.2 Image arithmetic with a constant


Select the operation between (Addition/Subtraction) and (Multiplication/Division) and enter the value. As
global information, the average pixel value (or rounded scalar value with floating point images like LIF
images) is reported in the recipe window.

The option 'Not in use' allows you to use only an image operand or a constant operand. If you leave both of
these checkboxes unchecked and thus perform both operations, the image operand will be applied first
and the constant operator second.

15.21.3 Data clamping


The calculated result can be clamped to user specified limits if so desired. This is set up in the lower section
of the recipe, where upper and lower clamp limits and values can be enabled and assigned individually.
For the upper clamping, pixel values that exceed the specified clamp limit will be assigned the specified
clamp value. Similarly for the lower clamping, pixels with a value smaller than the specified limit will be
assigned the specified lower clamp value.
Please note that, whether or not you enable data clamping, 8-bit images are always clamped to the limits 0-
255, 10-bit images to 0-1023, 12-bit images to 0-4095 and so on. In these cases data clamping is relevant
only if you wish to limit output values further.
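The combined arithmetic and clamping behaviour can be sketched as follows (Python/numpy; function and
argument names are ours, not recipe settings):

    import numpy as np

    def image_arithmetic(parent, operand_image=None, constant=0.0,
                         operation=np.subtract, clamp=(0, 4095)):
        """Pixel-wise arithmetic with an operand image and/or a constant,
        followed by clamping (sketch; a 12-bit result stays within 0..4095)."""
        result = parent.astype(np.float64)
        if operand_image is not None:
            result = operation(result, operand_image.astype(np.float64))  # image operand first
        result = result + constant                                        # then the constant
        return np.clip(result, clamp[0], clamp[1])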

15.22 Image Balancing


The Image balancing module corrects light sheet non-uniformities that affect the outcome of other anal-
ysis routines. The user selects as input an ensemble of image maps. The output is a correction map the
user can employ to adjust images. The program flow is as follows:

Image balancing is a two-step process. The first step is to create an image balance map that consists of
factors determined from an ensemble of input images. The map is then applied onto individual image
maps, correcting for any strong variations in laser intensity.
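The two-step idea can be sketched as follows (illustrative Python only; the actual balance-map calculation
in DynamicStudio may differ in detail):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def balance_images(images, smooth_cell_size=32):
        """Step 1: build a smoothed mean image and derive per-pixel correction
        factors. Step 2: apply the factors to every image (sketch)."""
        mean_img = np.mean(np.asarray(images, dtype=np.float64), axis=0)
        smooth = uniform_filter(mean_img, size=smooth_cell_size)
        correction = smooth.max() / np.maximum(smooth, 1e-6)   # factors >= 1 in dark regions
        return [img * correction for img in images]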

Step 1: Image balance map


Select an ensemble of image maps that you want to correct. There should be enough images in the ensem-
ble so that a mean image generated would show relatively soft variations in light and limited noise activity.
Select “Image Balance Map” from the list of analysis methods. The image balance map analysis will then
process this data and produce a correction map that you can display.

Smooth cell size: the size of the smoothing matrix. Use larger values for sparse data.

Step 2: Apply balance map


Select an image balance map as fixed input. Select the input dataset you wish to process and select the
analysis "Image Balance Processing".

Pair of unbalanced images (double image):

Correction maps (for frame 1 and frame 2):

Same images after correction (application of correction map):

15.23 Image Dewarping


Images recorded with an off-axis camera will be distorted (warped) due to perspective.
With an imaging model fit (IMF) describing the distortion, images can be de-warped.
Image map dewarping is done by imposing a re-sampling grid in the object plane (i.e. the lightsheet plane).
Using the IMF each of the grid points in the re-sampling grid is mapped to a corresponding point in the
image plane (i.e. the surface of the image sensor). From this calculated pixel position a grayscale value is
derived from the original image and assigned to a pixel in the de-warped image.

l Recipe dialog: Imaging Model Fit


l Recipe dialog: Re-sampling grid
Note: De-warp using Polynomial IMF's
l Recipe dialog: Re-sampling scheme
l Recipe dialog: Fill color outside image
l Recipe dialog: Z-coordinate

15.23.1 Dewarping image maps


Apart from dewarping of measurement images this method is also useful to validate and/or verify the
parameters of an IMF, since dewarping one or more of the calibration images should produce a de-warped
image where calibration markers are well aligned. (See below)
Original (warped) image with perspective distortion.

De-warped image without perspective distortion - use rulers to verify that (0, 0) is aligned with the zero
marker, and that horizontal and vertical marker spacing is correct.
Dewarping of other images is done in exactly the same manner, provided of course they are recorded with
the same camera and the same recording geometry (i.e. neither camera nor lightsheet has been moved
since calibration images were acquired and IMF calculated).
To de-warp an image, select the relevant IMF using the "select method". Move to the image that you wish
to de-warp and select 'Dewarping of Image Map' among the list of possible New Datasets. You should get
a recipe like the one shown below:

Selecting several calibrations and image ensembles will dewarp all images to the same world space area.
Use this feature when the images need to be dewarped to the same world space coordinate system like
for example 2D PIV.
Top

15.23.2 Recipe dialog: Imaging Model Fit (camera calibration)
The topmost entry in the recipe identifies the imaging model fit chosen. If you failed to choose one before,
or chose the wrong one by mistake, click the 'Select' button and identify the IMF that you wish to use for
dewarping. Please make sure that the IMF corresponds to the camera and geometry actually used when
acquiring the image(s) you want to de-warp.
Top

15.23.3 Recipe dialog: Re-sampling grid


The re-sampling grid can be created automatically or defined by the user. When a user defined grid is
selected, clicking the button labeled 'Suggest' will calculate and show the values used for the automatic
grid.
The grid settings used for the automatic grid will include all of the original image in the de-warped image, and
choose the grid spacing so the total number of grid points roughly matches the number of pixels in the original
image, not counting grid points that map outside the original (See example at the top of this page).
User defined grids can be useful to either zoom in on regions of specific interest (see below), and/or if you
wish to compare simultaneous results from two or more cameras, in which case you should use the exact
same grid settings to de-warp each of the images in question.

User defined Re-sampling grid with


-15 mm <= X <= +15 mm and -15 mm <= Y <= +15 mm
Grid spacing 0.15 mm/pix (Left) and Grid spacing 0.75 mm/pix (Right)
In the example above the effect of changing the grid spacing is also shown; The left hand image is de-
warped with the suggested grid spacing of 0.15 mm/pix, while the right hand image is de-warped with a
grid spacing of 0.75 mm/pix, -i.e. 5 times higher producing a de-warped image with 5^2=25 times fewer pix-
els, and correspondingly coarser representation of the original image (Images above are scaled to display
at equal size).
Reducing the grid spacing below the value suggested will of course produce a de-warped image with more
pixels, but very small steps will in general not improve image quality, since the original image does not con-
tain any more information. As a rule of thumb grid spacing less than half the value suggested is a waste of
computer RAM, and only increases processing time of any subsequent analysis without producing any
additional information.
Please Note: Be careful when using polynomial imaging models for dewarping with an automatic or sug-
gested re-sampling grid; Imaging model fits rely on calibration images with calibration markers, which are
not always found or even available near the edge of the images. Strictly speaking the resulting IMF is thus
only valid in the center of the image, and using it near the edge requires extrapolation. With a linear model
such as the DLT this usually gives correct values, but the polynomial model extrapolation may produce
incorrect results.
Since the automatic or suggested re-sampling grid attempts to include all of the original image, it will
attempt to process the image edges and thus extrapolate from the IMF, which may not always work as
expected.
Top

15.23.4 Recipe dialog: Re-sampling scheme
Each of the points in the re-sampling grid is mapped onto the surface of the image sensor using the IMF
chosen. The resulting pixel coordinates will usually NOT be integer, but positioned somewhere inside a
2x2 pixel area on the original image. Grayscale value for the resulting pixel in the de-warped image is then
determined either by bi-linear interpolation, or simply by taking the nearest neighbor within the 2x2 pixel
neighborhood.
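A sketch of the two re-sampling schemes for a single grid point (Python/numpy, illustrative; bounds
checking omitted):

    import numpy as np

    def resample(image, xf, yf, scheme="bilinear"):
        """Grayscale value at the non-integer pixel position (xf, yf) returned
        by the imaging model fit, using bilinear interpolation or the nearest
        neighbour within the surrounding 2x2 pixel area."""
        if scheme == "nearest":
            return image[int(round(yf)), int(round(xf))]
        x0, y0 = int(np.floor(xf)), int(np.floor(yf))
        fx, fy = xf - x0, yf - y0
        return ((1 - fx) * (1 - fy) * image[y0, x0] +
                fx * (1 - fy) * image[y0, x0 + 1] +
                (1 - fx) * fy * image[y0 + 1, x0] +
                fx * fy * image[y0 + 1, x0 + 1])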
Top

15.23.5 Recipe dialog: Fill color outside image


Each of the points in the re-sampling grid is mapped onto the image plane, but some of these may fall out-
side the surface of the image sensor. This will for example be the case when using an automatic re-sam-
pling grid, which attempts to include all of the original in the de-warped image, but may of course also
occur with a user defined re-sampling grid.
Since no information is available to interpolate between, some fixed grayscale value will have to be
assigned, and the user can choose Black (=0, -Default), White (=255 for 8-bit images, or 4095 for 12-bit
images), or an average grayscale value calculated from the original image.

Different fill colors outside image; Black, White and Average gray.
Top

15.23.6 Recipe dialog: Z-coordinate


When mapping points from the re-sampling grid onto the image sensor, X- and Y- coordinates are gen-
erated based on min/max values and step sizes, while the Z-coordinate is normally assumed to be zero.
This corresponds to the center of the lightsheet with most normal IMF's, but if this assumption does not
hold the user may enter a non-zero Z-coordinate either in the log entry of the image properties, or in the
dewarping recipe.
There are two typical reasons for using non-zero Z-coordinates;
-The image being de-warped is actually recorded in front of or behind the lightsheet.
-The calibration target was poorly aligned with the lightsheet, so Z=0 does NOT correspond to the center
of the lightsheet.
Please Note: Non-zero Z-coordinates will affect the resulting de-warped image only if the IMF used is a
3D-model, i.e. generated from an image of a multilevel calibration target, or from a series of images, where
a plane target has been traversed through several Z-positions. Using a 2D-IMF, the Z-coordinate does not
affect the result.
Top

15.24 Image Masking


This method is used to mask images by assigning specific gray-values in regions defined by the user as
being of no interest.
To apply masking you must first define a Mask, using either the analysis method "Define Mask" (on page
284) or a regular image with the Custom Property 'Mask' enabled (See "Custom Properties" (on page
232)).

The mask ensemble must contain either one static mask or N dynamic masks, where N equals the
number of images. Dynamic masks are typically derived from the parent images themselves e.g. using
the "Image Processing Library (IPL)" (on page 329). If you use regular images for masking, nonzero pixels
in the Mask image will identify pixels in the parent image that are to be left untouched, while Zero-valued
pixels in the Mask image identify pixels in the parent image that will be modified.
To mask single-/double-frame images, you must first pre-select the mask as 'User Selection' by selecting
it and pressing the 'Space bar' "Selection (Input to Analysis)" (on page 233). Then you can select the
ensemble with images to be masked and select the analysis method Image Masking. No matter what kind
of mask is used, the image masking recipe is the same:

In the recipe you can specify the gray-value to be assigned to masked pixels in the parent image:

l Black-out areas
l White-out areas
l Mean pixel value (calculated by the software)
l User-defined (fixed) value
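The masking behaviour can be sketched as follows (Python/numpy; names are ours):

    import numpy as np

    def apply_mask(parent, mask, fill="black"):
        """Keep parent pixels where the mask is nonzero, assign a fill value
        where the mask is zero (sketch of the behaviour described above)."""
        if fill == "black":
            value = 0
        elif fill == "white":
            value = 255 if parent.dtype == np.uint8 else 4095   # depends on bit depth
        elif fill == "mean":
            value = parent.mean()
        else:
            value = fill                                        # user-defined (fixed) value
        return np.where(mask != 0, parent, value).astype(parent.dtype)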

The result image is labeled with the icon , clearly showing that a mask was applied to the parent image

( ).
The following examples are based on a top-down view into a square water tank with a magnetic stirrer at
the bottom. The light sheet is horizontal and just above the spinner. In each image the (static) tank walls
can be seen as well as the (moving) spinner at the bottom of the tank:

Using "Define Mask" (on page 284) we can create a mask to remove the walls and anything beyond them:

...please note there is only one mask, which is applied to each of the parent images successively.

Using the "Image Processing Library (IPL)" (on page 329) we can create a series of masks to remove the
spinner:

...please note that a separate mask is created and applied to each of the parent images.

We can apply the static mask to the dynamic ones to create a series of hybrid masks with which both
walls and spinner can be masked out:

... use option 'Black-out areas' in the masking recipe to merge the two masks.
With this 'Hybrid' mask we can remove both walls and spinner:

When creating dynamic masks please note that mask images derived from a double-frame parent will also
become double-frame, while masks derived from a single-frame parent will become single-frame.
A single-frame mask can be applied to both single- and double-frame images and even used for "Vector
Masking" (on page 579), but a double-frame mask can be applied to double-frame images only.
If you've made a double-frame mask, and wish to apply it to a vector map or single-frame image, you must
first extract either frame 1 or 2 using "Make Single Frame" (on page 451), check custom property 'Mask'
again and then use the resulting single-frame mask.

15.25 Image & Volume Math


(In the following "pixel" may also mean "voxel")
This analysis method enables mathematical operations on pixel values. It is slower than "Image Arith-
metic" (on page 306) and "Image Processing Library (IPL)" (on page 329), but it is more flexible. Oper-
ations can be performed on any type of images (for example 8-, 10- or 12-bit images, as well as floating
point images), and can be applied to both single- and double-frame images.
Image Math allows you to manipulate grayscale values in an image by specifying mathematical formulas
to be applied:

The Image Processing to apply is defined in the Output tab.

15.25.1 Inputs
The available input images are referred to by designated names, ParentImage, Image1, Image2, and so
on.
You may type them in or pick from the drop down list of available 'Inputs':

The ParentImage will always be available, whereas other images will be available only if marked as User
Selection before entering the Image Math Recipe (See "Working with the Database" on page 58).
To aid the choice, the database ensemble name is shown on the right after a vertical bar separating the
name and description. The list may also contain intermediate results named Result1, Result2, etc, if such
are computed within the Image Math Recipe (see below).

15.25.2 Scalars
'Inputs' are images with varying gray values across the field of view, but there are also scalars available:

For each of the available input images, you have access to the minimum, mean, and maximum gray scale
value as well as the rms of grayscale values within the image. Scalar Value 'i' describes the frame number
(1 or 2 in a double-frame image), whereas 'x' and 'y' contain the coordinate of each pixel ((x,y)=(0,0) is the
lower left corner regardless of offset and scale factor that may have been defined for the camera in ques-
tion).
Independent of the input image, natural constants 'Pi' and 'e' are available, and you may define and use
scalar variables named V0, V1, V2, etc. You can assign a value to each of these either by specifying that
value directly or by assigning the result of some calculation to the variable. Afterward the variable can be
used as part of the expression defining the output.

15.25.3 Functions
Predefined functions can be applied to all pixels (or numbers derived from them):

The available functions are …


Function Name                           Arguments  Explanation                                   Example
sin(a), cos(a), tan(a)                  1          sine, cosine & tangent                        sin(pi/3)=sqrt(3/4)
asin(a), acos(a), atan(a)               1          inverse sine, cosine & tangent
sinh(a), cosh(a), tanh(a)               1          hyperbolic sine, cosine & tangent
asinh(a), acosh(a), atanh(a)            1          inverse hyperbolic sine, cosine & tangent
log2(a)                                 1          binary (base 2) logarithm
log10(a), log(a)                        1          common (base 10) logarithm
ln(a)                                   1          natural (base e) logarithm
exp(a)                                  1          exponential function, exp(x)=e^x              exp(1)=e
sqrt(a)                                 1          square root
abs(a)                                  1          absolute value
rint(a)                                 1          round to nearest integer
sign(a)                                 1          = 1 if x>0, = 0 if x=0, = -1 if x<0
pow(a;b)                                2          raise a to the power of b; a^b
rnd(a;b)                                2          random number between a & b
min(…)                                  var        min of all arguments                          min(a;b;c;d;e)
max(…)                                  var        max of all arguments                          max(a;b;c;d)
sum(…)                                  var        sum of all arguments                          sum(a;b;c)
avg(…)                                  var        mean of all arguments                         avg(a;b;c;d;e;f)
ParentImage_GetPixelValue               4          gray value of spatio-temporal neighbor pixel
(boolExpr)? valueIfTrue : valueIfFalse  3          if (Boolean Expression) then (valueIfTrue) else (valueIfFalse)

The regular and hyperbolic trigonometric functions are self-explanatory, as are the exponential, the logarithms and the other functions taking a single argument.
When a function call has more than one argument, the arguments must be separated by semicolons ';'.
The power function and the random number generator both take two arguments, but require no further explanation, and min, max, avg, and sum are also obvious.

ParentImage_GetPixelValue extracts pixel values from the parent image (the same function is available for other input images in the form Image#_GetPixelValue, #=1,2,3,…).
GetPixelValue takes 4 arguments (T, Frame#, x, y):…

l T is the image sequence number and relative to the current image, so T=0 refers to the current
image, T=+1 to the next image in the ensemble and T=-1 to the previous image.
l Frame# specifies whether to extract the grayscale value from frame 1 or 2, use 'i' or the value '0'
to access the current frame, or use '-1' to indicate that the other frame in the double frame is to be
used.
l (x;y) specifies the location of the pixel; use (x;y) to access the current pixel, or constructs such as (x-1;y+1) to access neighbor pixels.

Conditional execution can be performed with the 'if … then … else' construct in the form (boolExpr)? valIfT-
rue : valIfFalse.
For example, thresholding could be done with the expression…
(ParentImage<1)? 1: ParentImage … corresponding to … if ParentImage <1 then 1 else ParentImage
(…using max(1;ParentImage) would give exactly the same)
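For readers more familiar with array processing tools, the thresholding expression above corresponds to the following NumPy sketch (Python is used purely as an illustration; the variable names are hypothetical and this is not DynamicStudio's own expression language):

    import numpy as np

    # Hypothetical 12-bit image, stand-in for 'ParentImage'
    parent_image = np.random.randint(0, 4096, size=(1024, 1280), dtype=np.uint16)

    # (ParentImage < 1) ? 1 : ParentImage
    clipped = np.where(parent_image < 1, 1, parent_image)

    # ... which gives exactly the same result as max(1; ParentImage)
    assert np.array_equal(clipped, np.maximum(parent_image, 1))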

15.25.4 Operators
Operators are applied between two operands; the following are supported:…

Operator  Meaning                     Example  Input      Output
+         addition                    a + b    Numerical  Numerical
-         subtraction                 a - b    Numerical  Numerical
*         multiplication              a * b    Numerical  Numerical
/         division                    a / b    Numerical  Numerical
^         raise a to the power of b   a ^ b    Numerical  Numerical
<=        less than or equal          a <= b   Numerical  Boolean
>=        greater than or equal       a >= b   Numerical  Boolean
!=        not equal                   a != b   Numerical  Boolean
==        equal                       a == b   Numerical  Boolean
>         greater than                a > b    Numerical  Boolean
<         less than                   a < b    Numerical  Boolean
&&        logical AND                 a && b   Boolean    Boolean
||        logical OR                  a || b   Boolean    Boolean
'Numerical' input and output could be pixel grayscale values (integer or floating point), whereas 'Boolean'
input will interpret 'zero' as 'false' and nonzero as 'true'. Boolean output will use '0' for false and '1' for true
and can of course be used as an argument in e.g. 'if … then … else …' constructs.

15.25.5 Error description


If the expression in the formula entry field cannot be evaluated an error description will be shown.

15.25.6 Output Selection


(Voxel volumes are always floating point volumes, therefore it is not possible in Volume Math to select
Output type)
Image Math takes image(s) as input and generates image(s) as output. The analysis is performed in float-
ing point, but the final output can be chosen freely. You will typically use 'Same as parent', but may spec-
ify Floating point images or choose a specific bit depth:

15.25.7 Example
Here's an example of nonlinear scaling of grayscale values that can be accomplished using Image Math:
Each pixel of a 12-bit image is replaced by the square root of itself multiplied by 4095 (=maximum gray
value for a 12-bit image):
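Expressed outside DynamicStudio, the same rescaling can be sketched in NumPy (purely illustrative; the array name and the final clipping step are assumptions, not part of the recipe):

    import numpy as np

    img12 = np.random.randint(0, 4096, size=(512, 512)).astype(np.float64)  # stand-in 12-bit image

    out = np.sqrt(img12 * 4095.0)                            # sqrt of (pixel * 4095); maximum stays 4095
    out = np.clip(np.rint(out), 0, 4095).astype(np.uint16)   # back to a 12-bit integer image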

A similar visual effect can be accomplished by using a nonlinear LUT (lookup table), such as 'Hyperbolic' or 'Gamma' in the display options of the image. This does not, however, change the numerical contents of the image, but only affects its on-screen display. Using Image Math as shown here will change the grayscale values of the image and thus also affect subsequent analysis.

15.25.8 Advanced processing
The default recipe for Image Math contains tabs, named 'Output' and 'New'. The text above describes the
content of 'Output'.
It is possible to create and store intermediate result images from within the Image Math recipe. This is
done using a tab for each of the intermediate image results that you need.
Clicking 'New' will create a new tab named 'result1', clicking 'New' again will create yet another tab
named 'result2' and so on:

Each of these tabs looks and works exactly the same way as the 'Output' tab described above, except
that 'Output' is replaced by an intermediate image named 'result1', 'result2', etc.
The tabs are processed from left to right, and each tab has access to the results from all previous tabs.
With two intermediate results as shown above, 'result2' will have access to 'ParentImage' and 'Image1', 'Image2' etc. (if any other images are 'User Selected'). In the tab 'result1' you will have access to the same data PLUS 'result2', and in the final 'Output' tab you can access the usual data as described above, PLUS 'result2' & 'result1'.

15.26 Image Mean


Calculates the average intensity of corresponding pixels in all the selected images. A minimum selection
of two images is required. The resulting output image is placed under the highlighted dataset (the one
selected last).
Notes

l "Corresponding pixels" means "pixels with identical x- and y-coordinates" starting with pixel(1,1)
in the lower left corner
l Make sure that the input images are compatible in terms of image dimensions and grayscale resolution. When trying to average images of different dimensions a warning will be issued. The output image inherits the dimensions from the parent dataset; inputs of different dimensions are cropped or zero-padded.
l The analogue input values displayed in the "Mean Pixel Values" image are the average over the
analogue input values of the corresponding channels in the individual images.
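Conceptually the calculation corresponds to the following NumPy sketch (illustrative only; the random input stack stands in for the selected images):

    import numpy as np

    images = [np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16) for _ in range(50)]

    stack = np.stack(images).astype(np.float64)   # shape (N, height, width)
    mean_image = stack.mean(axis=0)               # pixel-wise average of corresponding pixels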

15.26.1 Application example


In PIV "Mean Pixel Values" is very useful for generating an image of the common background in inde-
pendntly acquired images.

(A) Single image as acquired. (B) The average of 50 images using Mean Pixel Values; note the apparent "hot pixel" CCD-sensor damage in the lower right part of the image. (C) Single image (A) with the common background (B) removed using Image Arithmetic.

15.27 Image Min/Mean/Max


The 'Image Min/Mean/Max' method is located in the "Image Processing" category. It is used to compute
power mean greyscale values from a series of images.
The Power Mean (or generalized mean) Mp with exponent 'p' of the positive real numbers x1, ..., xn is
defined as:

-which for p approaching minus infinity will return the minimum of all x-values and for p approaching plus
infinity will return the maximum.
M1 (p=1) is the conventional (arithmetic) mean, and in the limit of p approaching 0 we get the geometric
mean:
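For reference, the power mean and its p→0 (geometric-mean) limit are, in standard notation:

    M_p(x_1,\dots,x_n) = \left( \frac{1}{n} \sum_{i=1}^{n} x_i^{\,p} \right)^{1/p},
    \qquad
    M_0(x_1,\dots,x_n) = \lim_{p \to 0} M_p = \left( \prod_{i=1}^{n} x_i \right)^{1/n}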

The recipe supports predefined p-values of:


p = +∞ (Maximum)
p = 2 (Quadratic Mean)
p = 1 (Arithmetic Mean)
p = 0 (Geometric Mean)
p = -1 (Harmonic Mean)
p = -∞ (Minimum)

The Power means for a given series of values can be ordered as follows:
Maximum >= Quadratic >= Arithmetic >= Geometric >= Harmonic >= Minimum
The formula for power mean is defined with positive x-values in mind, but Maximum, Quadratic, Arithmetic
and Minimum can be computed for negative values as well.
The Geometric and Harmonic means have been designed to return zero if just a single non-positive greyvalue is found among the input greyvalues.
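A minimal NumPy sketch of the pixel-wise power mean over a stack of images (illustrative only; positive gray values are assumed and the special handling of non-positive values described above is omitted):

    import numpy as np

    def power_mean(stack, p):
        """Pixel-wise generalized mean along axis 0 of an (N, H, W) stack."""
        if p == 0:                                    # geometric mean as the p -> 0 limit
            return np.exp(np.log(stack).mean(axis=0))
        return np.power(stack, p).mean(axis=0) ** (1.0 / p)

    stack = np.random.rand(50, 512, 512) + 1e-6       # hypothetical series of 50 images
    harmonic  = power_mean(stack, -1.0)
    quadratic = power_mean(stack,  2.0)
    maximum   = stack.max(axis=0)                     # p -> +infinity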

Example of input image of a flame

Power means of a series of flame images such as the one shown above.
From top left to bottom right:
Maximum, Quadratic, Arithmetic,
Geometric, Harmonic, Minimum

Histograms of the power mean images above (with logarithmic y-axis)

15.28 Image Processing Library (IPL)
The filters featured in the IPL module can be used to smooth images (Low-pass), detect edges (High-
pass), enhance image contrast (Low-pass & Morphology) as well as for non-linear calculations (Signal
processing). It also includes various image-processing tools (Utility and Threshold). Finally a Custom
Filter is available to allow filtering with user defined filter kernels.

15.28.1 Low-pass filters


Mean filters
The mean (NxN) filter is the simplest linear, local filter used to smooth images. This filter does not take
spatial gradients inside the kernel into consideration. Thus, for applications related to fluid mechanics, ker-
nel sizes of (3x3) or (5x5) are recommended. Larger kernel sizes may significantly increase numerical dif-
fusion.

The NxN mean filters use very simple convolution kernels:


3x3: 1 1 1
1 1 1 /9
1 1 1

5x5: 1 1 1 1 1
1 1 1 1 1
1 1 1 1 1 /25
1 1 1 1 1
1 1 1 1 1

7x7: 1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1 /49
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 1

9x9: 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 /81
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1

Median filters
Median filters are non-linear filters that sorts the (NxN) elements by intensity (grayscale or scalar value
such as concentration or temperature) and replace the center pixel of the kernel by the median value. The
median filter thus eliminates high-frequency noise but preserves edges. For applications related to fluid
mechanics, the median filter is better suited than the mean filter. On the other hand, for applications
related to solid state physics (e.g. surface deformation), mean filters are generally recommended

Minimum & Maximum filters


Maximum and minimum filters are another kind of nonlinear filter used to remove localized high-frequency noise.

Gaussian filters
The Gaussian filter is another type of linear filter. With this operator, the new value at the center pixel of the kernel is calculated as a weighted average, with weights following a two-dimensional Gaussian distribution. As opposed to other linear low-pass filters, this filter weighs the grayscale value at the center of the kernel higher than those near the edges.

 The NxN Gaussian filters use the following convolution kernels:


3x3: 1 2 1
2 4 2 /16
1 2 1

5x5:  2   7  12   7   2
      7  31  52  31   7
     12  52 127  52  12  /571
      7  31  52  31   7
      2   7  12   7   2
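Equivalent filters are available in common image-processing toolkits; as a purely illustrative sketch in Python/SciPy (kernel sizes and boundary handling here are assumptions, not DynamicStudio settings):

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(512, 512)                       # stand-in image

    mean3   = ndimage.uniform_filter(img, size=3)        # 3x3 mean filter
    median3 = ndimage.median_filter(img, size=3)         # 3x3 median filter
    min3    = ndimage.minimum_filter(img, size=3)        # 3x3 minimum filter
    max3    = ndimage.maximum_filter(img, size=3)        # 3x3 maximum filter
    gauss   = ndimage.gaussian_filter(img, sigma=np.sqrt(0.5))  # roughly the 3x3 Gaussian above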

15.28.2 High-pass filters


All high-pass filters available in the module are used for edge identification. The result obtained depends
on the numerical recipe applied:

High-Pass filters

The NxN High-Pass filters use the following convolution kernels, to subtract the local mean (albeit without
the divisor used for the NxN kernels described above):
3x3: -1 -1 -1
-1 8 -1
-1 -1 -1

5x5: -1 -1 -1 -1 -1
-1 -1 -1 -1 -1
-1 -1 24 -1 -1
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
The lack of a divisor may lead to grayscale overflow and the 5x5 high-pass filter will in general produce
brighter images than the 3x3 high-pass as can be seen in the examples above.

Prewitt filter

The Prewitt filter is in fact a combination of two filters, each using a 3x3 convolution kernel to estimate hor-
izontal and vertical gradients respectively:
Horizontal, Gx:        Vertical, Gy:
-1  0  1                1  1  1
-1  0  1                0  0  0
-1  0  1               -1 -1 -1
Having estimated the (signed) gradients in both horizontal and vertical direction, the final result is calculated as the total magnitude of the local gradients by taking the square root of the sum of the squared gradients:
Prewitt = sqrt( Gx² + Gy² )

Roberts

The Roberts filter also calculates local grayscale gradients in two orthogonal directions and returns the
total magnitude of local grayscale gradients. The gradients are however determined in directions of ±45
degrees using 2x2 convolution kernels:
+45°, G+45:        -45°, G-45:
 0  1               -1  0
-1  0                0  1
Again the final result is calculated as the total magnitude of the local gradients by taking the square root of the sum of the squares:
Roberts = sqrt( G+45² + G-45² )

Sobel

The Sobel filter also calculates local gradients, again using two 3x3 convolution kernels to estimate horizontal and vertical gradients respectively. The Sobel filter differs from the Prewitt filter by assigning higher weight to the center of the kernel than to the corners:
Horizontal, Gx:        Vertical, Gy:
-1  0  1                1  2  1
-2  0  2                0  0  0
-1  0  1               -1 -2 -1
Having estimated the (signed) gradients in both horizontal and vertical direction, the final result is calculated as the total magnitude of the local gradients by taking the square root of the sum of the squared gradients:
Sobel = sqrt( Gx² + Gy² )
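A purely illustrative Python/SciPy sketch of the gradient-magnitude idea behind the Prewitt and Sobel filters (SciPy's scaling conventions differ slightly from the raw kernels shown above):

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(512, 512)

    gx = ndimage.sobel(img, axis=1)          # horizontal gradient
    gy = ndimage.sobel(img, axis=0)          # vertical gradient
    sobel_mag = np.sqrt(gx**2 + gy**2)       # total gradient magnitude

    px = ndimage.prewitt(img, axis=1)
    py = ndimage.prewitt(img, axis=0)
    prewitt_mag = np.sqrt(px**2 + py**2)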

Laplacian

The 3x3 Laplacian filter is in fact identical to the 3x3 High-Pass filter, while the 5x5 Laplacian can be inter-
preted as a 3x3 Gaussian followed by a 3x3 Laplacian:
3x3: -1 -1 -1
-1 8 -1
-1 -1 -1

5x5: -1 -3 -4 -3 -1
-3 0 6 0 -3
-4 6 20 6 -4
-3 0 6 0 -3
-1 -3 -4 -3 -1
Again the lack of a divisor means that results from a 5x5 kernel are in general brighter than those from a 3x3 kernel.

15.28.3 Morphology filters


Morphology filters are a class of nonlinear filters, which in their most basic form correspond to the minimum and maximum filters. Combining these in different ways can however produce more advanced results.

Dilation & Erosion filters


A Dilation filter will let bright pixels flood darker neighbors within a 3x3 neighborhood. This is basically the
same as the Maximum filter, but applying the filter repeatedly you can effectively produce kernels sig-
nificantly larger than supported by the Maximum filter.
An Erosion filter works the opposite way by letting dark pixels flood brighter neighbors within a 3x3 neigh-
borhood. This is basically the same as the Minimum filter, but applying the filter repeatedly you can again
produce much larger filtering kernels.
Iterations 1 2 3 4 5 ... N
Kernel size 3x3 5x5 7x7 9x9 11x11 ... 2N+1 x 2N+1

Opening & Closing filters


The Opening filter is a combination of N Erosions followed by N Dilations. Bright areas smaller than the
kernel will disappear and neighboring dark areas will tend to merge. (Kernel size depends on N the same
way as explained above for Dilation and Erosion filters). For images with small bright particles on an other-
wise dark background, the opening filter can be used for background estimation, since a sufficiently large
kernel will ensure that all (isolated) particle images are removed.
The Closing filter does the opposite by starting with N Dilations followed by N Erosions. Dark areas
smaller than the kernel will disappear and neighboring bright areas will tend to merge. This can also be
used for background estimation if you're looking at dark objects on a bright background.

Tophat & Blackhat filters


The Tophat and Blackhat filters compute the difference between the original image and its Opening or Closing respectively:

o Tophat(Image) = Image - Opening(Image) ... also known as "White Tophat" filter
o Blackhat(Image) = Closing(Image) - Image ... also known as "Black Tophat" filter

As described above the Opening filter can be used for background estimation when applied to images of
bright particles on a darker background. The Tophat filter subtracts the opening from the original image, so
in effect it is used for background removal. This requires of course that no particles remain after the open-
ing, since otherwise they will disappear when subtracting the estimated background from the original
image.
The Blackhat filter does exactly the same, but looks for dark objects on a brighter background. As a side-
effect the output is inverted compared to the original so it will show bright objects on a black background.
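The same operations exist in standard toolkits; a minimal Python/SciPy sketch of background estimation and removal with Opening/Tophat (kernel size and the example image are assumptions):

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(512, 512)             # bright particles on a dark background assumed
    n = 5                                       # N iterations of 3x3 erosion/dilation ...
    size = 2 * n + 1                            # ... correspond to a (2N+1) x (2N+1) kernel

    background = ndimage.grey_opening(img, size=size)         # N erosions followed by N dilations
    tophat     = img - background                              # "White Tophat": background removed
    blackhat   = ndimage.grey_closing(img, size=size) - img    # "Black Tophat"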

Shock filter
The Morphological Shock filter is also known as morphological sharpening. In its most basic form the filter
compares each pixel to the minimum and the maximum greyscale value in its neighborhood and returns
whichever is closer. If the current pixel value is exactly midway between the local minima and maxima the
pixel keeps its current value. A "raw" shock filter is very sensitive to noise and to reduce this sensitivity a
median filter is applied first. The min/max (erosion/dilation) filter is then applied to the median filtered
image and the local min or max returned as described above.

Gradient filter and Distance Transform


The morphological gradient filter is calculated as the difference between a 1-pass Dilation and a 1-pass
Erosion.
For all pixels in the image the Distance transform will measure the distance to the nearest zero-valued
neighbor pixel. Pixels that are already zero will thus remain zero, while nonzero pixels will have their grays-
cale value replaced with the approximate distance to the nearest zero-valued pixel.

15.28.4 Thresholding
Remove Outliers
The thresholding filter Remove Outliers will allow you to set upper and lower limits on the accepted grays-
cale values in the image. Grayscale values outside the specified limits can be set to the limit values or set
to the minimum and maximum values supported by the image (f.ex. 0 and 255 for an 8-bit image or 0 and
4095 for a 12-bit image as in the example below).

15.28.5 Utility filters


Image analysis often requires image manipulation like rotation, scaling and pixel shift among many other
operations. The IPL module contains many methods to help you manipulate single- and double-frame
images.

Rotate
The method 'Rotate' will rotate the parent image around its own center as specified in the properties for
the analysis:

Positive angles rotate the image counterclockwise and negative angles rotate clockwise.
The grayscale values of the resulting image are computed by interpolating between neighboring pixels in
the parent image and you may choose interpolation methods 'Nearest Neighbor', 'Linear', 'Cubic' or 'Cat-
mull-Rom'.

By design the image size is maintained, meaning that corners of the original image will often be cut, while
corners of the derived image will be black since they originate from outside the parent image. Both of
these effects can be seen in the example above.

Scale
The method 'Scale' will scale the image horizontally and/or vertically by user defined scaling factors:

Horizontal and Vertical Scale Factor are set independently and values above one will increase image size
while values below one will reduce it.
As for 'Rotate' output pixel values are computed by interpolating between neighboring pixels in the parent
image and you may choose interpolation methods.

As opposed to 'Rotate' 'Scale' will in fact change the size of the image so the derived image will typically
be larger or smaller than its parent.

Shift
The method 'Shift' will move pixels in the parent image left/right and/or up/down according to user spec-
ifications:

Horizontal and Vertical translations are set independently and may both be positive or negative, but must
be integer pixel counts.

The image size is maintained and as shown above part of the parent image will be lost as it moves outside
the frame while parts of the child image will become black as it originates from outside the parent.

Mirror
The method 'Mirror' will mirror the parent image Left/Right, Up/Down or both as specified in the properties:

If you fail to check at least one of the options you will get an error message when you try to apply the anal-
ysis.

The top left image shows the parent image, the rest are mirrored Left/Right, Up/Down or Both.

So far the 'Utility Filters' have manipulated the position of pixels, but tried to change grayscale values as
little as possible.
The remaining methods in this group will leave pixels where they are, but change the grayscale values:...

Invert Pixel Values


The method 'Invert' will produce a negative image, where black pixels become white and vice versa:

The method adapts to the grayscale depth of the parent image and has no user defined settings.

For display purposes a similar effect can be accomplished via the "Color map and histogram " (on page
676), but without changing the grayscale values of the image.

Pixel Normalization
Normalizing grayscale values is a useful preprocessing step if for example you wish to perform thresh-
olding, but cannot find a suitable threshold value to apply across the entire image.
Classic normalization is performed by subtracting the Mean and dividing by the Rms derived from some
meaningful neighborhood around each pixel. Traditional Mean and Rms calculations are however quite
sensitive to noise, so in practice the Mean is replaced by the Median (MED) and the Rms replaced by
Median Absolute Deviation (MAD), both of which are much more robust statistical quantities:
gOut = (gIn − MEDΩ) / max(ε, MADΩ) · ε
where gIn and gOut are the input and output grayscale values of the pixel in question, and MEDΩ and MADΩ are the Median and Median Absolute Deviation in the spatial neighborhood Ω around this pixel. The minimum noise level ε is included to avoid division by (almost) zero in areas with more or less constant grayscale values.
The output image will inherit the grayscale depth from its parent; floating point images are not supported.
Since the input is divided by ε or higher we scale up the result by ε to preserve dynamic range as much as possible.
If the background follows a Gaussian distribution the standard deviation σ is related to the MAD as:
σ ≈ 1.4826 × MAD
The classical interpretation of S/N-Ratio assumes implicitly that noise is Gaussian and that data has been normalized by division with σ, meaning in this case that output pixels with grayvalue 1.4826 × ε can be considered to have a S/N-Ratio of 1. For practical purposes you can multiply by 1.5, so if for example ε=4, grayscale values of 1.5 × 4 × 6 = 36 are 6 times higher than typical fluctuations in the neighborhood. Depending on the experiment, grayvalues significantly above a certain threshold can either be interpreted as shot noise or identify a signal (e.g. a particle) that rises significantly above the noise floor.
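A minimal Python/SciPy sketch of this median/MAD normalization (illustrative only: the kernel size and ε are example values, and the double median pass used to suppress square-kernel artifacts, described below, is omitted):

    import numpy as np
    from scipy import ndimage

    def normalize(img, ksize=9, eps=4.0):
        img = img.astype(np.float64)
        med = ndimage.median_filter(img, size=ksize)                 # local median (background)
        mad = ndimage.median_filter(np.abs(img - med), size=ksize)   # local median absolute deviation
        out = (img - med) / np.maximum(eps, mad) * eps
        return np.clip(out, 0.0, None)    # pixels darker than the local median truncate at zero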

The neighborhood over which the Median (and the MAD) is computed is chosen as 'Kernel Size' in the Nor-
malization properties:

A median filter with a square kernel may produce horizontal and/or vertical artifacts in the output and to
remove those the median filter is in fact applied twice. This means that we are in fact computing the
median of medians.
Kernel Size must be odd and can be chosen from 3x3 to 15x15 pixels. Statistically bigger is better, but
especially for 10- or 12-bit images median filtering with large kernels is time-consuming, and most often
Kernel Sizes of 9x9 or 11x11 are sufficient. The smallest kernel size of 3x3 is not recommended except for
sparsely seeded flows where particle images are on the order of 1 pixel in diameter, and even there a 5x5
median filter will be much more reliable and only marginally slower.

Please note that pixels darker than the median (background) will be truncated at zero. This is OK if you're
looking at bright particles on a dark background, but if you're looking at dark objects (shadows) on a bright
background you should invert the image before normalizing it.

The minimum noise level ε can be chosen as 'Auto', 2, 4, 8 or 16.


'Auto' will pick the value 4 for an 8-bit parent image and 8 for images with pixel depth larger than 8 bits.

Peak Search
As implied by the name, the method 'Peak Search' looks for greyscale peaks in the input image. Technically 'Peaks & Plateaus' would be more correct, since the method identifies pixels that are brighter than or equal to their neighbors in a defined neighborhood:

The neighborhood is octagonal with a diameter of 3, 5, 7, 9, 11, 13 or 15 pixels (Ø3 is in fact 3x3 square).
To avoid detecting bumps in the noise floor the grayscale value must also be greater than or equal to the
specified threshold.
A threshold value of 0 means 'Automatic' and the code will choose 24 for an 8-bit parent image and 48 for
images with greater bit depths.
For floating point images there is no logical 'Automatic' choice, so the user must actively choose a value
and Threshold=0 actually means 0 in this situation.

For floating point input images the output will be 8-bit, otherwise the output image inherits the grayscale
depth from its parent.
Please note that output is binary containing only 0's and 1's, so the image will typically appear all-black
until the "Color map and histogram " (on page 676) is adjusted.
Note also that many of the 1's will be isolated pixels, that may not show if the image is displayed at less
than 100% zoom.

Above we've zoomed in on part of an image from a PIV experiment. Topmost is the input image, inverted
for display purposes showing gray particles on a white background. In the middle output from the Peak
Position analysis is shown and at the bottom the two images are overlaid.
As shown, clusters of 2 or more neighboring 1's may occur, for example in regions where the camera has been saturated; post-processing may be required if this is a problem.
In many cases preprocessing of the images will be required before a peak search is attempted. You could
for example apply Pixel Normalization as described above. With PIV images of reasonable quality a sim-
ple (and much faster) high-pass filter might suffice.
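A purely illustrative Python/SciPy sketch of such a peak/plateau search (a square window is used here for simplicity, whereas the built-in method uses an octagonal neighborhood; diameter and threshold are example values):

    import numpy as np
    from scipy import ndimage

    def peak_search(img, diameter=5, threshold=48.0):
        local_max = ndimage.maximum_filter(img, size=diameter)   # brightest value in the neighborhood
        peaks = (img >= local_max) & (img >= threshold)           # peaks/plateaus above the threshold
        return peaks.astype(np.uint8)                             # binary output containing only 0's and 1's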

15.28.6 Signal processing


The Discrete Cosine Transform (DCT) is used to transform intensity levels into the (amplitude, frequency) domain.
The Inverse Discrete Cosine Transform performs the opposite transformation.

15.28.7 Custom filter


Finally the image processing library offers you the possibility to apply a linear filter with a convolution ker-
nel of your own design. Square, odd-sized kernels from 3x3 to 15x15 are supported and filter coefficients
can be either floating point or integer values. In the former case processed images will also be floating
point, while the latter will produce images with the same grayscale depth as the parent image. Grayscale
values will be truncated at zero and the upper limit (f.ex. 255, 1023 or 4095 for 8-, 10- or 12-bit images
respectively). To avoid or reduce overflow you can specify a filter divisor when you've chosen integer filter
values.

The example above is of little practical use, but illustrates well how the custom filter is used. A very simple filtering kernel with just two nonzero filter coefficients, 9 pixels apart, will shift the original image 4 pixels left and right, add the two shifted images and finally divide by two as specified in the custom filter properties.
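A hypothetical reconstruction of that example in Python/SciPy (the kernel layout, divisor and boundary handling are stated here as assumptions, to illustrate the principle):

    import numpy as np
    from scipy import ndimage

    kernel = np.zeros((1, 9))
    kernel[0, 0] = kernel[0, 8] = 1.0     # unit coefficients at both ends of a 9-pixel-wide kernel

    img = np.random.rand(512, 512)
    out = ndimage.convolve(img, kernel / 2.0, mode='constant', cval=0.0)

    # Same idea without a convolution (edge handling differs):
    # out = (np.roll(img, 4, axis=1) + np.roll(img, -4, axis=1)) / 2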
A number of examples are shown below, illustrating how many of the built-in filters are implemented using
linear filtering kernels...

The 3x3 Mean filter simply adds grayscale values in a 3x3 neighborhood and divides the sum by 9:

The 3x3 Gaussian filter multiplies each grayscale value in a 3x3 neighborhood by 1, 2 or 4, adds the result-
ing numbers and divides the sum by 16:

The classic 3x3 Gaussian above has a variance of 0.5 pixels, i.e. a standard deviation of sqrt(0.5)=0.71.
Applying the Gaussian repeatedly the variance adds up, so using the 3x3 Gaussian twice should cor-
respond to a single Gaussian with a variance of 2x0.5=1.0, i.e. standard deviation of 1.0. You must how-
ever anticipate roundoff errors to accumulate when applying the same (Gaussian) repeatedly and you can
accomplish the same more accurately by using a single convolution with the following 5x5 kernel:

Applying this kernel repeatedly, the variance also adds up, increasing by one each time. Again roundoff errors will accumulate, and using the 5x5 kernel above twice is the same as applying this 9x9 kernel once:

You can in principle keep increasing the size of the kernel, but the largest kernel supported is 15x15 and
already there it becomes impractical to type in each of the coefficients. Please note you can copy/paste
them to/from Excel or similar.

The 3x3 High-Pass filter multiplies the central grayscale value by 8 and subtracts the grayscale value of
all remaining pixels in the 3x3 neighborhood. This corresponds to subtracting the local mean, so below it is

suggested to divide by 9. That is not done by the built-in filter so there is a risk that the filter might produce
results exceeding the grayscale depth of the parent image and thus lead to truncation.

Finally, below is an example of a filter that is not built in. It is in fact the result of a 3x3 Gaussian followed by a 3x3 Laplacian filter, which can be implemented as a single 5x5 filter as shown below.
The filter divisor is set to the sum of all positive filter elements to make 100% sure that no grayscale over-
flow will take place. If applied to f.ex. particle images that are often quite small, it might produce very dark
images and the divisor can be reduced to f.ex. 28 (= the core value of the filter) in order to produce brighter
particle images.

This filter will reduce low-frequency background variations, while highlighting high-frequency grayscale
variation from f.ex. particle images or edges.

15.29 Image Resampling


15.29.1 Re-sampling window
Using the re-sampling option, scalar images can be sampled according to a grid whose cell size is defined by the user (parameter 'Stepsize'). Select the 'Default' button to get the min/max pixel values for the camera used and refine the grid geometry with the options available; namely the type of grid (square/rectangular), the size of the cells (in pixels) and the region of interest (X and Y coordinates).

This re-sampling calculation is particularly interesting when overlapping two cameras, e.g. for cross-correlation calculations with a PIV/LIF set-up (see the Reynolds flux method). Simply select the vector map describing the view correction and always remember to correlate from the PIV-camera to the LIF-camera, so the transformation is performed in the proper direction when re-sampling LIF images. For further details and "how-to" help on this topic please refer to the LIF manual.

15.29.2 Re-sampled maps


Scaling on the resulting re-sampled scalar maps can be added: Right-click with the mouse on the map and
select the option 'Info box'. The coordinate system (Pixel or SI) can be added in a similar way, selecting
the appropriate 'Rulers' option.

15.30 Image Resolution


This method is used to modify the pixel (bit) resolution of raw single- or double-frame images (e.g. 10-bit to 8-bit). Utilization of the method is simple: Select or multi-select the image(s) of interest and call the method 'Reduce pixel resolution' located in the category 'Image conversion'. Set the new pixel bit resolution, press the 'Apply' and 'Display' buttons to preview the result and then the 'Ok' button to accept it (and extend the re-sampling to all images when using multiple selections).

Image bit resolution is changed easily: Set the new bit-resolution and press the 'Ok' button to accept the
re-sampling.

15.31 Image RMS


Gives a measure for the statistical spread of the pixel values for independently acquired images; the
method calculates the standard deviation of corresponding pixels from their mean value. The result is delivered as an integer, since the dataset is treated as an image.
A minimum selection of two images is required. The resulting output image is placed under the highlighted
dataset (the one selected last).

Notes

l "Corresponding pixels" means "pixels with identical x- and y-coordinates" starting with pixel (1,1)
in the lower left corner.
l Make sure that the input images are compatible in terms of image dimensions and grayscale res-
olution. When trying to calculate the RMS pixel values for images of different dimensions a warn-
ing will be issued. If you proceed, the output image will inherit the dimensions from the parent
dataset; input images of different dimensions are cropped or zero-padded accordingly.
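Conceptually the calculation corresponds to this NumPy sketch (illustrative only; the random stack stands in for the selected images):

    import numpy as np

    images = [np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16) for _ in range(50)]

    stack = np.stack(images).astype(np.float64)
    rms_image = np.rint(stack.std(axis=0)).astype(np.uint16)   # per-pixel standard deviation, stored as an integer image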

15.32 Image Stitching
Image stitching combines images of the same dimensions into a single large image map. The resulting image map is a matrix of the input image maps. The advantage of stitching is that the user can combine images from several fast lower-resolution cameras, as opposed to acquiring with a single slow high-resolution camera.
Setup is as follows:

l User selects image maps as input.


l The number of column and row images is specified.
l For each element in the matrix the user selects the input image map to position.
l Offsets are applied if necessary, relative to the boundaries of each image element in the
matrix.

The interface for images differs from vector stitching in that stitching images in the literal sense means aligning images side by side in a matrix fashion, whereas vectors are merged (technically speaking you can't stitch vectors). This is, of course, a vast simplification since no mention is made of overlaps, rotation or translation when aligning cameras. A simple image matrix of up to 4 x 4 images is supported, where the user can apply small correction offsets. Images "own" the matrix element they are assigned to. Any image overlap is automatically cropped.
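A minimal NumPy sketch of the basic idea, tiling same-sized images into a rows x columns mosaic (offsets, overlap cropping and the dialog settings are left out; names are illustrative):

    import numpy as np

    def stitch(images, rows, cols):
        h, w = images[0].shape
        out = np.zeros((rows * h, cols * w), dtype=images[0].dtype)
        for idx, img in enumerate(images):
            r, c = divmod(idx, cols)                       # fill the matrix row by row
            out[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
        return out

    tiles = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(4)]
    mosaic = stitch(tiles, rows=2, cols=2)                 # 2 x 2 mosaic, 960 x 1280 pixels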

Final image layout: setup governing the layout of images in the final image map.
Row layout: number of rows in the image map matrix.
Column layout: number of columns in the image map matrix.

Applying offsets (x=+50, y=+20) as seen in the diagram below for a 1 x 2 element image matrix:

15.33 Imaging model fit


Content:

l Normal use and background


l Target Library & Custom Targets
l Acquiring calibration images
l The recipe for Imaging model fit
l Displaying imaging model parameters

15.33.1 Normal use and background


The Imaging model is a mathematical model that describes how points in "Object Space" (typically using
mm-coordinates) is transferred to the "Image Plane" (where positions are measured in pixel coordinates).
Depending on the choice of imaging model the transformation may describe perspective, lens distortion
(e.g. barrel or pincushion distortion) or some other arbitrary spatial distortion. All imaging models have a
number of parameters and determining the value of these parameters in a specific image acquisition setup
is referred to as "Imaging Model Fit", since parameters are fitted to give the best possible match between
corresponding coordinates in object space and image plane.
The purpose of the Imaging Model Fit is to facilitate later measurements in real world metrics (e.g. mil-
limeter in object space) on the basis of pixel coordinates in acquired images.
An Imaging Model Fit is thus a required input to the following numerical methods: Stereo PIV Vector Proc-
essing, Image Dewarping, Vector Dewarping, IPI Particle Sizing

A full 3D model describes how points in the object space (world) are mapped onto the image plane (sen-
sor-chip) of the camera, while a 2D model describes only the mapping of points from the object plane
(light-sheet) to the image plane.
Both types of imaging models include a number of parameters, which uniquely determines the object-to-
image mapping for a particular image recording setup. Moving the camera or the light-sheet or otherwise
changing the image recording setup will require the imaging model parameters to be recomputed.

DynamicStudio supports a number of imaging models, each with their strengths and weaknesses.
Below is a short description of the basic principle of each imaging model.

Imaging model: "Direct Linear Transform (DLT)" (on page 359)
Direct linear transform is a linear affine transform to describe the position, rotation, scaling and perspective of the object space.
Pros: Simple to use, requires only one image for 2D.
Cons: Strictly linear transform, unable to model any non-linear distortion. For 3D calibration a precise traverse system is needed.

Imaging model: "3'rd order XYZ polynomial imaging model fit" (on page 360)
A polynomial in 3D that will transform the object space to the image plane. This model does not relate to physical measures like translation and rotation of the object space.
Pros: Simple to use, requires only one image for 2D. Capable of handling severe distortion caused by the lens or curved windows.
Cons: For 3D calibration a precise traverse system is needed. Fails with a multilevel target since 2 Z-levels is not enough. Poor extrapolation beyond the target markers found.

Imaging model: "Pinhole camera model" (on page 361)
Is sort of a combination of the other two. Hence it uses a DLT in combination with a polynomial, the latter is used to model the lens distortion by using a simplified lens model (a.k.a. the pinhole model with radial and tangential distortion).
Pros: Does not require a precise traverse system as it does not need knowledge of the Z-coordinates of the target plate. The model relates to physical properties of the imaging system (focal length, optical axis, translation, rotation, etc...). Can to some extent extrapolate beyond the target markers found.
Cons: In general this model requires more images than the other two.

Imaging model: "Telecentric camera model" (on page 363)
Mathematically identical to the Pinhole camera model, with some constraints to handle imaging without perspective.
Pros: Capable of describing telecentric lenses without perspective.
Cons: Requires multiple images from multiple cameras/views and thus available only via 'Multi-Camera Calibration'.

For some imaging models the parameters can in principle be calculated from known angles, distances,
etc, but in practice this approach is not feasible. In the actual laboratory setup it is often difficult if not
impossible to measure these angles, distances and so on with sufficient accuracy. Therefore all imaging
model fits require images of a calibration target, which are analyzed to determine the imaging model param-
eters.
Internally the Imaging model fit method consists of two separate tasks. The first task is to identify the
pixel coordinates of known points in object space. The second task is to estimate or fit the model that will
transform the 3D object points to 2D image points. The first task is achieved by placing a calibration target
with identifiable target markers. The software will automatically detect these markers in the image, but the
position of the target markers (in object space) need to be known. The second task of fitting the imaging
model is carried out using a modified least square method.
2D imaging models require that the calibration target is aligned with the light sheet, and usually a single
image is sufficient.
For a full 3D imaging model you must either use a multilevel target or acquire multiple images of a plane tar-
get positioned in front of, centered in and behind the light-sheet. In at least one of these images the target
should be aligned with the lightsheet and with the origin at the desired position.

The calibration target contains markers that define the object coordinate system, and proper illumination is required to ensure that all markers are clearly visible on the recorded image(s). Please verify also that the calibration markers cover as much as possible of the camera's field of view.
The figures below illustrate different calibration targets that the imaging model fit can use. Most of them
are available as both single- and double-sided targets (a double sided target has markers on both sides
and is required for image acquisition setups, where cameras are on opposite sides of the lightsheet).

Plane dot matrix target


The larger center dot (zero marker) is used to identify the origin of the coordinate system.

Multilevel dot matrix target


Half the markers are in the same level as the zero marker, the other half are in a 2nd level, closer to or
farther from the viewer.

Checkerboard target (plane)


Origin and orientation of the coordinate system is shown in red

15.33.2 Target library & Custom targets
DynamicStudio maintains a library of known calibration targets. This library can be accessed from the
'Tools' menu:

The Target Manager is a small program launched from inside DynamicStudio, listing all known targets and
offering the possibility of adding new (custom) targets to the list or edit the targets already in it:

It is possible to edit specifications for predefined targets, but instead it's recommended to create a custom
target by pressing 'New':

Specify a name that you will be able to recognize, preferably describing the target so you can discriminate between different ones. Two target types are supported, 'Dotted' and 'Checker board', and when you press 'OK' the resulting dialog will depend on the target type chosen.
If you choose the target type 'Dotted', you will be
asked to provide information about dot spacing, dot
sizes etc:
The zero marker should always be larger than all the
other markers and ~50% of the dot spacing will
usually work well.
The main markers must not exceed 50% of the dot
spacing and ~33% of the dot spacing is normally a
good choice.
The axis markers (the four nearest neighbors of the
zero marker) must be smaller than or equal to the
main markers. Try f.ex. 25% of the dot spacing.
For multilevel targets you must of course put a
checkmark in 'Enabled' and specify what the level
distance is. This parameter describes the z- (out-of-
plane-) distance from the level where the zero
marker is to the level of the other markers. This
parameter is signed in order to distinguish whether
the second level is closer to or further from the cam-
era than the zero marker level. Depending on the
camera viewing angles you intend to use 20% of the
dot spacing is normally a good choice. For very
steep angles acceptable level distance becomes
smaller, for modest viewing angles larger level dis-
tance may be OK.
Finally you must specify whether the (custom) tar-
get contain black dots on a white background or
white dots on a black background.

If you choose the target type 'Checker board' there


are fewer parameters to specify, just the number of
tiles horizontally and vertically and of course the tile
size, describing the spacing between crossings in
the checker board pattern. The number of tiles hor-
izontally and vertically need not be the same, but
both should be odd numbers.
Please note the circles inside 3 of the checker board
tiles that identify both the zero position and the orien-
tation of axes. They are not specifically mentioned
in the target definition, but the zero marker MUST be
in the center of the target and it MUST be a white cir-

cle in a black tile. Nominally the circle diameters
should be half the tile size, but experience indicates
that slightly smaller diameters work OK, while larger
diameters cause the calibration routine to fail.

15.33.3 Acquiring calibration images


Acquiring images for calibration is no different from any other image acquisition, but the images are saved
in the database in a slightly different way in order to include, for example the Z-coordinate, and let the sys-
tem know that these images are special. If in doubt about how to do this, please refer to the general
description of this topic.

Using Z-axis traversing


If you wish to make a 3D imaging model fit based on the DLT or the 3'rd order polynomial model while
using a plane target, you will have to traverse the target through a number of Z-positions, typically 3-5. For
each new position acquire new image(s) and 'Save for Calibration'. You will have the option to save the
images in separate ensembles or put them all in a common ensemble. Dantec Dynamics can supply a spe-
cial traverse unit, along with the calibration target.

Specifying Z-coordinates
Unless calibration images were acquired using free hand positioning of the target (see below) you will
need to specify the nominal Z-coordinate for each of the images. This is done using a dedicated custom
property named simply 'Z'. The custom property may be enabled when storing the calibration images or
later by right-clicking the image ensemble and selecting 'Custom Properties...' in the context menu. In the
resulting dialog put a checkmark beside the property named 'Z':

In the 'Record Properties' of the calibration ensemble you will now see a property group, 'Coordinates',
containing the custom property 'Z':

By default custom properties such as the Z-value will apply to all images in the ensemble, which may
be OK if you stored images from different positions in separate ensembles. If a single ensemble contains
images from several Z-positions you must however specify a Z-coordinate for each individual image. This
is done by opening the ensemble (Ctrl-Shift-Enter or 'Show Contents' from the context menu) and then
browsing through the images one at a time entering nominal Z-values for each.
You are strongly advised to enter the nominal Z-coordinates as soon as possible after acquiring the
images while you still remember the positions used and the order in which images were acquired.
Please remember that all subsequent processing that use the imaging model fit will assume that Z=0 cor-
responds to the center of the lightsheet. You should keep this in mind both when aligning the calibration tar-
get with the lightsheet, when traversing and when entering nominal Z-coordinates for each of the
calibration images acquired.
When using multilevel targets you must specify not only the Z-coordinate of the zero-marker level, but
also the coordinate of the other level of markers. This is done by means of a Z-offset specified as part of
the target definition. Typically there will be two target definitions for each target, identical except for the
sign of this offset. The examples below illustrate when to choose one or the other of these two target def-
initions:

(Diagram: orientation of the Z-axis and target for "Back view" and "Front view", with the 2nd level at –D and at +D in each case.)

"Front view" is here defined as the view, where the Z-axis is pointing towards the camera (camera position
will have positive Z-coordinate), whereas "Back view" is defined as the view where the Z-axis is pointing
away from the camera (camera position will have negative Z-coordinate).

Using free hand positioning (Pinhole and Telecentric models only)


The Pinhole and Telecentric models can be used with traversed target plate locations as described above, but in addition offer a more convenient method, where the Z-position of the target plate does not need to be known, with the exception of the view that defines the coordinate system (a.k.a. reference view). When using these methods the target plate must be placed at different positions and orientations that cover as wide a range of possible poses as possible, in order to achieve the best results.
The recommended procedure for acquiring free hand calibration images is as follows:

l Acquire the reference view. Target plate located in XY plane and Z=0.
l Acquire four slightly rotated (10°.. 20°) (+X-axis,-X-axis,+Y-axis,-Y-axis) views

l Rotate the target plate 90° around Z-axis and acquire a single view.
l Again acquire four slightly rotated (10°.. 20°) (+X-axis,-X-axis,+Y-axis,-Y-axis) views.

15.33.4 The recipe for Imaging Model Fit


The ensemble with the acquired calibration images (with or without the Z-coordinates specified) are
selected in the DynamicStudio database tree. From the Context menu select 'Calibrate' and in the result-
ing dialog, select 'Imaging Model Fit' from the 'Calibrations' group.

The recipe for Imaging model fit is shown below. Three user selections are required before the imaging
model fit can be performed:

l Identify the calibration target used.
l Select an imaging model.
l Specify the orientation of the coordinate system

Coordinate system orientation These four buttons identify the orientation of the (object) coordinate sys-
tem as seen from the camera's point of view. Many other combinations are possible, but for simplicity the
X-axis is always assumed horizontal and the Y-axis always vertical.
(Coordinate axes need not be exactly horizontal/vertical: A slight rotation of the camera and/or the target
is normally not a problem).
Only the positive direction needs to be specified (i.e. whether X is positive to the left or to the right, and
whether Y is positive upwards or downwards). The default setting will usually be OK in a single camera
system, but multiple cameras may require different settings if results are to be compared later on. Imagine
for example an experiment in which two cameras are positioned on opposite sides of the light-sheet: For
later comparison of data we want the two cameras to share a common coordinate system, and although
they may agree that Y is positive upwards, they cannot both have X positive to the right, since they are
looking at the light-sheet from opposite sides.

In the example above both cameras are on the same side of the lightsheet and thus see the same orien-
tation of the coordinate system.

In the example above cameras are on opposite sides of the light sheet and thus see different orientations
of the coordinate system.

The orientation of the X-and Y-axes is specified during calibration while the orientation of the Z-axis is
specified indirectly through the Z-coordinates supplied by the user for each and every calibration image.

Save imaging model data as text file Imaging model parameters are stored in the DynamicStudio data-
base in binary form, but if you wish to investigate them outside DynamicStudio, you have the option to
specify a text file in which to store a copy of the imaging model parameters.
View/Edit (This button is only available when a dot matrix target is selected). The target analysis should
run smoothly if the calibration markers (dots) are clearly visible against an evenly illuminated background.
If however the target analysis fails to find sufficient markers, it is possible to adjust the algorithm. See the
section Adjusting parameters for finding the dot matrix target for more information on this topic.

15.33.5 Displaying imaging model parameters


Once the imaging model fit has been completed, the calibration result can be overlaid on top of a calibration image by drag and drop from the database tree. A grid is then displayed on the calibration image to show the marker positions predicted by the imaging model fit, and it can be verified that it matches the actual markers in the image. The info box below the image display shows the imaging model parameters:

The first line identifies the type of imaging model used, and the next four lines specify whether it is a 2D or
a 3D model as well as the range of X-, Y- and Z-values covered by the calibration. The upper and lower lim-
its simply correspond to the highest and lowest marker-coordinates found during image analysis, and
thus represent a coarse measure of the range of validity not accounting for the effects of perspective dis-
tortion.
If markers on the calibration images do not cover the entire field of view of the camera, later use of the
imaging model will probably require extrapolation beyond the range actually covered by the model fit.
Extrapolating is normally not a problem for the Direct Linear Transform or the Pinhole camera model, but
may cause problems in connection with the Polynomial model, especially if the original images are
severely distorted.
An important number to look at when evaluating the quality of the calculated imaging model fit is the aver-
age reprojection error. This number describes the average pixel distance from every marker found to the
predicted image location (in the figure below it is the distance from a yellow circle to the green grid cross-
ing). The smaller the average reprojection error, the more accurate the imaging model fit. For a normal
system equipped with a low distortion lens the average reprojection error should lie well below one.

The figure above shows a zoomed view of the graphical display for the imaging model fit. The red lines rep-
resent the object space coordinate system. The yellow circles are the target markers found in the image
by the automatic target marker detection. Finally the green grid is based on the target points (in object
space) mapped to the image through the fitted imaging model.

15.33.6 Direct Linear Transform (DLT)


Derived from geometrical optics, the Direct Linear Transform (DLT) is based on physics, but cannot
describe non-linear phenomena such as image distortion due to poor camera lenses or complex refrac-
tions that may occur for example when measuring through a window from air into water.

In matrix representation a DLT is described in the following way.

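In standard homogeneous-coordinate notation a DLT of this kind can be written as follows (the coefficient naming here is illustrative, not necessarily the naming used by DynamicStudio):

    \begin{pmatrix} s\,x \\ s\,y \\ s \end{pmatrix} =
    \begin{pmatrix}
    a_{11} & a_{12} & a_{13} & a_{14} \\
    a_{21} & a_{22} & a_{23} & a_{24} \\
    a_{31} & a_{32} & a_{33} & 1
    \end{pmatrix}
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}

where s is a common scale factor; dividing the first two components by the third yields the image coordinates x and y.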
Uppercase symbols (X, Y, Z) represent the object coordinates (millimeter), and lowercase symbols (x,y)
represent corresponding image coordinates (pixel).
The same formula is used for both 2D and 3D models, but in a 2D model all coefficients relating to the Z-
coordinate are set to zero.

The window below displays a DLT imaging model fit in the numeric view.

15.33.7 3'rd order XYZ polynomial imaging model fit
The polynomial model is strictly empirical, and there are thus no physical arguments to justify its use, but in experiments where significant non-linearities are present or expected, the polynomial imaging model may prove superior to both the "Direct Linear Transform (DLT)" (on page 359) and the "Pinhole camera model" (on page 361).

The 3'rd order XYZ polynomial that will transform a point in object space (Uppercase X, Y and Z) onto a
point in the image plane (lowercase x and y) is given by the formula below:

Please note that the x- and y-polynomials are indeed 3rd order in X and Y, but only 2nd order in Z.

The polynomial coefficients of a given imaging model fit can be viewed in the numeric view of the imaging model fit. An example is shown below:

Please note that the DLT-parameters are also calculated and shown, even when a polynomial model has
been selected. This is because inverse mapping (from image to object) requires iteration when using the
polynomial model, and results from a corresponding inverse DLT-mapping is used for the initial guess.

15.33.8 Pinhole camera model


This section provides detailed information about the mathematics behind the Pinhole camera model.
The pinhole model is based on a simplified lens model, where the lens is substituted with a projection
center (a.k.a. the Pinhole) as illustrated below:

In order to transform a point in an arbitrary object space coordinate system onto the image sensor, the
object point (Xp Yp Zp) is initially transformed from object space into the coordinate system of the camera

and normalized. (The camera coordinate system has the Z-axis normal to the image plane and its origin
placed in the projection center.)
The transformation from object to camera coordinates is given by a rotation matrix (R) and a translation
vector (T). (Xc  Yc  Zc ) represents the point in camera system coordinates.

To account for lens distortion (radial and tangential) the normalized point (xn yn) is then adjusted with the
distortion of the lens system. The distortion is given by the parameters K1 and K2 describing the radial dis-
tortion and P1 and P2 describing the tangential distortion.

Finally the distorted image point (xd yd) can be mapped onto the sensor by applying the pinhole projection. Here fx & fy are the focal lengths along the X and Y axes, and cx & cy is the point on the image sensor where the optical axis intersects the image sensor. The pinhole projection is given by the equation below. (xp yp) is the point in image pixel coordinates.

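For reference, the standard pinhole-with-distortion formulation (following the common convention also used by e.g. OpenCV; the exact parameter ordering used by DynamicStudio may differ) reads:

    x_n = \frac{X_c}{Z_c}, \qquad y_n = \frac{Y_c}{Z_c}, \qquad r^2 = x_n^2 + y_n^2

    x_d = x_n \,(1 + K_1 r^2 + K_2 r^4) + 2 P_1 x_n y_n + P_2 (r^2 + 2 x_n^2)
    y_d = y_n \,(1 + K_1 r^2 + K_2 r^4) + P_1 (r^2 + 2 y_n^2) + 2 P_2 x_n y_n

    x_p = f_x \, x_d + c_x, \qquad y_p = f_y \, y_d + c_y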
The parameters of a given pinhole imaging model fit can be viewed in the numeric view. An example is
shown below:

Above an imaging model fit is shown, in which five images/views of a calibration target have been used.
The rotation matrix is described by a rotation vector (a.k.a. Rodrigues rotation vector). The relationship
between a rotation vector and the rotation matrix is given by the Rodrigues rotation formula below:
Rotation in 3D space can be described by a vector (ωx ωy ωz)·θ, where the unit vector (ωx ωy ωz) defines the axis around which rotation takes place and θ determines the angle of rotation. From a known rotation vector the 3x3 rotation matrix R can be determined according to:
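In standard notation the Rodrigues rotation formula reads:

    R = I + \sin\theta \,[\omega]_\times + (1 - \cos\theta)\,[\omega]_\times^2,
    \qquad
    [\omega]_\times =
    \begin{pmatrix}
    0 & -\omega_z & \omega_y \\
    \omega_z & 0 & -\omega_x \\
    -\omega_y & \omega_x & 0
    \end{pmatrix}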

15.33.9 Telecentric camera model


This section provides detailed information about the Telecentric camera model and the mathematics
describing the Telecentric lens.

Optics
A Telecentric lens can in principle be made by mounting a pinhole one focal length behind a conventional
lens and before the image plane:

Following light rays from points in the object space the pinhole will block all but rays that are "almost par-
allel" to the optical axis. Shaded areas in the figure above illustrate light that will reach the sensor, while
dotted lines illustrate rays that are blocked by the pinhole and thus cannot reach the sensor. The smaller
the pinhole diameter is, the closer to parallel the accepted rays become, but of course it limits the amount
of light collected also. Parallel rays mean also that the size of the lens itself limits the field of view in
object space. The main reason for using a telecentric lens is however that the resulting images have no
perspective: The image of an object will remain the same size no matter the distance to the lens (but still
blur when too far from the focal plane).

Please Note:
The absence of perspective also means that you cannot calibrate a single-camera system using the Telecentric Model, since a mirror image of the image acquisition setup would (in theory) give the exact same images, leaving no way to tell which of the two setups produced them. This ambiguity can be partly resolved by combining simultaneous views of several poses as seen from multiple cameras. This means that the Telecentric Imaging Model is only available in 'Multi Camera Calibration', where multiple cameras are calibrated simultaneously.

Mathematical description
Parallel rays are mathematically described by an orthographic projection. It is mathematically identical to
the Pinhole Model, but some of the parameters are locked at certain values. As with the Pinhole Model
object coordinates are first transformed from the (arbitrary) object coordinate system to a camera-aligned
coordinate system with origin in the center of the lens, Z-axis along the optical axis and X- and Y-axis
aligned with rows and columns of the image sensor. Afterwards these (metric) coordinates are trans-
formed into pixel coordinates on the sensor via a camera matrix, containing intrinsic parameters.
Given a point P = [X Y Z]^T in object coordinates, the orthographic projection to the image point p = [u v]^T is modeled almost the same way as in the Pinhole model:
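A sketch of this projection, consistent with the description below (the exact matrix layout used internally may differ):

p = K ⋅ E ⋅ P,    K = [ Alfa  Gamma  0  u0 ;  0  Beta  0  v0 ;  0  0  0  1 ],    E = [ R  T ;  0  1 ]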

where p = [u v 1]^T and P = [X Y Z 1]^T are described using homogeneous coordinates by appending an additional 1 at the end, and the matrices K and E contain the intrinsic and extrinsic parameters of the camera respectively.
The extrinsic parameters E of a telecentric lens are exactly the same as the ones for a conventional per-
spective projection, including a 3x3 rotation matrix R and a 3x1 translation vector T. Together they
describe a mapping from world to camera coordinates (both metric, but the latter aligned with the image
plane). The fact that ZC does not influence object positions in the image means also that the calibration procedure cannot make a reliable estimate of T3, so it is simply set to zero (meaning also that we cannot estimate how far the camera is from the scene).
The intrinsic parameters Alfa and Beta are horizontal and vertical scale factors handling the conversion from metric to pixel coordinates, while Gamma describes the skewness. In a conventional (and ideal) camera Alfa & Beta would be identical and Gamma would be zero. In practice the use of a Scheimpflug mount is enough to justify Alfa and Beta being different, and if the Scheimpflug mount allows tilting in both directions (or if the image plane tilt is not perfectly aligned with sensor rows or columns) a nonzero Gamma becomes relevant as well. The principal point, defined by u0 & v0, is the same as for the pinhole model, but for the telecentric model it cannot be estimated and is simply set to half the sensor width and height.
In a conventional (perspective) lens K33 = 1 and describes the perspective, but in the orthographic camera matrix it is zero. This means that the distance ZC from camera to object point does not influence its position in the image. That is exactly the main property of a telecentric lens.

Requirements
In the following the term "Pose" is used to describe a specific location and orientation of a calibration tar-
get in object space.
The term "View" refers to a given target "Pose" as seen from a specific camera. This will in general cor-
respond to a specific image.
A certain camera is said to "see" a certain target Pose, when at least 4 (non-collinear) calibration markers
can be correctly identified on the corresponding image.
(4 is an absolute minimum; a "good" calibration image should contain a lot more than 4 detectable markers, evenly distributed across the camera's field of view).
A view or image fulfilling this requirement is said to be "valid" below.

The first target pose is used to identify the coordinate system referred to by the final camera calibrations.
Obviously this reference needs to be valid and aligned with the experimental setup if end results are to be
of use. The remaining target poses can be chosen freely and the Telecentric model works best with more
or less random poses, so "Freehand" target orientation is recommended. If/When using a traverse the tar-
get location changes, but orientation remains the same for all poses, limiting the amount of new infor-
mation added with each new set of images.

For the Multi-Camera Calibration to succeed using the Telecentric model a number of requirements must
be fulfilled:

l Estimating intrinsic parameters of a specific camera requires at least 4 valid calibration images (/views), preferably with the target oriented differently.
l Estimating extrinsic parameters requires each target pose to be seen from at least 2 cameras. (If not fulfilled, the target pose and corresponding views/images will be excluded from the calibration).
l Resolving the orientation ambiguity inherent to a telecentric system requires the following: Whenever two (or more) cameras can "see" a specific target pose, the same two cameras must be able to "see" at least one other common pose.

Results
The parameters of a fitted telecentric model can be seen in the numeric view. An example is shown below:

The average reprojection error indicates how well the model matches the location of the calibration markers found. Please note that the algorithm to find/locate calibration markers will itself introduce noise/inaccuracies, which will increase the apparent "error" shown.
The camera matrix listed above contains the intrinsic parameters of the K-matrix, but without the third column (which will be all-zero anyway). Note that u0 and v0 are both 1024; as explained above they are half the sensor width and height, so this particular example comes from a camera with a total of 2048x2048 pixels.
Lens distortion parameters are inherited from the Pinhole model, but are all zero, since applying them requires XC and YC to be normalized by ZC. Since ZC cannot be determined we set it to zero and thus cannot normalize with it.

The reference frame identifies which of the images define the coordinate system. For each of the views
the Extrinsic parameters are shown as Rotation and Translation vectors. Note that the third component of
the translation vector is always zero, since it cannot be estimated when using the telecentric model.
Lastly the reprojection error is shown for each individual view. If one of the views shows significantly higher error than the others, you may consider excluding it from the calibration to perhaps get a more accurate result.

15.33.10 Adjusting parameters for finding the dot matrix target

The dot matrix calibration target includes a large dot (the zero marker) surrounded by four smaller ones (the axis markers) to identify the origin of the coordinate system. Based on the known dot spacing, the (X, Y) coordinates of the remaining markers can be determined.

If the model fit fails, try cleaning windows and optics, remove bubbles caught in the calibration target, and
anything else you can think of to improve the quality of the images. This may take a while, but it is worth-
while doing it, since poor image quality will at best produce a poor imaging model fit, which is of little use
anyway. If you are attempting a 3D model fit, try fitting a 2D model to each of the images involved to see if
all or only some of the images fail. In the latter case examine the faulty images closely to see if you can
identify the problem. Especially the region around the Zero and Axis markers is important.
Here is a little more about how the software finds the dots. If everything else fails, you may need to modify parameters in the target analysis setup:

When analyzing the images they are first thresholded to produce pure black/white images, and all black
pixels touching each other are grouped in objects. For each object the area (pixel count) and the position
(centroid) is calculated. Ideally these objects should correspond to calibration markers, but in practice
they may include small noise "spots", large "stripes" along the edges of the target and so on. Such erro-
neous objects should of course be removed before performing the actual imaging model fit.
Below is a description of the parameters that are used to reject/accept individual objects (marker can-
didates).
Minimum dot area: Objects with an area below the Minimum dot area are discarded prior to calculating
the mean object area. This removes isolated high frequency noise.
Border: Objects touching the image boundary are also discarded, while objects that are "close", but not
touching the image boundary may be discarded depending on the parameter Border. For example Bor-
der=0.10 will discard all objects outside the central 90% of the image (5% of image width is discarded left
and right, and 5% of image height is discarded top and bottom).
Dot area tolerance: The mean area (pixels) of all remaining objects is calculated and then objects significantly smaller or larger than the mean area are discarded as erroneous. The following formulas are used for marker classification:
AMin = (DAxis / DStd)² * AMean / (Dot Area Tolerance)
AMax = (DZero / DStd)² * AMean * (Dot Area Tolerance)
Objects with an area smaller than AMin or larger than AMax are discarded and on the basis of remaining
objects AMean is recalculated for later use. Nominal dot diameters DAxis , DStd & DZero are read from the
target library and included to account for the fact that not all markers have the same size. Beyond this the
Dot Area Tolerance accounts for variations due to perspective as well as normal variations in the images.
Dot area Tolerance must be larger than one.
Among the remaining objects the largest one is assumed to be the zero marker and it is verified that the
zero marker candidate is big enough to fulfill the condition:
AZero > (DZero / DStd)² * AMean / (Dot Area Tolerance)
The algorithm then searches for the four nearest neighbors in direction up, down, left & right of the zero
marker. These objects should correspond to the axis markers and it is verified they are all small enough to
fulfill:
AAxis < (DAxis / DStd)² * AMean * (Dot Area Tolerance)

Zero/Axis ratio tolerance: Provided Zero and Axis marker candidates pass the abovementioned 'global'
size tests, a local test is finally performed to compare them directly:

ABS( 4 * (DAxis / DZero)² * AZero / SUM(AAxis) - 1 ) < Zero/Axis ratio tolerance

Based on nominal marker diameters, the expected ratio between zero and axis marker areas can be pre-
dicted. This test verifies that the actual ratio does not deviate too much from the expected value.
Dot position tolerance: Finally the algorithm verifies the position of the assumed Zero marker relative to
the assumed Axis markers: The distance from the Zero marker to each of the Axis markers is determined
and the average distance calculated. The distance to each of the axis markers is compared to the average distance, and Dot position tolerance determines the acceptable deviation that no axis marker candidate may exceed:
ABS( 1 - dAxis / dMean ) < Dot position tolerance.

Minimum dot count: All remaining objects are assumed to be calibration markers and it is verified that
there are at least Minimum dot count within each calibration image. The default value 25 corresponds to a
grid of 5x5 calibration markers.
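As a minimal sketch of the area-based acceptance test described above (a hypothetical Python helper, not DynamicStudio code; the variable names are made up):

    import numpy as np

    def accept_markers(areas, d_axis, d_std, d_zero, dot_area_tolerance=2.0):
        # 'areas' are the pixel counts of the candidate objects; the nominal
        # diameters d_axis, d_std and d_zero come from the target library.
        areas = np.asarray(areas, dtype=float)
        a_mean = areas.mean()
        a_min = (d_axis / d_std) ** 2 * a_mean / dot_area_tolerance
        a_max = (d_zero / d_std) ** 2 * a_mean * dot_area_tolerance
        keep = (areas >= a_min) & (areas <= a_max)
        # AMean is recalculated from the surviving objects for the later Zero/Axis tests
        return areas[keep], areas[keep].mean()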

15.33.11 Image Processing Parameters


Mathematically camera calibration is the task of modifying the parameters of an imaging model to make
the model match sample observations as best possible. Different imaging models have different param-
eters and different procedures to optimize the fit, but they all rely on sample observations where corresponding object- and image-coordinates are known.
Obviously these coordinates originate from the calibration markers and identifying markers in an image
and determining their coordinates requires image processing.
The first step in identifying markers is to binarize the calibration images using a threshold value deter-
mined from the grayscale histogram of each image.

Note
The histogram of the image gray values must have two distinct peaks, and only two.

First: Original image.
Second: Histogram with peaks and threshold.
Third: Binarized image.

The thresholding is shown. The leftmost peak is assumed to correspond to the dark calibration markers,
while the rightmost peak is assumed to correspond to the bright background of the calibration target. The
threshold is determined as the grayscale value midway between these two peaks. (The multilevel targets
have white markers on a black background and the marker and background peaks thus swap, but the
threshold is still midway between them).
To remove noise the histogram is smoothed before peak-finding. If more than two distinct peaks remain in
the histogram only the two outermost will be used and further peaks in the central part of the histogram will
be ignored.
This is particularly important in cases where the calibration target does not cover the camera's entire field
of view: If very dark or very bright areas are visible outside the target edges the histogram may contain
additional peaks to the left or right of the ones representing target markers and target background. This will
affect the threshold calculation and possibly cause markers and background to merge as all-black or all-
white.
To overcome such problems masking can be used to remove the areas outside the calibration target: If
these areas are made gray by masking they will still produce a distinct peak in the histogram. With careful
selection of a grayscale value it can be ensured that this peak is positioned somewhere between the two
peaks representing target markers and background. This extra peak will thus no longer influence the
threshold calculation.

Disturbing background effects removed by masking.

Note the varying background intensity in the original image. The background in the upper right-hand part of
the image is darker than in the left and central part of the image. This is why the right-hand "bright" peak
has a plateau on its left instead of dropping off smoothly. Even so the peak remains (barely) on the right-
hand side of the threshold value and the image can be binarized successfully. In the upper right-hand part
of the binarized image some scattered black pixels are present in areas that should have been part of the
background and further image processing is required to get rid of these, so only calibration markers
remain.
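A minimal sketch of the peak-based thresholding described above (a hypothetical Python helper, not DynamicStudio code):

    import numpy as np

    def binarize_calibration_image(image, smooth_width=9):
        # Histogram of gray values, smoothed to suppress noise before peak finding
        hist, edges = np.histogram(image, bins=256)
        kernel = np.ones(smooth_width) / smooth_width
        smooth = np.convolve(hist, kernel, mode="same")
        # Local maxima; only the two outermost peaks are used, central peaks are ignored
        peaks = [i for i in range(1, 255)
                 if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
        if len(peaks) < 2:
            raise ValueError("Histogram does not contain two distinct peaks")
        threshold = 0.5 * (edges[peaks[0]] + edges[peaks[-1]])
        # True where pixels belong to the dark calibration markers
        return image < threshold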

15.33.12 Imaging model fit equals Camera calibration


In the computer vision community, finding the correct imaging model parameters is commonly referred to as camera calibration. In the scientific community users may however have a different understanding of the term "calibration", so in DynamicStudio the process of finding imaging model parameters is referred to as "Imaging model fit": Given a set of corresponding object and image coordinates, find the parameters which give the best possible fit for the imaging model chosen.

15.34 Multi Camera Calibration


The Multi Camera Calibration method is used to generate an Imaging Model Fit (IMF) using standard or
non-standard calibration plates with markers laid out in a quadratic grid. Instead of calibrating each camera
individually, the Multi Camera Calibration method creates an IMF from all cameras involved in the imaging
set-up. The method can also be used to calibrate each camera individually, if desired.
For a detailed description about camera calibration in general, please refer to See "Imaging model fit" on
page 347.
The Multi Camera Calibration method provides a fully automatic or alternatively a semi-automatic cal-
ibration, which automatically detects the individual cross-hair markers but the user still has the possibility
to manually specify the origin of the calibration plate.
The input requirement for the method is either a single calibration image ensemble or multiple calibration image ensembles from multiple cameras, which are then selected as fixed inputs. For detailed information regarding acquisition of calibration images please see "Acquiring calibration images" on page 352.
One or multiple calibration image ensembles with acquired calibration images (with or without the Z-coor-
dinates specified) must be selected in the database tree. From the Context menu select 'Calibrate...' and
in the resulting dialog, select the method 'Multi Camera Calibration' from the 'Calibrations' group.

The Multi Camera Calibration recipe is divided into four frames:

l Browse Datasets
l Image view
l Target Info
l Calibration model

By using the slider inside the 'Browse Dataset' frame, it is possible to inspect the individual images in the
ensemble(s). The image is displayed in the image view with a graphic overlay that shows the detected
markers. For general information on how to manipulate the image view please refer to See "Using the dis-
play from within an analysis method" on page 633.
Before starting the calibration, information about the calibration plate must be entered into the frame 'Tar-
get Info'. If 'Use Standard Target' is selected, the parameters of the grid and the markers are locked. If
'Use Standard Target' is unselected, the parameters of the grid and the markers can be changed. Once
the Marker Identification is started it will not be possible to change these settings.
If 'Target Identification Type' is set to 'Automatic', all the markers are detected based on the selected cal-
ibration plate. If 'Target Identification Type' is set to 'Semi-automatic', the markers are detected after
pressing 'Start Marker Identification' on the 'Calibration Model' tab.
Once the marker identification has been initiated by pressing the button 'Start Marker Identification', the fol-
lowing frame will be displayed.

The reference marker (origin of the calibration) is indicated by red coordinate axes in the image view, the
position of the reference marker can be altered by left clicking any of the detected markers. Once the
desired reference marker is set, the button 'Accept' will store the position and move on to the next image
in the calibration ensemble. Since the reference marker will define the origin of the calibration, it is vital
that the same reference marker is selected in all images.
If the image is not to be used in the calibration, press the button 'Skip', which will disregard the current
image and move on to the next image in the ensemble. The green or red circle (as seen in the picture above) indicates whether or not the current image is used in the calibration - green indicates that the image is used, red that it is not.
To complete the calibration process the desired imaging model and the orientation of the coordinate sys-
tem must be selected from the 'Imaging Model' menu. For calibration of multiple cameras only the Pinhole
camera model is available. For calibration of multiple cameras it is also necessary that at least two cam-
eras observe the calibration from the same side with the same coordinate system orientation all the time.
For a detailed description about the imaging models and coordinate system orientation please refer to See
"Imaging model fit" on page 347.
Finally press the 'Apply' or 'Ok' button, to calculate the calibration.

15.35 Imaging Model Fit Import
The Imaging model fit import is used to create a new Imaging Model Fit (IMF) on the basis of an exported IMF¹ text file or by manually entering the IMF parameters.

The screen shot below is picturing the recipe dialog for the method.

In the frame "Range of validity" the user must enter the interval in which the IMF is valid.
The user must also specify which calibration target plate to simulate. The entered values for the range of
validity and the simulated target are only informative and is only used when displaying the IMF as a
graphic overlay on an image.

To actually import the imaging model fit the proper imaging model should first be selected from the drop
down list. Afterwards pressing the "Adjust imaging model fit ..." button will bring up a dialog in which the
IMF parameters can be entered or automatically imported from a file. The look of the dialog is dependent on the choice of imaging model.
Below is a screen shot of the dialog used to enter the Pinhole model parameters.

¹An exported IMF file can be created by selecting the calibration record in the database tree and choosing "Export -> Export as numeric..." from the File menu.

The desired values can be entered into the relevant text boxes or alternatively the "Import from file ..." menu option available in the file menu can be used to specify the location of an exported IMF file.

15.36 IPI Processing


IPI processing determines the size of spherical, transparent particles through the fringe patterns observed in a defocused image. To determine diameter only, the images can be single frame (for velocity information, both images need to be double frame). To accurately determine the position of a particle, two overlapping images are required: a focused image and a defocused image. Performing IPI data analysis requires calibration images to ensure overlap between the focused and defocused images. Calibration has to be performed before IPI analysis.

15.36.1 Content
User Interface
Calibration
Recipe <IPI processing>
Advanced Settings
Processing and Presentation
Post Processing
Example
Trouble shooting

15.36.2 User Interface


Select the two target images as shown below, do a right click and select Analyze, or press the Analyze icon in the tool bar.

Important: Make sure Camera 1, Camera 2 and the calibration images are highlighted before clicking the right mouse button and selecting "Analysis…". To select the images, press the Shift key and the right mouse button simultaneously. Make sure the analysis is applied from the first camera folder (Camera 1) as above.

Select the category “Particle Characterization” and then in the adjacent list select “IPI Particle Sizing”.
The text below the selection highlights the necessary inputs to correctly process the data.

15.36.3 Calibration
Acquire a set of images with the defocused and focused camera and save the images as Calibration images. From the first image, do a right mouse click and select Calibrate.

From the calibration window, select the calibration method. For IPI processing, it is recommended to use the Imaging Model Fit method:

Repeat the same for the second image.

For velocity measurement, the scale factor has to be determined. One way of doing that is to use the calibration image from the focused camera and, by right clicking the mouse, select Measure Scale Factor.

For further information about calibration methods, refer to the IPI reference manual, See "Imaging model
fit" on page 347 and Description of the numerical method "Imaging Model Fit - Extended".

15.36.4 Recipe <IPI Processing>
Initially, the recipe for IPI Particle Sizing appears reduced showing only the most important parameters as
below. To see all available parameters, click on "Show advanced settings" at the bottom of the recipe and
refer to Advanced Settings description.

15.36.5 General

Select image order - Select the image that represents the focused camera and the defocused camera.
Image mask limits - Enter the limits for the size of the mask used to determine the defocused circle size.
Changing the maximum value affects the maximum particle size that can be measured. Make certain the values coincide closely with those observed in actual images. The step size controls the integration frequency. The size of the FFT should always be larger than the maximum circle size.

15.36.6 Optical Setup
Enter the optical parameters that best reflect the actual setup. The scattering angle is normally left at 90 degrees to avoid the effects of image warping. The aperture diameter is the focal length divided by the F-number. Since the defocused camera aperture is usually left fully open, the aperture diameter for a 60 mm lens with an aperture of f/2.8 is 60/2.8 ≈ 21.4 mm.
The calculated diameter range is based on the input optics parameter and the maximum mask size (see
the section General).

Hint: To establish the maximum particle size in a measurement, enter the maximum defocused circle size
(see previous page) that best represents the particle size range. By switching back and forth between the
two pages you can determine the size of the circle required, hence the amount of defocusing required, to
measure in the range specified.

15.36.7 Velocity Setup


The PTV analysis requires a cross-correlation to be carried out on the focused image pair to determine the
general flow direction. Careful selection of the interrogation area and offset is required, since seeding in sprays or other particle-laden flows is often sparser than in typical PIV flows. Too small an interrogation area and the PTV analysis will yield unreliable results.

Interrogation size is the dimension of interrogation area in pixels: 16, 32, 64, 128, 256.
Overlap percentage is the amount of overlap (%) to use incrementing to the next area: 0, 25, 50, 75%.
Noise removal in percentage is the amount of background noise to remove, expressed as a percentage of the camera's intensity resolution. It operates much like a threshold. Note that it affects the detection of particles (sizing) even when no velocity analysis is performed.
Maximum particle number is the maximum number of particles to accept per image. It can be selected
between 1-10000.
Tip: Evaluate the focused image pair first using the cross-correlation analysis that is built into the DynamicStudio software. The size of the interrogation area will depend largely on the seeding quality of
the focused images. For flows with few particles, use a large area. A rule-of-thumb is a minimum of 10-11
particles per interrogation area.

15.36.8 Advanced Settings


To enable the advanced settings, click on the check box at the bottom of the recipe.

15.36.9 Region Of Interest (ROI)/ Validation
In some measurements it is only required to process a portion of the measured area. You can limit the
areas of interest by entering the dimensions or selecting one of the six pre-defined areas. Uncheck the checkbox labeled "Use entire region" if you wish to set your own limits.

The peak level validation rejects particles based on the percentage peak height of the maximum peak determined. The overlap validation rejects particles with too little usable area: setting a value of 70% means that any particle with more than 70% of its area overlapped will not be accepted. The frequency ratio in the x- and y-direction is another validation tool. Fringes in the x-direction will exhibit small frequency peaks in the y-direction, and therefore a high fringe ratio. Images without fringe information usually exhibit poor fringe ratios.

15.36.10 Window Setup


The processing of the defocused image yields the fringe frequency, which in turn yields the particle diameter. The frequency information is determined by applying a 2D FFT over the selected area and identifying the dominant frequency peaks. It is often useful to apply a window or filter over the input data prior to processing to smoothen the peaks. The built-in window is a familiar type known in signal processing as a Hanning window. While this window has a clear definition and fixed parameters, a strength factor has been built in that affects the "smoothness" of the output data. In addition, the window can be applied horizontally, vertically or in both directions. Since fringes are oriented according to the optical configuration, it is advantageous to apply the window in the direction perpendicular to the orientation of the fringes.
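For reference, the standard Hanning (Hann) window of length N is:

w(n) = 0.5 ⋅ (1 − cos(2⋅π⋅n / (N − 1))),    n = 0 … N − 1

Applied horizontally and/or vertically, the 2D window is the product of the corresponding 1D windows. The strength factor mentioned above is a DynamicStudio-specific modification and is not part of this standard definition.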


Note: A window strength of 0 % implies the default Hanning window.

15.36.11 Filter
In high concentration particle flows the number of particles can be so high as to reduce the overall val-
idation simply because the overlap is too large. A workaround to this is to artificially reduce the detection
such that particle neighbors within a user-specified bound are not accepted. By default, the filter is dis-
abled. To apply the filter, check the checkbox and select the minimum distance between particles.

Tip: A useful rule-of-thumb is to take the largest circle size and divide by five; i.e. for a defocused image size of 150 pixels, use a particle filter with a minimum separation distance equal to 30-40 pixels.
Alternatively, the spacing between particles may be such that areas of high concentration suffer through
validation while neighboring areas of lower particle density are left unaffected. Here you may wish to
adjust the distance parameter to improve the overall validation.

Sometimes the camera is inadvertently moved after calibration (when resetting the aperture, for example). In this case the calibration is affected and therefore particles in cameras A and B will not line up as expected. In
extreme cases it is recommended to redo the calibration. In cases where the offset induced is minor and
can be visibly measured on the images, the user can apply an offset on the defocused image in both X and
Y directions.

15.36.12 Laser setup


Laser settings are also used to set validation criteria. The position of the origin of incident light plays a role
in how the fringes are rotated as a function of particle position in the image. The position of the laser
source is measured from the front lens of the lightsheet optics to the front lens on the camera.

15.36.13 Processing and Presentation
After pressing Apply or OK the IPI processing will execute, adding an IPI record beneath the selected
image datasets. The resulting data may then be displayed in tabular form by clicking on the spreadsheet
icon in the toolbar, or in graphical form by double clicking the mouse directly on the IPI record.

Tabular presentation of IPI data


The tabular format of IPI data will depend on whether single or double frame images were acquired and
processed. Single-frame images yield only size and particle position information. Double-frame images
will yield velocities also:

l X, Y , Z: Position of the particle in the image (pixel).


l Cs: Size of the defocused image (pixel).
l D: Diameter of the particle (micron).
l U, V, W: Velocity of the particle (m/s).
l Fx: fringe frequency in the x-dir (1/pix).
l Fy: fringe frequency in the y-dir (1/pix).

Note: A circle size of zero indicates a detected particle that did not have a corresponding image on the
defocused image. This usually indicates that the particle was not transparent. A diameter of zero for a
non-zero circle size marks an invalid particle.
Double image datasets yield the above information for each image plus velocity in X and Y. Velocity is
determined through particle displacements from image A to image B. The velocity determination is inde-
pendent of whether the particle was validated or not. Therefore, vectors of non-transparent particles are
also visible. Refer to the DynamicStudio manual for options related to formatting or exporting tabular data.
Graphical presentation of IPI data
The graphical display by default displays the data as circles over each particle detected. The positioning
of the circles coincides with the position of particles on the defocused image. The diameter of the circle
coincides with the diameter of the defocused particle images. Green circles indicate validated particles,
the red ones invalid particles. The color scheme is user selectable.

The particle display can be dragged over the raw defocused images to test the calibration. The circles
should overlap the raw particle images, though small deviations are common and are often due to irreg-
ularities in the particle images themselves. Raw defocused images are often ellipsoidal and therefore intro-
duce small deviations in the positioning of the resulting particles. Normally this is of little concern, though
large deviations may be the result of inadvertently moving the camera lens between calibration and meas-
urement. If the deviations are reasonable an offset can be applied to “move” the particles back into place.
Large deviations will require recalibrating the cameras.

Display options for IPI data


Clicking the right mouse button over the IPI data display will bring up the context menu. Here you are pre-
sented with several possibilities. Selecting the menu item “Display Options…” will display a tab dialog for
controlling the appearance of the data.

The display options dialog allows you to control the way the IPI data is displayed. As mentioned above,
the default presentation is to present circles over each particle detected.

The Colors tab of the Display Options page enables the following items to be displayed:

l Valid particles: by default these items appear as green circles that circumscribe the defocused particle image. Color is user selectable by clicking on the color field to the right of the item.
l Invalid particles: by default displayed as red circles, as above.
l Vectors: in cases where double images produce valid vectors, vectors or particle displacements are displayed as arrows of a user selected color.
l Particle size: circles based on particle size, normalized by the software. The user can increase or decrease the default scaling by clicking on the size scaling tab.

In addition, the user can select to have vectors from invalid particles displayed or not. The background
color can be changed by clicking on the color selector to the right. The Size Scaling tab of the Display
Options allows the user to control the size of the circles used in representing particle size.

Scaling options:

l Auto-scaling: scaling is normalized by the software; the user can increment or decrement the relative size of the diameter circles.
l Fixed scaling: scaling is normalized by the user.

In the case of double images, the Vector Scaling page is displayed. Here the user can adjust the size of the vectors in much the same way as for vector plots found in other parts of DynamicStudio.

The Validation page of the Display Options dialog gives the user control over two validation criteria:

l Peak validation: adjusting this parameter the user can see whether the validation level selected
during processing was sufficient. Increasing this value will reject particles with insufficient peak
height.
l Vector length: by adjusting this parameter vectors with incorrect lengths due to poor correlations
can be rejected.

15.36.14 Post processing


Once the IPI datasets are processed, the user can apply the following post-processing procedures.
To process a series of datasets, follow the steps below: Select the IPI datasets you wish to include in the histogram. You can do this by selecting similar datasets or pressing the CTRL key over each dataset. Refer to the DynamicStudio manual regarding dataset selection. Click the right mouse button over the first dataset and select "Analysis…". From the particle characterization category, select Histogram or Spatial Histogram.

For further information about the Diameter Histogram and Spatial Histogram, see "Diameter Statistics" on page 287 and "IPI Spatial Histogram" on page 400.

15.36.15 Example
Here is an example of a spray measurement: Raw image on the top and processed data below.

15.36.16 Trouble shooting guide
This section will present several cases, which typify problems associated with IPI measurement. Solu-
tions are then offered after each case. For problems related to setup of the hardware please refer to the
DynamicStudio User’s manuals.
Message: “Too many particles”
The DynamicStudio IPI detects particles on the focused image based on pixel intensity and distribution.
The removal of background noise in the setup may adversely affect this detection by accepting too many
particles, or too few. Saturated images require care in setting the amount of noise to remove. The noise
level can be adjusted in the menu as shown below:

Since the noise level is uniform for all images under a particular setup, adjust the first image to determine
the proper filtering and then reprocess all the other images afterward. Reducing the noise filter means a higher detection rate, but an increased chance of accepting noise as a particle. Increasing the noise filter level reduces detection, increasing the likelihood of missed particles. There is a tradeoff.
If there are many particles, increase the maximum particle level (Velocity setup page).

Poor validation or incorrect particle size


One of the hallmarks of the DynamicStudio IPI is that you can see quite clearly if a particle is correctly
measured or not. Depending on the system configuration, the resulting images may lack the proper con-
trast, either through lack of sufficient laser power or weak signal from particles with low refractive index
ratios when compared with the medium. There are several ways to enhance the quality of the image or
processing method to produce better results.

1) Use the window


The IPI processing module provided by the DynamicStudio IPI has a built-in Hanning window. This win-
dow will refine the input data prior to FFT analysis to yield clearer and more distinct peaks.

2) Pre-process the image using the IPL module
If you have purchased the DynamicStudio IPI together with the IPL option, you can filter and enhance the
input image in a variety of ways. Refer to the IPL Users Manual for further guidance on using the IPL mod-
ule.

l For weak fringes or fringes that show intensity variation, use a high pass filter to remove the low frequency information. The Laplacian 5x5 is a useful filter for enhancing fringes.

l Use a low pass filter to remove extraneous background noise.

Effect of filtering the raw defocused image. A Laplacian (5x5) filter was used:

Too much rejection due to too many particles


In the case of high concentration flows, the size of the defocused circles and the particle count affect the overlap validation and thus many particles are rejected outright. There are two solutions:

1) Reduce the size of the measurement volume
A change in lens and/or addition of an extension-ring, or the repositioning of the camera will increase the
magnification and thereby increase the spacing of particles on the defocused image. The magnification
should just be enough to validate the particles.

2) Use a particle filter


A particle filter will remove otherwise detected particles that are spaced too closely to one another, thereby artificially decreasing the detection rate and improving the overall validation.

Particle filtering where the minimum distance is set to 20 pixels:

Particle filtering where the minimum distance is set to 40 pixels:

Poor matching between focused and defocused images
There can arise a situation where the results do not overlap on the defocused image, as shown in the fig-
ure below. This is often the result of the camera being moved after calibration. If the offset between the par-
ticles and circles is uniform throughout the image then the user can apply an offset to counteract the effect
and produce better overlap.

Poor overlap between results and raw defocused image:

Calibration offset fix in IPI Recipe dialog:

Application of offset increases validation and improves centering of circles:

15.37 IPI Spatial Histogram
The histogram display refers directly to the underlying histogram dataset, which is a subset of the Shadow
dataset.

The spatial histogram is defined as a 2D scalar map of a user-selected quantity. The user can select the
binning in both X and Y directions. The following scalar quantities can be binned:

l Particle counts
l Diameter mean
l Area mean
l Volume mean
l Sauter mean

When selecting Shadow Histogram, the following window appears.

Calculation

Select the histogram type from the drop down list. The histogram type can be selected between Particle count, Diameter mean, Particle area mean, Particle volume mean, Sauter mean (D32), U mean and V mean, where U is the vertical velocity and V the horizontal velocity.

Number of cells
The image is divided into spatial cells. The mean value will be calculated from the particles detected in each cell.
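A minimal sketch of this spatial binning (a hypothetical Python helper, not DynamicStudio code):

    import numpy as np

    def spatial_mean_map(x, y, values, n_cells_x, n_cells_y, width, height):
        # Count particles per cell and sum the selected quantity (e.g. diameter) per cell
        rng = [[0, width], [0, height]]
        counts, _, _ = np.histogram2d(x, y, bins=[n_cells_x, n_cells_y], range=rng)
        sums, _, _ = np.histogram2d(x, y, bins=[n_cells_x, n_cells_y], range=rng, weights=values)
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(counts > 0, sums / counts, np.nan)  # mean per cell, NaN for empty cells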

15.37.1 Process
In double frame mode, the data can be processed from the first frame (check Image A), from the second frame (check Image B) or from both.

The resulting 2D scalar plot is of a type that is built into DynamicStudio. This type of plot is usually displayed as a contour plot. Data can also be viewed in tabulated form.

15.38 Least Squares Matching


15.38.1 Introduction
When performing reconstruction based 3D velocity measurements, the analysis procedure consists of
two parts:

1. Volumetric Reconstruction see "Voxel Reconstruction" (on page 604)


2. Velocity analysis (Least Squares Matching)

The Least Squares Matching (LSM for short) can perform both steps. But sometimes it might be handy to store the voxel spaces in order to save computational time when testing LSM settings, or for large voxel volumes (for more information see "Voxel Reconstruction" (on page 604)). To analyze the velocities, the Least Squares Matching performs translation, deformation and rotation of the interrogation volumes (later referred to as "cuboids"). More details on the computational method can be found in the references. See also "2D Least squares matching (LSM)" (on page 234).

The recipe is located in the Volumetric category of the Analysis Method menu :

If the method is not applied to an already reconstructed voxel space, it needs at least 3 ensembles of images (i.e. one for each camera used for the acquisition) and the corresponding number of calibrations. All the needed datasets need to be selected by a checkmark (by pressing the spacebar when a dataset is selected), as seen in the image above. Moreover, the LSM recipe can only be applied to double frame images. If the dataset consists of ensembles of single frame images (i.e. Time-Resolved measurement), please use the dedicated recipe to obtain double frame images (see "Make Double Frame" (on page 450)).

15.38.2 The Least Squares Matching Recipe


The Least Squares Matching recipe, presented in the following picture, is divided by 4 tabs into 4 sections that correspond to the 4 steps of the analysis procedure. Please note that when the LSM is applied to an already reconstructed voxel space, only the 3 tabs necessary for the LSM method are shown:

o Voxel space reconstruction


o Volume Pre-Processing
o Vector Grid
o LSM algorithm

Each of these points will be addressed in the following sections.

Voxel space reconstruction
The Voxel Space Reconstruction tab is the same as in the Voxel Reconstruction method. For the different settings and more information about the reconstruction itself please see "Voxel Reconstruction" (on page 604).

Volume Pre-processing
The tab for the volume pre-processing can be seen in the following image:

If "Use Pre-processing" is selected, one can choose between different thresholds and a smoothing of
the voxel space, before the LSM is applied.
Lower threshold: The lower threshold defines a gray value from which all values smaller then the thresh-
old value are set to 0.
Upper threshold: The upper threshold, if a non 0 value is selected, clamps all gray values in the voxel
space exceeding the threshold to the selected gray value.
Gaussian Blur iterations: The Gaussian Blur defines the amount of Gaussian smoothing iterations are
applied on the volume. For the Gaussian filter a domain of 3x3x3 voxel is used for each iteration specified.
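A minimal sketch of this pre-processing (a hypothetical Python helper; the exact smoothing kernel used by DynamicStudio is not specified, so a Gaussian with 3x3x3 support is used here as an approximation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def preprocess_voxel_space(volume, lower=0.0, upper=0.0, blur_iterations=0):
        out = volume.astype(np.float32).copy()
        out[out < lower] = 0.0                  # lower threshold
        if upper > 0:
            out = np.minimum(out, upper)        # upper threshold (clamp)
        for _ in range(blur_iterations):
            out = gaussian_filter(out, sigma=1.0, truncate=1.0)  # ~3x3x3 support per iteration
        return out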

Vector grid

The tab for the Vector grids looks like the one given in the image below:

The LSM settings are used to analyze the voxel space and compute the particle displacements between two consecutive voxel spaces in time.

o IV size
The Interrogation Volume size defines the size used for the analysis of a single vector, similar to an interrogation window in 2D PIV but extended into 3D space. The user defines the size of the desired cuboid that will be used in the analysis (an odd number of voxels).

The size depends on the seeding density; a minimum cuboid size can therefore be estimated with knowledge of:
- Cppp: the seeding density, i.e. the number of particles per image divided by the number of pixels,

- Np : the number of particles desired in an Interrogation volume,

- R : the voxel resolution,

- ∆z : the measurement volume physical depth.

Assuming a cubic interrogation volume, the IV size can be expressed as:

Nc = ( Np ⋅ R ⋅ ∆z / Cppp )^(1/3)
The 3D LSM needs at least 8-9 particles inside the cuboid of an IV in order to perform well. Thus, the following figure indicates what cuboid size to choose depending on the seeding density Cppp and the number of voxels present in the depth of the volume R ⋅ ∆z.

o IV step
The step defines the overlap of two consecutive cuboids. Enter half the size of a cuboid for an overlap of 50 %.

o Start position
Start position (in voxels) of the analysis in the voxel space. Note that the first voxel will be at the IV size + 1.
o End position
End position (in voxels) of the analysis in the voxel space. Note that the last voxel can at most be the total number of voxels minus the IV size minus 1.

LSM algorithm

The LSM algorithm tab includes different parameters; the tab is shown below:

o Iterations
The iterative procedure is stopped when all parameters have converged or when the number of
Maximum iterations has been reached. The maximum amount of iterations can be set here.
o Use pyramid Scheme
The pyramid scheme can be used to accelerate the computation. The analysis is first performed on a coarse grid, whose results are used as an initialization for the finer one.

o Apply Significance tests


A validation scheme can be applied to the results in order to detect outliers. This is the same scheme as the universal outlier detection.
o Affine parameters to determine
By enabling or disabling these checkboxes, the user can choose whether to compute all the LSM parameters or not. The more parameters to be determined, the more intensive the computation, but the more precise the results.

15.38.3 Results of the analysis


Results of the analysis are plotted in the 3D vector display, allowing the plotting of iso-surfaces and iso-contours. See "3D Display" on page 664.

Further analysis can be performed using the instantaneous vector fields; for example the Vector Statistics recipe supports 3D-3C vector fields.

15.38.4 References
[1] J. Kitzhofer, P. Westfeld, O. Pust, H. G. Maas and C. Brücker. Estimation of 3D deformation and rota-
tion rate tensor from volumetric particle data via 3D Least squares matching. In: Proceedings of the 15th
Int Symp on Applications of Laser Techniques to Fluid Mechanics. Lisbon, Portugal, 05-08 July, 2010.
[2] T. Nonn. Application of high performance computing on volumetric velocimetry processing. In: Pro-
ceedings of the 15th Int Symp on Applications of Laser Techniques to Fluid Mechanics. Lisbon, Portugal,
05-08 July, 2010.
[3] P. Westfeld, H.-G. Maas, O. Pust, J. Kitzhofer and C. Brücker. 3-D least squares matching for vol-
umetric velocimetry data processing. In: Proceedings of the 15th Int Symp on Applications of Laser Tech-
niques to Fluid Mechanics. Lisbon, Portugal, 05-08 July, 2010.

15.39 LIEF Processing


15.39.1 1. LIEF spray analysis
Dantec Dynamics’ LIEF processing for fuel spray analysis strives to determine two images of a fuel
spray, one showing only the liquid phase of the spray and the other one showing only the vapor phase.

1.1 LIEF concept – Laser-Induced Exciplex Fluorescence


The theory of Laser-Induced Fluorescence (LIF) is the basis for Laser-Induced Exciplex Fluorescence (LIEF) and describes the interaction between light (photons) and matter (atoms/molecules). Light of spe-
cific wavelengths can be absorbed by matter, leaving it in an electronically excited state, and after a short
period of time the matter releases the excess energy again by emission of light, which is called flu-
orescence. A detailed description of the theory is beyond the scope of this text and will not be covered.
Laser-Induced Fluorescence is commonly used for diagnostics of combustion and fluid dynamics phe-
nomena. Usually the fluid under investigation is either a liquid (e.g. liquid mixing processes) or a gas (e.g.
air flow or combustion processes). However, in the case of fuel spray diagnostics the fluid appears in both
liquid phase and vapor phase, and Laser-Induced Exciplex Fluorescence (LIEF) is a technique that can be
used to image the liquid phase and the vapor phase separately, but simultaneously in a single acquisition.
The technique relies on achieving fluorescence in different spectral regions depending on whether the fluid is in liquid or in vapor phase, and having two cameras acquire images of the spray. Two different filters are

408
placed in front of the cameras, one transmitting the fluorescence from the liquid and the other one trans-
mitting the light from the vapor.

In practice this is achieved by mixing a small fraction (well-known proportions) of two compounds into the
liquid fuel before injection: one fluorescent dye, or monomer (M), and one ground-state partner (G). A sheet
of laser light of a suitable UV wavelength is then used to illuminate the spray to excite the tracers in question.

Vapor phase:
In vapor phase the monomer can absorb the UV light from the laser, which leaves the monomer in an
excited state M*. Shortly afterwards the excited monomer M* will return to a lower energy state again by
emission of fluorescence which is red shifted with respect to that of the laser light.
M + hν_Laser —> M*
M* —> M + hν_M

Liquid phase:
Also in liquid phase the monomer M is excited by UV light from the laser. Just like in the vapor phase the excited monomer can relax to a lower energy state by the emission of a photon. But in addition M* can
also react with the partner G to form MG* which is a molecule that is stably bound in the excited state but
not in the ground state. The newly formed species MG* is called excited state complex, or exciplex. The
fluorescence from this exciplex is red shifted with respect to that of the excited monomer M* itself.
M + hν_Laser —> M*
M* + G —> MG*
MG* —> M + G + hν_MG
The subsequent fluorescence is recorded by two cameras looking at the same field of view by means of a
dual camera mount or beam splitter arrangement. The camera filters have a transmission corresponding to
the fluorescence spectrum of the exciplex and the monomer, respectively.
The formation of MG* by M* and G is in principle possible also in vapor phase, but since the distance
between the molecules in vapor phase is much greater than in liquid phase this process is very unlikely to
occur. Therefore, the images acquired by the camera with the filter transmission corresponding to exciplex fluorescence are considered to show only the liquid phase of the spray.
The images acquired by the camera with the filter transmission corresponding to monomer fluorescence
on the other hand, will not only show the vapor phase of the flow, but will also have some influence from
the liquid phase. This is referred to as cross-talk and cannot be eliminated completely by the experimental
setup. The effect of the cross-talk is instead minimized by image processing in the software, described in
the sections below.

Note:
In order for the technique to work properly the generated fluorescence should come only from the added
tracers and not from the fuel itself. Real fuels for internal combustion engines, such as gasoline or diesel
fluoresce strongly if illuminated by UV light, and can therefore not be used. Real fuels have to be replaced
by non-fluorescent reference fuels such as Iso-octane, N-hexane or N-dodecane. An example of a suit-
able reference fuel and exciplex tracer system along with the corresponding excitation and detection wave-
lengths are given in the table below, for Diesel and Gasoline applications respectively.

                             Diesel         Gasoline
Fuel                         N-Dodecane     N-hexane
Ground state partner         Naphthalene    Diethylmethylamin (DEMA)
Monomer                      TMPD           Fluorobenzene
Laser excitation wavelength  355 nm         266 nm
Camera filter, liquid phase  550 nm         355 nm
Camera filter, vapor phase   380 nm         295 nm

Table. The table gives an example of possible reference fuel and tracer species along with the corresponding excitation and detection wavelengths for Diesel and Gasoline spray applications, respectively.

15.39.2 1.2 Spatial calibration of two cameras


To be able to do the image processing to remove the cross-talk between the vapor phase and the liquid
phase in the acquired images, it is important to ensure a good pixel-to-pixel overlap between the two cam-
eras. First of all it should be noted that the two views will likely be mirrored with respect to one another. This is taken care of by simply flipping one of the images in Image Format BEFORE the acquisition, so that
left and right (and up and down) in the physical experiment are the same for both camera views.
Once you have ensured the same orientation of the two camera views a spatial calibration can be made in
the following way:

1.2.1 Multi Camera Calibration:


By using a calibration target plate, a multi camera calibration can be created. For further information,
please refer to the Multi Camera Calibration section.
If no calibration target is available, a correspondence vector map can be used to bring the two cameras
into alignment.

1.2.2 Alignment Vector Map


The alignment vector map can be generated using the Cross-Correlation or similar correlation analysis
method. For best results, the source images must contain a texture that has a sufficiently large set of iden-
tifiable features that can be matched between the cameras. A plate covered with a somewhat random tex-
ture can be used. In order to generate the cross-correlation, select one image from each camera using the
space bar, and right click one of the images, choose: Analyze->PIV Signal->Cross-Correlation. In the rec-
ipe set the correlation options. Larger interrogation areas create more stable vector maps, whereas
smaller interrogation areas will be able to describe local warping better.

15.39.3 1.3 Launching the LIEF Processing analysis method


The LIEF Processing analysis method can be launched in two ways:
· It can be launched using two calibrations and two source images

· It can be launched using a displacement vector map and two source images.

15.39.4 1.4 Determining and correcting for the cross-talk


As mentioned above, the images from the camera with the filter for exciplex transmission show only the
liquid phase of the spray, whereas the images from the camera with the filter for monomer transmission,
will not only show the vapor phase, but will also have some influence from the liquid phase.
The purpose of the processing described below is to determine and compensate the vapor images for this
cross-talk.
Since the liquid phase images are considered unaffected by cross-talk, the task will be to determine how
much of the liquid phase image is recorded on the "vapor phase" camera. Once this is known, this portion
(K) of the liquid phase image can be subtracted from the vapor phase image.
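Expressed per pixel (a sketch of the compensation described here):

I_vapor,corrected(x, y) = I_vapor(x, y) − K ⋅ I_liquid(x, y)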
The cross-talk compensation factor K can be set in three different ways.
To determine the cross-talk one needs to acquire images of fluorescence with both cameras, either in a situation where only liquid phase is present (e.g. by placing a glass cell filled with liquid fuel in the field of view) or in a well known spray in which it is known that at a certain location only liquid phase is present. In the recipe this area of pure liquid phase shall be selected, using the red box. This is done in the left hand (Vapor Image) or center image (Liquid Image) in the recipe dialog.
1. The K value is then calculated as the scaling factor between the average image intensity within the area in each respective image. A preliminary result of the Vapor Image compensated for the cross-talk from the Liquid Image is shown immediately on the right hand side of the recipe dialog.
2. The desired K value can also be entered as a numeric value in the text box; this method is recommended for fine-tuning of the K value.

3. For a rougher but more intuitive way of entering the K value, the slider can be used to select the K value.

Adjust the K value until the cross-talk has been completely removed from the target image, using one of
the three methods described above. Once this is done, press OK and the software will run through the entire
selected data set and compensate all vapor images for the cross-talk from the corresponding liquid image,
based on the cross-talk factor K determined in the recipe.
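Stated as a formula (a sketch of the relation described above; the ROI-based estimate of K corresponds to method 1, and the exact internal implementation of the recipe is not documented here):

\[ I_{\mathrm{vapor,corr}}(x,y) = I_{\mathrm{vapor}}(x,y) - K \cdot I_{\mathrm{liquid}}(x,y), \qquad K \approx \frac{\langle I_{\mathrm{vapor}} \rangle_{\mathrm{ROI}}}{\langle I_{\mathrm{liquid}} \rangle_{\mathrm{ROI}}} \]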
If the analysis method was launched with the source images in the wrong order, the images can be swapped
from within the analysis recipe using the "Swap Images" button located below the source images.

15.40 LIF Calibration


15.40.1 Custom properties of the calibration images
To calibrate a LIF setup, images must first be acquired at known experimental conditions and stored as
calibration images (See "Calibration Images" on page 62). The user must specify the experimental con-
ditions as custom properties of the images in the database. Finally all this information is processed by the
LIF software to calibrate the camera pixel by pixel, so LIF signal images can later be transformed into sca-
lar images describing the corresponding scalar quantity (f.ex. concentration or temperature).
For a successful LIF calibration it is mandatory to supply at least two images acquired with different, but
known scalar quantities such as concentration, temperature or whatever the LIF experiment is intended to
provide information about. In the following we will assume that the aim is a concentration measurement,
but it might as well be temperature, pH-value or some other scalar quantity.

It is recommended to include more than two known concentration values in the calibration and also to
acquire multiple images for each of them in order to reduce the effect of random fluctuations.
Optionally the calibration may also include variations in laser pulse energy in order to later compensate for
shot-to-shot variations when processing the actual LIF measurements. If you have a Pulse Energy Mon-
itor connected to an Analog input you need not specify a nominal energy.
When storing the calibration images you may merge them all into a single ensemble or store each series of
images as separate ensembles. If you merge them into a single ensemble nominal concentration values
and so on will have to be entered for each individual image ('Show Contents List' and browse through the
images one by one, entering custom properties as described below).
In the following example calibration images have been stored in separate ensembles, which have been
renamed to clearly indicate nominal concentration and laser energy of the images contained.

Please note:
DynamicStudio does not use the ensemble names for anything; you may leave the default ensemble
names or assign more descriptive names of your own choice. The latter improves readability of the database and makes navigation easier, but is not mandatory.

You are strongly advised to enter nominal concentration and optionally laser energies as soon as possible
after acquiring them. You can quickly acquire a lot of images and easily lose track of which images cor-
respond to which experimental conditions. Ideally you should do this already while saving (See "Cal-
ibration Images" on page 62), but you may of course do it afterwards.
If you haven't already selected the appropriate custom properties, right click each calibration ensemble
and pick 'Custom Properties...' from the context menu:

This will bring up a dialog, where you can specify which custom properties to include with the images:

When the appropriate custom properties have been included, press OK and they will now appear in the cal-
ibration image properties under the heading 'Variables':

Enter the corresponding property values for each ensemble or image.

Please note:
The units for both concentration and laser energy are arbitrary, but you should of course be consistent.
Whatever units you apply here will be used in subsequent LIF Processing.

15.40.2 Performing the calibration


When all nominal concentration values have been entered along with laser pulse energies (if appropriate),
the actual calibration can be performed: Select (Mouse-Click + Space-Bar) all the ensembles containing
calibration images, then right-click one of them (usually the first or the last of them) and pick 'Calibrate...'
from the context menu:

Select 'LIF Calibration' in the 'Calibrations' group and click 'OK' to bring up the LIF Calibration Recipe:

First you can specify which scalar quantity the LIF experiment is measuring: Concentration, Temperature
or pH. This affects only the textual description of results and has no effect on the numerical values. You
must also specify which custom property the calibration should fit to when performing the calibration:

You can in principle pick any available custom property, including user defined properties, but normally
you will choose a custom property matching the scalar type, i.e. Concentration, Temperature or pH.

In the Scalar setup you specify the range of scalar values that you wish the calibration to cover. Initially
you will typically include all the calibration images, but later you may discover that the LIF response is out-
side the linear range (typically due to saturation and/or re-absorption of the fluorescent light). In that case
you may wish to modify the calibration (or make a new one), such that the highest concentrations are
excluded (in which case the subsequent LIF experiments should of course also be limited to this reduced
max concentration).
Similarly 'Laser power compensation' is used to specify the range of nominal laser energies that you wish
to include in the calibration. In general the highest energies will also produce the strongest response to
fluctuating concentrations or temperatures, but you should of course avoid saturating the camera or the flu-
orescent dye. In the worst case scenario too high laser pulse energies may cause photo bleaching, where
the fluorescent molecules break apart under the influence of too high local laser intensity. If you decide to
reduce the max laser power used you need not make a new calibration, you can simply modify the recipe
of an existing calibration such that the highest laser pulse energies are excluded.
In 'Emission input' you need to tell DynamicStudio where to find information about the laser pulse energy.
If you have a pulse energy monitor, specify the analog input channel (0-3) to which it is connected; if not,
select 'Variable' to activate the next drop-down selection and pick the appropriate custom property there
(typically 'Emission').

Please note:
It is not mandatory to supply information about the laser pulse energy. If there are no significant shot-to-
shot fluctuations in the laser pulse energy, you can exclude this from the calibration and subsequent LIF
processing.

The last entry in the LIF Calibration Recipe will be enabled only if the calibration images are acquired in
Double-Frame mode, in which case you may choose to calibrate on the basis of the first (A) or second (B)
frame from each. Whatever you choose the subsequent LIF measurements should of course be acquired
and processed in the same way for the calibration to remain valid. It is generally recommended to use
frame 1 since fluorescence in frame 2 might be disturbed by the first of the two laser pulses (especially
important if the time between the two pulses is short).

You can now click 'Apply' or 'OK' and a calibration dataset will appear in the DynamicStudio database:

Opening this calibration will show one or more curves illustrating the typical response of an average pixel
to variations in the scalar quantity (concentration, temperature or pH):

The actual calibration includes such curves for each and every pixel in the entire image; some of these will
have a stronger response to varying concentrations, some a weaker response.
Each of the lines in the figure above illustrates the response at varying laser intensities and you will normally see multiple lines only if you manually entered nominal laser pulse energies as custom properties.
The vertical bars illustrate the level of variation in grayscale values when nominal concentration and laser
energy are otherwise constant. Click any of these and the corresponding line will be highlighted in red and
the nominal laser energy E will be listed along with the correlation coefficient R for the linear fit. R-values close to
1 indicate good linearity, but in practice you should evaluate linearity visually by looking at the curve and
the bars. To properly estimate whether or not you remain in the linear regime you will probably need the calibration to cover more than the 3 nominal concentration values used in this example. In practice you
should probably aim for a minimum of 4-5 nominal values.
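For reference, R is presumably the standard (Pearson) correlation coefficient of the linear fit; for grayscale values g_i recorded at nominal concentrations c_i it can be written as:

\[ R = \frac{\sum_i (g_i - \bar{g})(c_i - \bar{c})}{\sqrt{\sum_i (g_i - \bar{g})^2 \; \sum_i (c_i - \bar{c})^2}} \]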

15.41 LIF Processing


Using a LIF Calibration dataset together with LIF (Laser Induced Fluorescence) images allows you to
determine the spatial distribution of scalar quantities such as concentration, temperature or pH.
When performing LIF experiments most of the work lies in careful calibration (See "LIF Calibration" on page
412). When performing the actual LIF measurements you of course need to make sure you stay within the
calibration's valid range of concentrations, temperatures, laser energies etc., but the actual processing of
the acquired images is quite straightforward:

Preselect the LIF-Calibration dataset (Click + Space) and then select the ensemble(s) containing the LIF
images that you wish to process

This will open the recipe for LIF processing:

The main thing to consider when performing LIF Processing is how to compensate for fluctuating laser
pulse energies.
If the chosen LIF Calibration does not cover fluctuations in Laser Pulse energy, the recipe section labeled
'Laser power compensation' will be grayed out and not accessible.
Otherwise you may have four options regarding how to measure and compensate for fluctuating laser
pulse energy:

l Region of interest
Choosing region of interest will allow you to specify one or more regions within the camera's field
of view, where the scalar value (concentration, temperature or pH) is known and constant:

In the example above a jet of clean water is injected sideways into a flow of Rhodamine seeded
water. Upstream of the jet a ROI of constant concentration is chosen.
l Custom property value
The recipe option 'Use input value (E) from variable' requires that you have entered the nominal laser
pulse energy as a custom property of the measurement ensemble.
l Nominal value
The recipe option 'Use nominal E-value of' allows you to enter nominal laser pulse energy directly
in the recipe thereby overriding pulse energy values from other sources (if available).
l Value from analog input (Pulse Energy Monitor)
The recipe option 'Use analog channel' should be used if you have a pulse energy monitor
attached to the laser and connected to an analog input in the measuring system. You must spec-
ify which of the available analog inputs the Pulse Energy Monitor is connected to.

Finally you may choose to 'Clamp to calibration limits' and if parent ensembles contain double-frame
images, specify whether to process frame 1 (Image A) or frame 2 (Image B).
It is recommended to always 'Clamp to calibration limits', which will truncate computed scalar values at
the upper and lower limits covered by the calibration. It is not recommended to extrapolate beyond the cal-
ibration's range of validity and doing so will also allow extreme values to pass unhindered. Extreme values
can be caused by unusually dark or bright areas (f.ex. air bubbles) and will at best disturb the display rou-
tine such that nothing can be seen.
If double-frame images are used it is generally recommended to process on the same frame as the one
used when the calibration was performed (assuming this was done using double-frame images also).

Finally press 'OK' or 'Apply' to perform the actual LIF processing and obtain a floating point image describ-
ing concentration, temperature, pH or whatever the experiment is designed to measure:

15.42 1. Mie-LIF SMD - General
Both SMD Calibration and Process require a data structure as below:
Project
---Calibration
------Camera 1# (Calibration Target)
------Camera 2# (Calibration Target)
------Camera 1# (Dye=ON)
------Camera 2# (Dye=ON)
------Camera 1# (Dye=OFF)
------Camera 2# (Dye=OFF)
---Run Z=Zmin
------Camera 1#
------Camera 2#
---Run Z=Zmin + Zstep
------Camera 1#
------Camera 2#
---...
---Run Z=Zmax
------Camera 1#
------Camera 2#
Please refer to the Reference and User Guide of the Mie/LIF SMD measurement for details on how to
acquire images.
NOTE:

The calibration method and SMD process expect that camera 1# is the LIF camera and camera 2# is the
Mie camera. This must not be changed by the user.
To determine the quantitative SMD a PDA is used as a reference to correct for any serious deviations. Dia-
meter and SMD data are collected over a 3D traverse grid enclosing the spray of interest. The math-
ematical model used to fit the PDA data to the SMD imaged result is:

(a)
where:
X = Reference SMD (PDA data)
Y = Mie/LIF ratio
γ = Absorption
a0, a1, α and β are the coefficients to be determined.
The coefficients are determined by scanning over a range of the four parameters mentioned above,
using linear regression, and selecting the values that give the smallest residuals.
The absorption due to the spray between the measurement plane and the camera is also considered in the
SMD calibration and process, and will be calculated by comparing the signal acquired at different ‘Z’ posi-
tions.
In DynamicStudio, the dataset is first calibrated by SMD calibration to determine the parameters shown in
Equ. (a). Once the parameters are calculated, the SMD calibration results can be applied to the dataset to
generate a 2D SMD distribution map by the SMD Process.

15.43 2. SMD Calibration


The typical input data is introduced in section 1; please follow the procedure below to find the right coefficients in Equ. (a).
1. Select all the dewarped images as input in DynamicStudio (Fig. 1, left), including background images
with and without dye as well as spray images acquired at all 'Z' positions.
2. Right-click one of the images residing in the 'Calibration' category, select 'Calibrations' and enter the interface to select 'SMD Calibration', as shown in Fig. 1 (right).

Figure 1 Select ‘SMD Calibration’
3. Press ‘OK’ and a recipe dialog will be displayed consisting of six tabbed pages of input:
· Mie optical setup

Figure 2 Mie optical Setup in SMD Calibration


The Mie optical setup (shown in Fig. 2) contains the efficiencies for all the components related to the trans-
mission of laser light into the measurement volume for the Mie process. The user can select the effi-
ciencies for the mirror, beam splitter, laser and windows. If the image is a double-frame image, the user can
select which frame is relevant. Any component that is missing or deactivated is simply marked with unity
(1.0).
· LIF optical setup

Figure 3 LIF optical setup in SMD Calibration


The LIF optical setup (shown in Fig. 3) is identical to the Mie setup and in most setups it will have identical parameters.
· SMD setup

Figure 4 SMD setup in SMD Calibration
The SMD setup (shown in Fig. 4) contains four adjustment parameters that affect the filtering of the final
SMD result:
Mie coefficient: low pass filter based on the percentage of the Mie mean image value – mainly to remove
strong reflection.
LIF coefficient: low pass filter based on the percentage of the LIF mean image value – mainly to remove
strong reflection as well.
Minimum SMD: the minimum acceptable SMD value.
Maximum SMD: the maximum acceptable SMD value.
· PDA input setup

Figure 5 PDA setup in SMD Calibration
The PDA input setup (shown in Fig. 5) assists in the loading of one or more PDA statistics datasets. The
user can select the amount of acceptable deviation in position and the assignment of coordinate axes. The
user can either press ‘Add’ button to import data from .lda project (data file acquired from Dantec Dynam-
ics PDA system), or press ‘Import’ to import data from .txt or .xls file.
Note
If the PDA results are imported from a text or Excel file, it must contain the required data in the same layout;
no header line is allowed in the file.
· Coordinate setup

Figure 6 Coordinates setup in SMD Calibration
On this page (shown in Fig. 6) the user needs to define the relation between the coordinate system used for
imaging and the one used for the PDA measurement. The user can also define where the nozzle is located in the
acquired images.
Once everything is defined, click ‘Apply’, the SMD calibration will be performed. The calibration results
can be displayed either as Fig. 7 or as numerical (Fig. 8), which will also show the value of those coef-
ficients in Equ. (a).

Figure 7 SMD calibration results display

Figure 8 Numerical display of SMD Calibration results


X, Y, Z: coordinates of each point used for comparison;
SMD ref: SMD results measured by PDA;
SMD Mie/LIF: SMD results measured by Mie/LIF (without calibration);
A0, A1, α, β: correspond to the parameters shown in Equ. (a) – they are displayed as identical for all points;
Corr. & Residual: correlation and residual between the calibrated SMD Mie/LIF results and the SMD PDA results –
they are displayed as identical for all points.

15.44 3. SMD Process


Once the SMD calibration is performed, the calibration results can be applied to spray images acquired under
similar conditions (mainly regarding ambient pressure, ambient temperature and spray optical density) by SMD
processing, which will provide the 2D SMD distribution in the end.
Similarly to the calibration, the user is required to select the dewarped background images (both Dye ON
and Dye OFF) for all cameras, the dewarped spray images at all 'Z' positions and the calibration results.
Right-click one of the selected images residing within the 'Run' and select 'Analyze'; the interface to select an Analysis Method will pop up, as shown in Fig. 9.

Figure 9 Select required inputs and choose ‘SMD Processing’ from ‘LIF Signal’ Categories in the ‘Select
Analysis Method’ interface.
By selecting ‘SMD Processing’, the SMD process interface (Fig. 10 – a-c) will pop up, which is similar as
the interface of SMD Calibration:

Figure 10 (a-c) Interface of SMD Processing
Please refer to the 'SMD Calibration' section for how to set up these parameters. If the hardware setup
is identical to the one used for calibration, just check 'Use optical parameters from calibration data';
the software will then automatically read the setup from the calibration and use it for SMD Processing.
Once SMD Processing is configured, press 'Apply'; the dataset will be processed and the results will be
displayed.

15.45 LII Calibration


This method is used to calibrate LII images using only two reference regions on the images. The advantage is
obvious: the calibration is fast and simple as it relies on two regions with known soot equivalent carbon concentration. Naturally, the signal inside the regions is corrected for light sheet, global energy and local
fluence before it is related to concentration values.
To use this method, first select the file of interest and call the method <LII Calibration by ROI> located in
the 'LII Signal' category. Once the dialog window is available:

1. Select the emission channel; i.e. the analogue input used for recording the laser energy. When no
energy pulse monitor is connected, select the 'N/A' option.
2. Select the calibration type. Typically, the Log-Log calibration is used, but when the combustion process is clean, a Linear-Linear calibration can be used.
3. Enter the equivalent carbon concentration for Region_1 and Region_2 (in A.U. or typically
g carbon/m3).

Dialog window for LII image calibration by ROI methodology.
4. Press the 'Region of interest 1...' button to define the location of calibration zone no. 1, which corresponds to reference signal 1. Repeat the operation for reference zone no. 2.

ROI definition for calibrated LII image processing.


5. Switch to the 'Light sheet setup' Tab and complete the data requested.

Dialog window for the definition of the light sheet specifications.
6. Press the 'OK' button to calibrate the LII image. Make sure the setup is not modified when using
this calibration file for data analysis.

15.46 LII Gas composition calibration


This method is used to calibrate the LII signal according to the gas used, when using image processing based
on Line-of-Sight (LoS) methodology. Naturally, the signal inside the region of interest is corrected for the
light sheet characteristics, global energy budget and local fluence before it is related to concentration
values. The result is a calibration file containing detailed information on LII signal decay time and gas-com-
position dependent coefficients that are used with the 'LII Processing' method with Line-of-Sight cal-
ibration.
To use this method, first complete the 'Properties...' of the calibration files with the exposure time (gate
time) used for each condition (i.e. E = x.xx;). Then multi-select these files and call the method <LII Gas
composition calibration> located in the 'LII Signal' category. Once the calibration dialog window is available:

1. Select the emission and transmission channels; i.e. the analogue input channels used for record-
ing laser energy levels. (Make sure the channels have been calibrated and the analogue signal
rescaled to mJ before the method is used)
2. Enter the aperture size (diameter) of the transmission energy pulse monitor (in mm).
3. Specify the region on the image in which LII signal decay time and gas composition parameters
are calculated and preferably check the 'Non-linear compensation' option for gradient com-
pensation in sooty conditions (See Application Manual for additional information).

LII Gas-composition dialog window/ ... Tab for set up definition.

4. Switch to the 'Light sheet setup' Tab and complete the laser wavelength (532 or 1064 nm), the
direction of light propagation on the recorded LII images (for signal correction before calibration)
and the light sheet characteristics (namely thickness, center position, height, energy profile quality and optical transmission level - see the LII Application Manual for further details).

LII Gas-composition dialog window/ ... Tab for Light and light sheet specifications.
5. Press the 'OK' button to calibrate the gas-composition.

15.47 LII Processing


This method is used to process LII images either using a calibration map defined by the Region-Of-Interest
(ROI) methodology or directly using the on-line Line-of-Sight (LoS) extinction coefficient as calibration value.
For further information on these approaches, please read the LII Application Manual.
To use this method, first select the file(s) of interest and call the method 'LII Processing' located in the 'LII
Signal' category. Once the <LII Processing> dialog window is available (see picture below), select the
emission and transmission channels. When no transmission channel is used, select the 'N/A' option. At
that point, the software automatically switches to the appropriate processing mode and enables/disables
options and the 'Light sheet setup' Tab accordingly.

Dialog window for LII image processing.


Content:

l LII processing procedure using calibrated Region-Of-Interest (ROI) methodology
l LII processing procedure using Line-of-Sight (LoS) methodology

15.47.1 LII data processing by Region-of-Interest <ROI> methodology


When selecting the region-of-interest processing method, most of the options are disabled as the set-up is
read from the calibration file defined with the numerical method <LII Calibration by ROI>. Follow the
instructions given below to complete the data processing sheet:

1. Press the 'Region-of-interest...' button, right-click and draw the region to consider during data
processing. When needed, adjust the (X1, Y1) and (X2, Y2) coordinates of this region of interest
manually. (Note that it is not necessary for this region to correspond to the light sheet position; it
can be smaller or even larger.)

2. Select the reference LII calibration file...

3. ... and press the 'Apply' button to preview the results and 'OK' to accept it.

Example of calibrated LII image.

15.47.2 LII data processing by Line-of-Sight (LoS) methodology


LII processing by on-line Line-of-Sight measurement relates the absorption coefficient to the extinction
coefficient, which by definition is a correct physical assumption when the soot aggregate size is smaller
than approx. 120-140 micrometer.
Complete the sections in the 'Image processing' Tab that have been enabled by the selection of this proc-
essing type:

1. Report the aperture of the transmission monitor (typically 5-10 mm).


2. Specify the gate time value used on the intensifier during the measurements. (When using double
MCP intensification technology, for which the effective gate time is not linearly related to the time
set on the digital synchronizer, refer to the Intensifier Unit user manual.)
3. Check the 'Non-linear compensation' option to enable compensation for energy gradients.

4. Press the 'Region-of-interest...' button, right-click and draw the area to consider during data
processing.
5. Select the composition-calibration file (see related help document)

Switch to the 'Light sheet setup' Tab and complete it as follows:


6. Set the wavelength (532 or 1064 nm) for soot complex refractive index calculations.
7. Specify in which direction the light sheet is propagating; namely from the right to the left of the
image or vice versa. When necessary, use the method <Rotate image> available in the Image
Processing Library. (Note that this parameter is fundamental when calculating and compensating
for light extinction, energy budget and energy absorption.)
8. Complete the light sheet characteristics; namely thickness, peak position, height, energy profile
type and optical transmission of the optics used - make sure the proper dimensions are used!

Tab 'Light sheet setup' for LII/LoS data processing.


9. Press the 'Apply' button to preview the results and 'OK' to accept them and store the LII
map in the database.

15.48 Light-Field Calibration


15.48.1 Image acquisition procedure for calibration
Calibration of the light-field camera requires acquisition of two types of images:

l Acquired images of the calibration target


l White images - images taken while placing a diffuse, white filter in front of the main objective. The
white filter image is used to generate the mask that removes the vignettes between the micro-
lenses.

For illumination an ordinary white light source or lamp is sufficient.

Note
The calibration steps have to be repeated every time the focus or aperture is changed.

Adjusting the camera and lens


Follow the steps below:

l To correctly calibrate the light-field camera, a calibration target is oriented so that the depth of the
volume to be analyzed is included.
l Increase the exposure time of the camera so that the calibration target is uniformly lit but not over-
exposed.
l Begin acquiring images in online mode.
l Adjust the focus of the main objective so that the farthest objects in the target are nearly in focus.
The raw images consist of micro-images, so objects (dots) nearly in focus appear as small blobs.

Raw image of calibration target.

Close-up of micro-images for the nearest dots.

Close-up of micro-images of the most distant dots.

Setting the lens aperture


The aperture of the main objective must be the same as that of the micro-lens, which is typically f8. Using
other apertures will cause either extreme overlap between neighboring micro-lenses or reduce the size of
the micro-images (excessive vignetting). Make certain that the lens you are using has a round or hex-
agonal aperture.

Not correct (Aperture too large).

Ideal (minimal overlap, thin vignettes).

Not correct (Aperture too small).

Acquiring images of the calibration target


Once the camera and lens settings are fixed a series of images can be acquired and thereafter stored to
disk under a Calibration folder.

Acquiring the white image


To distinguish between the micro-lenses and the vignettes between them, a white image needs to be acquired.
This is done by placing a white filter in front of the main objective.

White filter.

The white filter is a diffusion filter that distributes ambient light uniformly to create a uniform white back-
ground. The white image is used to generate a mask that removes the vignettes between the micro-
lenses.
When satisfied with the calibration raw image quality, acquire the white image and store it as a separate run in the same calibration folder in DynamicStudio.

15.48.2 Calibration procedure


The calibration procedure requires three tasks to be completed before any data processing is possible:

l "Micro-lens adjustment" (on page 439)


l "Coordinate system calibration" (on page 440)
l "Depth calibration" (on page 441)
l "Test image focusing" (on page 442)

Select the white image as fixed input and then, on the calibration image, select the "Light-field calibration"
method under "Select calibration method".

Selection of input datasets and selection of calibration method.

Light-field calibration dialog.

By default the raw unprocessed target image is shown and almost all of the fields are disabled.

Micro-lens adjustment
Next to the selection box "Select image" select "Micro-lens alignment image". The micro-lens alignment
ensures that the light-field analysis software knows the position of each micro-lens. The alignment corrects for
any displacement, scaling or rotation.

Procedure:

l Change the zoom of the image to 400% (press the right mouse button over the image).
l Scroll the image to the middle so that the thick green horizontal and vertical lines cross.
l Enter values for the "Len grid offset X, Y" so that the colored rings sit on top of the micro-lens
boundaries.
l Enter a "Len border" value that closely covers the edge of the vignettes of the micro-lenses.
l Press "Auto-adjust MLA". This process fine-tunes the fit and usually takes 10-15 seconds.
l Scan around the image to make certain the colored reticles cover each micro-lens. There should
not be any visible deviations.

Coordinate system calibration


The target plate calibration creates a transform that converts from image coordinates into object coor-
dinates. The object coordinate system lies on the calibration plate but is rotated internally to match the
camera's coordinate system.

l Select "Calibration image".

Procedure:

l Select the target under "Target". This enters the target parameters in the calibration and provides
scaling information. You can modify the target parameters by running the target editor (See
DynamicStudio User's guide).
l Click on "Calibrate target". This will apply calibration on the selected target. If successful a cross-
hair will appear over the middle marker (identifying the 0, 0, 0 point) and green circles around each
successfully found dot.
l Moving the mouse around target will update the 3D positions of each dot.

Depth calibration
The depth calibration is the final step in the calibration process and must be carried out after MLA alignment and target calibration. The depth calibration uses the depth values determined from the target cal-
ibration and calibrates against the measured depth values from the light-field optics. The fit created here
converts the gray level values from the light-field camera to actual depth measurements during processing
of the raw data.
Procedure:

l Select "Depth image"


l Drag the mouse and create a rectangle that covers an area with a smooth gradient. The color
image displayed is an uncalibrated depth map.
l Press the "Calibrate depth" button. This will create a first-order fit that converts the depth maps
values to physically correct values.

l Move the mouse around the image and compare the calculated depth with the corresponding
depth in the target calibration. There should be a close agreement. The average depth error gives
a quantitative measure of the deviation between measured and calculated values.

Test image focusing


Once calibration is complete the software "focus" can be tested to reveal the quality of the calibration.

l Select "Focus image".

The focus can be affected in two ways:

l Moving the sliding bar horizontally.


l Clicking the mouse on the image.

The focus control moves the focus plane to the depth position selected by the user.

15.49 Light-Field Conversion

Data conversion converts raw micro-lens images into the following formats:

l Focus images (Gray-level) - user specified focus plane.


l Full focus - entire image is focused
l Depth image - floating point image of converted depth values.
l Depth image - 8-bit depth map with 0 = farthest, 255 = nearest.

Requires light-field calibration and raw image as input. Select light-field calibration as fixed input.

Light-field conversion dialog box.

15.49.1 Procedure

l Select image format for output


l In case of focus image, select focus plane.
l Select frame if double-imaged.
l Select depth algorithm (standard, depth-path, depth-path-PIV)
l Select lens (all, type 1, type 2, type 3)
l In case of depth map, select Z limit values to include in output.

See section "Advanced Light-Field Settings" (on page 444) for more info on light-field options.
Export: exports dataset to a ray file or ensemble for analysis in another light-field software package.

15.49.2 Advanced Light-Field Settings

Advanced depth settings for standard and depth-path algorithms


The default settings for depth calculation for each provided algorithm are usually sufficient. In some situations it may be necessary to modify one or more settings. Use the Apply button to monitor the effect of
changes.

Nearest resolution: selection of the resolution (2-5)


Pixel step: step size used when traversing micro-images. Shorter steps give higher resolution results but
longer processing times.

Note
On some GPU cards selecting a short step may invoke a watchdog timer and cause the card to reset.

Minimum correlation: correlation between two patches. A Patch is a small area selected from within the
micro-image.
Minimum standard deviation: minimum acceptable std. dev. between steps.
Curvature: maximum curvature between two maxima
Patch diameter: size of patch used in analysis. A patch is a small area within a micro-image.
Patch stride:

Advanced depth settings for depth-path PIV algorithm


A more advanced algorithm for particle detection and depth identification. The depth-path-PIV algorithm searches for
specific particle geometries using image patches that represent likely particle images. This algorithm
works best for applications with low to medium seeding density.

Nearest resolution: selection of the resolution (2-5)
Pixel step: step size used when traversing micro-images. Shorter steps give higher resolution results but
longer processing times.

Note
On some GPU cards selecting a short step may invoke a watchdog timer and cause the card to reset.

Minimum correlation: correlation between two patches. A Patch is a small area selected from within the
micro-image.
Base patch standard deviation: minimum acceptable std. dev. between steps.
Maximum delta:
Curvature: maximum curvature between two maxima
Patch diameter: size of patch used in analysis. A patch is a small area within a micro-image.
Base patch stride:

15.50 Light-Field LSM


Least-squares processing is an algorithm that determines the transformation of a cuboid in the meas-
urement volume from time-step 1 to time-step 2. Each cuboid defines a vector that best represents the
deformation of a cube of data. The method returns vector and gradient information for each cuboid.

Least-squares matching setup dialog.

15.50.1 Parameters:
Cuboid size: the size in voxels of each cuboid.
Shift: the shift in voxels between adjacent cuboids. A shift of 10 indicates that the next cuboid will begin
after an offset of 10 voxels from the current position.
Search factor: the size factor to increase the search volume.
Start position: start of first cuboid from edge of voxel volume.
End position: the last position for a cuboid in the voxel volume.
Maximum iterations: the maximum number of iterations to allow.
Zmin and Zmax: selected depth limits of the voxel volume.
Depth algorithm: select algorithm for Z determination
Lens: select lens usage (all, type 1, type 2, type 3)
The voxel volume dimensions for the light-field cameras are currently fixed at 1004 x 672 x 255.
See section "Advanced Light-Field Settings" (on page 444) for more info on light-field options.

Output of LSM on light-field data (3D vector set).

15.51 Light-Field PTV


Determines the track histories of particles for both time-resolved and double-frame datasets. For best
results the particle translations should be kept to a minimum.

Particle tracking setup dialog.

Threshold: Background gray level to remove.


Particle filtering: limit particle area sizes (blob area)
Search area: 2D searching distance of particle pairs
Minimum particle per track: minimum track length (use 2 for double images)
Track deviations: Maximum displacement between particles
Zmin and Zmax: selected depth limits of the voxel volume.
Depth algorithm: select algorithm for Z determination
Lens: select lens usage (all, type 1, type 2, type 3)
See section "Advanced Light-Field Settings" (on page 444) for more info on light-field options.

15.52 Line Integral Convolution (LIC)


LIC (Line Integral Convolution) is a flow visualization technique proposed in 1993 by Cabral and Leedom
[1].
It takes a flow field (vector map) as input and generates a synthetic image, showing path segments that
randomly located particles would follow:

The example above shows the input vector map and resulting LIC image with option 'Emboss' enabled.

Examples below use the same input, but with 'Emboss' unchecked.

Default seeding density 'Sparse' will seed the image with a limited number of large bright 'particles' on a
dark background, while Seeding density 'Normal' will seed the image with random white noise. In both
cases the underlying flow structure can be clearly seen:

Recipe option 'Auto contrast' will stretch the histogram to better exploit the range of available grayscale
values:

Please note that the streak lines generated show flow direction only, they do not indicate speed.
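For reference, in the formulation of Cabral and Leedom [1] each output pixel is a normalized convolution of an input (noise) texture T along the streamline σ passing through the pixel position x0, using a kernel k of half-length L:

\[ I(x_0) = \frac{\int_{-L}^{L} k(s)\, T\big(\sigma(s)\big)\, ds}{\int_{-L}^{L} k(s)\, ds} \]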

15.52.1 References
[1] Brian Cabral and Leith (Casey) Leedom, "Imaging Vector Fields Using Line Integral Convolution," Pro-
ceedings of ACM SigGraph 93, Aug 2-6, Anaheim, California, pp. 263-270, 1993.

15.53 Make Double Frame


This method is used to transform single-frame images into double-frame images by combining them two by
two. To use this option, select an ensemble with single-frame images and call the method 'Make Double
Frame' located in the category 'Image Conversion'.

As the recipe indicates you will always combine image 1 & 2 into a Double-Frame, but after that you may
continue with image 3 & 4 or combine also image 2 & 3 into a Double-Frame. If the input ensemble contains a total of N single frames, the output ensemble will contain N-1 or N/2 double-frames depending on
your selection (if N is odd you will get N-1 or (N-1)/2).
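To illustrate: with N = 6 single frames the non-overlapping option yields the 3 double-frames (1,2), (3,4), (5,6), while the sliding option yields the 5 double-frames (1,2), (2,3), (3,4), (4,5), (5,6).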
Press the 'Apply' button to preview the result and, if needed, check the 'Reverse frames' option to reverse
the frame order. Click on the 'Ok' button to accept the processing and extend it to the rest of the images
selected.

15.54 Make Single Frame
This method is used to extract a given frame from double-frame records. To use it, select the double-frame
ensemble(s) of interest and call the method 'Make Single Frame' located in the category 'Image Con-
version'. In the dialog window that appears, specify the frame to extract and press the 'Ok' button to start
the extraction.

Dialog window for image extraction from double-frame records

15.55 Make Reverse Frame


This method is used to swap the frames of a double-frame image so frame 1 becomes frame 2 and vice
versa. To use this option, select an ensemble with Double-Frame images and call the method 'Make
Reverse Frame' located in the category 'Image Conversion'.

Press the 'Apply' button to preview the result and click on the 'Ok' button to accept the processing and
extend it to the rest of the images in the selected ensemble.

15.56 MATLAB Link
With the MATLAB Link data can be transferred from the DynamicStudio database to MATLAB's workspace, and data analysis performed using MATLAB scripts supplied by the user. Results can be transferred back to the DynamicStudio database for safekeeping.
MATLAB should of course be available on the PC where DynamicStudio is running, and MATLAB should
have run at least once before attempting to use the link; otherwise DynamicStudio may not be able to find
MATLAB.

15.56.1 Contents
Recipe for the MATLAB Link
Selecting data for transfer to MATLAB
DynamicStudio data in MATLAB's workspace

Image map input


Scalar map input
Vector map input
Generic data input

Parameter String
See "General" on page 460
The Output variable

Image map output


Scalar map output
Vector map output
Generic data output
Metafile graphics output

Troubleshooting, Tips & Tricks

Connecting DynamicStudio & MATLAB


Running DynamicStudio and MATLAB on 64-bit platforms
User interface of MATLAB

15.56.2 Recipe for the MATLAB Link


The recipe for the MATLAB-Link has three tabs. The first one is used to identify which MATLAB script
should be applied to process the data from DynamicStudio:

With the button labeled 'Folder...' you can identify a default folder, where you will normally wish to look for
MATLAB scripts for processing.
To select a script file click the down-arrow at the right-hand side and select 'Browse...'.
Having identified a processing script you can open MATLAB's script editor to investigate or possibly mod-
ify the script by clicking the button labeled 'Edit Script'.
The second entry in this recipe tab is labeled 'Parameter string' and allows you to transfer various param-
eters to the script. This way you can affect processing without modifying the script itself.
The blue area at the bottom of the recipe is used for various status and/or error messages that may be
returned from MATLAB when you attempt to transfer data and/or run a processing script.
The second tab identifies how you wish to transfer the contents of the DynamicStudio ensemble(s) to
MATLAB. You may choose to transfer the datasets one at a time in which case you will be prompted after
processing of the first dataset, whether or not you wish to continue processing the remaining datasets in
the ensemble. If you answer yes each dataset will be transferred to MATLAB one at a time and processed
using the same script, and finally all the results will be stored in a new ensemble.
Alternatively you may choose to transfer all datasets immediately, meaning that all data are transferred to
MATLAB before running the processing script and you will be able to return just a single result.

With the button labeled 'Transfer Inputs to MATLAB' you can try out the transfer without running the script,
which will allow you to investigate the resulting data structure in MATLAB's workspace before starting the
actual processing.
The last of the three tabs in the recipe defines the coordinate system used and is relevant for scalar and
vector maps only. You may choose to use metric units, so positions are in mm and velocities in m/s, or
you may choose pixel coordinates, providing positions and displacements in pixel. Finally you may
choose to transfer no positions at all.

The checkbox at the bottom of this recipe labeled 'Clear all variables on Apply' may be useful to clean up
MATLAB's workspace before each processing session. If you do not do this variables from a previous
MATLAB session may be left behind in MATLAB's workspace and may disturb the present session. In
other cases you may specifically want to have access to data from a previous MATLAB session in which
case you should of course leave this checkbox unchecked.

15.56.3 Selecting data for transfer to MATLAB


If you just wish to transfer data from a single ensemble, simply select that ensemble and 'Analyze', select-
ing the MATLAB link from the list of possible analysis methods.
In some cases you may wish to combine different ensembles, f.ex. when combining different types of
data or combining data from two different cameras.
This requires that you identify beforehand which ensembles you wish to transfer to MATLAB by 'Selecting'
them. This can be done by right-clicking each ensemble and then 'Select' in the context menu, or simply
by left-clicking each ensemble while holding the Ctrl-Key on the keyboard. Selected datasets will be iden-
tified by a small checkmark on the ensemble icons in the DynamicStudio database tree view:

When you right-click in the database tree you also have the option to 'Unselect All'. Doing this prior to iden-
tifying the ensembles you wish to transfer may be helpful to ensure that no other ensembles remain
selected from previous work on your data.

15.56.4 DynamicStudio data in MATLAB's workspace


In the following various MATLAB-specific data types and the like are used without further explanation
since the user is supposed to be familiar with MATLAB beforehand. Please refer to separate MATLAB documentation for explanation about f.ex. Cell Arrays.
When DynamicStudio data is transferred to MATLAB a variable named 'Input' is created in MATLAB's
workspace to hold the data.
Input is a Cell Array and each cell will contain information from one ensemble. If you transfer data from
just one ensemble Input will thus be a [1x1 cell array], while data from multiple ensembles will make Input
a [Nx1 cell array] (where N=the number of ensembles from which data is transferred).
Input{1} is always the parent dataset, where output from MATLAB will be stored as a derived dataset once
processing has been completed. If further ensembles have been selected for transfer they will be stored in
Input{2}, Input{3} and so on starting from the topmost in the database.
Individual cells in a cell array are identified using curly brackets, so to look at data from the first ensemble,
type Input{1} at the command prompt and you may see something like this:
>> Input{1}
ans =
name: 'iNanoSense model 3E'
dataType: 'image'
fromMethod: 'Acquired'
datasetCount: 12
startTime: [2006 6 27 12 39 9.7810]
imageSize: [1280 1024]
imageOffset: [0 0]
gridSize: [1280 1024]
cameraName: 'NanoSense.0'
cameraIndex: 0
sensorSize: [1280 1024]
pixelPitch: [1.200e-05 1.200e-05]
scaleFactor: 1
timeBtwPulses: 5.000e-04
pixelDepth: 8
dataset: [1x12 struct]

The example above shows the transfer of acquired images and all but the last entry are general infor-
mation that apply to all images in the ensemble.
The actual data is a level deeper and can be accessed through the entry named 'dataset'. If you transfer
just one dataset, you can type simply 'Input{1}.dataset' at the command prompt, but if you transferred
more than one image from the ensemble, you will have to specify which image you wish to investigate. To
see f.ex. the 2nd image in the ensemble, type:
>> Input{1}.dataset(2) 
ans =
index: 2
timeStamp: 4
frame1: [1024x1280 uint8]
frame2: [1024x1280 uint8]
Obviously this example shows the transfer of 8-bit Double-Frame images from DynamicStudio to MATLAB.
Please note the MATLAB convention of specifying matrix (image) height first and width second, where
you would normally say WxH, i.e. 1280x1024.
Extracting the two frames in the second Double-Frame to separate variables can thus be accomplished
with the following commands:
>>  Frm1=Input{1}.dataset(2).frame1;
>>  Frm2=Input{1}.dataset(2).frame2;
For Single-Frame images the entry named frame2 would simply not be there.
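As a small illustration (assuming all datasets were transferred to MATLAB immediately, and using the documented fields 'datasetCount', 'dataset' and 'frame1'; the variable name 'AvgImg' is arbitrary), a script could compute the average of all first frames like this:
N = Input{1}.datasetCount;                          % number of images in the ensemble
AvgImg = zeros(size(Input{1}.dataset(1).frame1));   % accumulator, same size as frame 1
for n = 1:N
    AvgImg = AvgImg + double(Input{1}.dataset(n).frame1);
end
AvgImg = AvgImg / N;                                % mean of all frame 1 images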
For other types of data the ensemble header will be similar, but the contents of 'dataset' will of course
change depending on the type of data transferred:

Image maps
Input{n} Nx1 cell array
name char string Name of ensemble
dataType char string 'image' for image maps
fromMethod char string 'Acquired' for acquired images
datasetCount double # of datasets in ensemble
startTime 1x6 double Year, Month, Day, Hour, Minute Second
imageSize 1x2 double Height x Width
imageOffset 1x2 double From lower left corner of image sensor
gridSize 1x2 double
cameraName char string
cameraIndex double Identify which of multiple cameras
sensorSize 1x2 double
pixelPitch 1x2 double
scaleFactor double
timeBtwPulses double
pixelDepth double For example 8, 10 or 12 bit
dataset(n) 1xN struct array
index double Identify this dataset in the ensemble
timeStamp double mSec since start of acquisition
frame1 HxW uint8 -or uint16 for images of >8-bit
frame2 HxW uint8 Only present for DoubleFrames
For scalar imaging images may even be of type double representing f.ex. concentration or temperature
determined in each pixel position.

Scalar maps
Input{n} Nx1 cell array
name char string Name of ensemble
dataType char string 'scalars' for scalar maps
fromMethod char string Analysis method used to create these data
datasetCount double # of datasets in ensemble
startTime 1x6 double Year, Month, Day, Hour, Minute Second
imageSize 1x2 double Height x Width
imageOffset 1x2 double From lower left corner of image sensor
gridSize 1x2 double
cameraName char string
cameraIndex double Identify which of multiple cameras
sensorSize 1x2 double
pixelPitch 1x2 double
scaleFactor double
timeBtwPulses double
dataset(n) 1xN struct array
index double Identify this dataset in the ensemble
timeStamp double mSec since start of acquisition
X HxW double
Y HxW double
S HxW double
Status HxW double

Vector maps
Input{n} Nx1 cell array
name char string Name of ensemble
dataType char string 'vectors' for vector maps
fromMethod char string Analysis method used to create these data
datasetCount double # of datasets in ensemble
startTime 1x6 double Year, Month, Day, Hour, Minute Second
imageSize 1x2 double Height x Width
imageOffset 1x2 double From lower left corner of image sensor
gridSize 1x2 double
cameraName char string
cameraIndex double Identify which of multiple cameras
sensorSize 1x2 double
pixelPitch 1x2 double
scaleFactor double
timeBtwPulses double
dataset(n) 1xN struct array
index double Identify this dataset in the ensemble
timeStamp double mSec since start of acquisition
X HxW double
Y HxW double
U HxW double
V HxW double
W HxW double Only present for 3D vector maps
Status HxW double
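As an illustration of accessing vector map input (field names as in the table above; 'Vmag' is an arbitrary variable name), the in-plane velocity magnitude of, say, the first vector map in the first ensemble could be computed like this:
U = Input{1}.dataset(1).U;          % horizontal velocity component
V = Input{1}.dataset(1).V;          % vertical velocity component
Vmag = sqrt(U.^2 + V.^2);           % in-plane velocity magnitude at each grid point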

Generic column data


Input{n} Nx1 cell array
name char string Name of ensemble
dataType char string 'generic' for generic column data
fromMethod char string Analysis method used to create these data
datasetCount double # of datasets in ensemble
startTime 1x6 double Year, Month, Day, Hour, Minute Second
imageSize 1x2 double Height x Width
imageOffset 1x2 double From lower left corner of image sensor
gridSize 1x2 double
cameraName char string
cameraIndex double Identify which of multiple cameras
sensorSize 1x2 double
pixelPitch 1x2 double
scaleFactor double
timeBtwPulses double
dataset(n) 1xN struct array
index double Identify this dataset in the ensemble
timeStamp double mSec since start of acquisition
ABC Nx1 double Column name assigned by user

15.56.5 Parameter String


An optional string named 'ParamStr' can be available if you choose to enter such a string in the recipe. The
string is intended for various parameters that the script may require to perform the intended analysis. The
string format is chosen because it allows a mix of different parameter types to be transferred together with-
out any prior knowledge or assumptions about the format, but the script will have to evaluate the string in

459
order to extract the desired parameters. If for example the parameter string contain two numbers sep-
arated using normal MATLAB syntax they can be extracted with the following simple commands:
Params=str2num(ParamStr); % Convert string to numbers
Param1=Params(1); % Extract 1st parameter
Param2=Params(2); % Extract 2nd parameter

15.56.6 General
Apart from Input and Parameter String the transfer of data from DynamicStudio to MATLAB will also
create a variable called 'General'.
It is a MATLAB 'Struct' with four fields:
Field name Type Description
General.when string Date and time when data was transferred from DynamicStudio to MATLAB
General.sessionIndex double 1, 2, 3, ..., N when transferring ensemble datasets one at a time
General.positionCoordinates string 'none', 'pixels' or 'metric', corresponding to the coordinate system chosen in the 'Advanced' Tab
General.acquisitionSystemSettingsXML string Unformatted xml-string describing where the data in 'Input' comes from and how it was acquired
The parameter 'sessionIndex' is similar to Input.dataset.index, but the latter refers to the acquisition
index, which may contain gaps in the sequence and does not necessarily start at one.
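A script that needs to know whether positions are supplied in pixels or mm could, for example, branch on 'positionCoordinates' (a small sketch; the possible values are those listed above, and the variable name 'posUnit' is arbitrary):
switch lower(General.positionCoordinates)
    case 'metric'
        posUnit = 'mm';       % positions in mm, velocities in m/s
    case 'pixels'
        posUnit = 'pixel';    % positions and displacements in pixel
    otherwise
        posUnit = '';         % 'none': no positions were transferred
end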

15.56.7 The Output variable


In order to retrieve analyzed data from MATLAB a global variable named Output must be used.
Please remember to declare the variable global before assigning any data to it, by using the following sim-
ple statement in your script:
global Output
Having declared the Output variable as global you can start assigning values to it, choosing from the fol-
lowing list as appropriate:
Field name Type Values Description
Output.name string Ensemble name
Output.type string image, scalars, vectors, generic, figure Optional dataset type
Output.pixelDepth double 8-16 or 64 Pixel depth for image
Output.dataset struct
Output.dataset.frame1 (u)int8, (u)int16 or double Pixel data
Output.dataset.frame2 (u)int8, (u)int16 or double Pixel data
Output.dataset.U 2D-array of doubles Vector data
Output.dataset.V 2D-array of doubles Vector data
Output.dataset.W 2D-array of doubles Vector data
Output.dataset.S 2D-array of doubles Scalar data
Output.dataset.Status 2D-array of doubles Binary coded status value
Output.dataset.(Xxx) int32, uint32, float or double A column named '(Xxx)' will be created in the generic output dataset

Output.type
If field Output.type is present then an output dataset is created as indicated:
Values Dataset class created
image, images or imagemap Image Map
scalar, scalars or scalarmap Scalar Map
vector, vectors or vectormap Vector Map
generic or columns Column data plus figure if one is open when data is returned
figure, picture or metafile Dummy column data plus metafile of figure

Values are not case-sensitive.


If field Output.type is not present then the output dataset's type is determined from field names:

l image when 'frame1' is found


l 3D vectors if U and V and W
l 2D vectors if U and V, but not W
l 1D vectors (Scalars) if only U or S
l otherwise a generic column based dataset is created

Output.dataset

Image Map output


Allowed fields in Output.dataset are frame1 and frame2. Must be 2D arrays of int8, uint8, int16, uint16 or
double. Image width and height are deduced from the dimensions of frame1:
image.width = GetN(frame1)
image.height = GetM(frame1)
Verification of frame2 dimensions is performed and user notified on mismatch.
If Output.pixelDepth is not present pixelDepth is set to 8, 16 or 64 according to array type.
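As a minimal sketch of returning an image (the ensemble name and the inversion are arbitrary examples chosen for illustration, not features of the product):
global Output
Output.name = 'Inverted image';                        % arbitrary ensemble name
Output.type = 'image';                                 % create an Image Map dataset
Output.pixelDepth = 8;                                 % matching the uint8 data below
img = Input{1}.dataset(1).frame1;                      % assumed to be an 8-bit image
Output.dataset(1).frame1 = uint8(255 - double(img));   % example: inverted copy of frame 1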

Scalar or Vector Map output


Allowed fields in Output.dataset are U,V,W,S, and status or Status. Must be 2D arrays of int32, uint32,
single, or double. Grid dimensions are deduced from U (or S) and all other fields are verified against the
array dimensions of U (or S).

Generic output
All fields in Output.dataset will result in a column of the same name and type. Must be 1D arrays of int32,
uint32, single, or double. Both vertical (Mx1) and horizontal (1xN) 1D arrays are allowed, but 2D arrays are

not supported (you can of course return such arrays column by column). Columns do not need to be of
same length.
From within DynamicStudio such a generic column dataset can be displayed numerically or using an XY-plot (See "XY Display" (on page 644)). To do so, select the MATLAB ensemble in the database tree and
click the XY-Plot icon in the toolbar. Double-click the resulting display to choose which values to show
on which axis.
If the XY-plot does not suffice you can also use MATLAB's built-in visualization tools to generate a figure
and save a screendump of such a figure in the DynamicStudio database along with the numerical values:
An optional metafile is supported to save a copy of a MATLAB figure if a figure window is open at the time
data is returned to DynamicStudio. Multiple figures can only be returned if they are shown in the same dis-
play window. If multiple figure windows are open only the most recently accessed will be transferred.
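A sketch of a script returning a single column together with a MATLAB figure could look like the following (the column name 'MeanGray' and the plotted quantity are arbitrary examples):
global Output
Output.type = 'generic';
N = Input{1}.datasetCount;
MeanGray = zeros(N, 1);
for n = 1:N
    MeanGray(n) = mean(double(Input{1}.dataset(n).frame1(:)));
end
Output.dataset(1).MeanGray = MeanGray;    % becomes a column named 'MeanGray'
figure;                                   % an open figure window is returned as a metafile
plot(MeanGray, '-o');
xlabel('Image number'); ylabel('Mean gray level');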

Generic output - metafile only


When Output.type = figure, picture or metafile a column dataset is created with zero columns, but a meta-
file with a copy of figure window contents can be transferred.

15.57 Troubleshooting, Tips & Tricks


Connecting DynamicStudio & MATLAB
In general the installation of DynamicStudio should be postponed until MATLAB has been installed, run,
closed again and the PC restarted. In most cases this will ensure that DynamicStudio can find MATLAB
once installed.
If DynamicStudio cannot find MATLAB or fail to connect to it, MATLAB may not appear in the Windows
registry database as a COM server. Try reinstalling MATLAB and restarting the PC. If this does not help
you can register MATLAB manually, by opening a Command Prompt Window as an administrator:
Click Start -> All Programs -> Accessories -> Right-Click 'Command Prompt' and select 'Run as admin-
istrator':

Answer 'Yes' when the User Account Control prompts you to confirm that 'Windows Command Proc-
essor' should be allowed to make changes to the computer.
In the resulting command window type 'matlab -regserver' and press Enter:

462
MATLAB should now start with just the command window, and the Command Prompt Window above can
be closed.
DynamicStudio should now be able to find and connect with MATLAB, if not you may have to restart the
computer once more.
When DynamicStudio has established connection with MATLAB once, it should be able to do so again
without using the Windows Command Prompt, so you need to do this only once.

Running DynamicStudio and MATLAB on 64-bit platforms


On a 64-bit hardware platform running 64-bit Windows, MATLAB will by default be installed as a 64-bit
application. DynamicStudio 4.00 and onward are 64-bit applications also, but DynamicStudio 3.41 and
older are 32-bit applications. They will run smoothly on a 64-bit platform, but be unable to connect with 64-
bit applications, meaning that the link between the two programs cannot be established.
To overcome this problem you may install MATLAB as a 32-bit application, with which DynamicStudio
should have no problems establishing a connection. According to MathWorks, the suppliers of MATLAB,
32-bit MATLAB on a 64-bit platform is not officially supported, but should work for most users:
Quote (Solution ID 1-1CAT7): ...

"Since MathWorks offers 64-bit MATLAB binaries for both Windows and Linux, we do not
support running 32-bit MATLAB binaries on 64-bit Windows or Linux architecture. However,
even though we do not officially support these configurations, installing and running 32-bit
MATLAB on 64-bit Windows or Linux machines is expected to work for most users."

...endquote.

User interface of MATLAB


For historical reasons and compatibility with older versions, MATLAB will by default be launched with a
simple command window when started from inside DynamicStudio. With later versions of MATLAB a
graphical user interface was introduced and it can in fact be set up to accept connections from other soft-
ware such as DynamicStudio. It requires that you start up MATLAB manually and enter the following com-
mand:

>> enableservice('AutomationServer',true);

-if you use the MATLAB-Link frequently you may choose to add this command to the script startup.m,
which will be executed automatically every time MATLAB is started. If you start MATLAB before attempt-
ing to use the link and enable the automation server as described above, DynamicStudio will establish a
link with the running version of MATLAB instead of opening a new command window. Once data has been
transferred from DynamicStudio to MATLAB you can open, edit and display variables from inside the
graphical user interface and run or debug scripts using MATLAB's integrated script editor.

463
15.58 Moving Average Validation
This method is used to validate vector maps by comparing each vector with the average of other vectors
in a defined neighborhood. Vectors that deviate too much from their neighbors can be replaced by the aver-
age of the neighbors as a reasonable estimate of true velocities.

15.58.1 Using the <Moving-average validation > method


To use this method the ensemble containing vector maps of interest and look for the method in the cat-
egory 'PIV Signal'. Parameters such as:

l The averaging area size (MxN)


l The acceptance factor (Ap)
l The number of iterations (n)
l (and addition al options as well)

are set by the user. (Press the 'Default' button to get values of reference.)

Note that the size of the (M x N) averaging area has no maximum and that M and N can be set inde-
pendently. However, the larger this area, the smoother the vector field becomes which leads to a loss of
resolution. (In most cases, M = N = 3, 5 or 7)
The acceptance factor 'Ap' is the single parameter used to determine whether a spurious vector is found
(inside the area M x N) and shall be replaced by a vector. This 'new' vector is calculated by local inter-
polation using 'n' iterations. (Typical values are Ap = 0.12 - 0.15 and n = 2 or 3).

464
(Left) Instant velocity vector map calculated by Adaptive correlation methodology (16 x 16, 25 % over-
lapping) and (Right) moving-average vector results.

15.59 N-Sigma Validation


N-Sigma validation is very common and straight forward for uni-variate (one-dimensional) measurements,
where you simply compute mean, m, and rms, s, from a series of data and subsequently reject all samples
lying more than N·s from the mean.
Introducing the normalized radius r this means that each sample x is considered valid only if the following
is fulfilled:
x−µ
= r <N
σ
…or equivalently…
x−µ 2
( )
σ
= r2 < N2
where acceptance limit 'N' is user defined while mean m and standard deviation s are computed from a
series of M samples by the classic formulas:
1
µ= ∑ xm
M
1 2
σ2 = ∑ (xm − µ )
M−1
Assuming the data is normally distributed the probability density function is:
   1 x − µ 2
( ) exp− r 2
1 1 1
f x  = exp−  =
  σ 2π  2 σ  σ 2π  2 
In which case, we should find that…
68% of the (valid) samples are within m±1·s,
95% of the (valid) samples are within m±2·s,
99% of the (valid) samples are within m±3·s.
In practice N-Sigma limits of 4-6 will normally include all valid samples and reject outliers even if the data
does in fact not follow a normal distribution.
Please note that the squared normalized radius r2 is used both for validation and in the definition of the
probability density function itself.

For multi-variate (multi-dimensional) measurements you can of course apply N-Sigma validation to each
variable independently, but you may falsely validate outliers, especially if the data is correlated. Instead
the N-Sigma validation is based on a fit to a multi-variate Normal distribution, defined by the probability
density function:

465
f (x) = (2π ) −k /2 |Σ| −1/2 exp− (x − µ)T Σ −1(x − µ)  = (2π ) −k /2 |Σ| −1/2 exp− r 2
1 1
 2   2 
…where…
k is the dimensionality (i.e. the number of values in each sample),
x is the k-dimensional sample vector,
m is the k-dimensional mean vector and
S is the k x k covariance matrix:
 C11 C12 ⋯ C1k 
 
⋯ C2k
Σ= 
C12 C22
 ⋮ ⋮ ⋱ ⋮ 
 C1k C2k ⋯ C kk 
 
… introducing the shorthand C =Cov{X ,X }. Note that C =Cov{X ,X }=Var{X }.
ij i j ii i i i
Σ is sometimes referred to as the dispersion matrix, since it describes the “spread” of the k–dimensional
distribution. The uppercase Σ symbol corresponds to the lowercase σ normally used to describe the
spread of a uni-variate (1-dimensional) normal distribution and does thus NOT represent summation as is
otherwise conventional.
|S| and S-1 is respectively the determinant and (matrix) inverse of S. If the determinant |S| is zero, the
covariance matrix is singular and S-1 does not exist. Generally this means that the true dimensionality of
the samples is smaller than k.

For k=3 we could f.ex. be measuring 3-dimensional velocity vectors (u,v,w) and get:

 µ 
 u   u 
x =  v  , µ =  µv 
 w   µ 
 w 
 C uu C uv C uw 
 
Σ =  C uv C vv C vw 
 C uw C vw C ww 
 
Fitting sample data to a multi-variate normal distribution require us to estimate the mean vector m as well
as the Covariance matrix S:
1
µi = ∑ x im , i = 1 … k
M

C ij =
1
M−1
∑ (x )(x
im − µ i jm − µ j ) , i, j = 1 … k

(…where S means summation as usual…)


For k=1 we get |Σ|=σ2 and Σ-1= σ-2, so the probability density function becomes the classic uni-variate (1-
dimensional) Gaussian above.
Comparing the probability density functions for the uni- and multi-variate normal distributions the squared
normalized radius r2 remains a scalar, but is now computed by matrix multiplications. For the uni-variate
normal distribution r2was validated by comparison with a user defined N2 and that remains the same for
the multi-variate case:
Each sample x is considered valid only if the following is fulfilled:

r 2 = (x − µ)TΣ−1(x − µ) < N 2
-where acceptance limit N is specified by the user.

466
The recipe for N-Sigma Validation looks like this:

There are 4 groups of selections to make:

In the topmost group the user specifies whether or not previously invalidated and/or substituted input data
should be included in or excluded from the analysis.

In the second group the user identifies the temporal window from which the statistics should be extracted.
The default is 'Include All' and the recommended choice if the physical phenomenon under investigation
can be considered stationary at least for the duration of the experiment. If it is not stationary you may
choose to extract the statistics from a sliding temporal window centered around each dataset. If used the
temporal window length must be odd (to ensure symmetry) and the lowest accepted (but not
recommended) window length is 9, meaning current sample plus 4 before and 4 after. Please note that at
the start and end of a dataseries (i.e an ensemble) the sliding window is truncated since there are no data
available before the first or after the last dataset. Consequently the number of datasets included is even
lower and with a nominal window length of 9 you can have as few as 5 neighboring datasets included. Sta-
tistics derived from just 5 samples is obviously not very reliable and very likely to let outliers pass unde-
tected.

The third group of recipe selections will show the various data available in the parent ensemble and let the
user choose which of them to include in the analysis. It is the users own responsibility to make meaningful
choices. In the example above you can f.ex. validate on the basis of pixel displacements or measured
velocities, but it will make little sense to include both in the same analysis since the two are strongly cor-
related and including f.ex. U-velocity will provide no new information if U-displacement is already
included.

467
The last and final recipe setting is the limit for computed N-Sigma above which a sample will be con-
sidered an outlier. Provided you have enough data for reliable statistics reasonable limit values will typ-
ically be 4-6, but please note that the number of samples included in the statistics limits the N-Sigma
value you might find regardless of the presence of outliers. If you compute statistics on M samples the
computed N-Sigma value is limited to:
x−µ M−1
= r ≤
σ M
With M=25 samples you will f.ex. never get r-values above 4.8, so setting the acceptance limit to 5 or
more is guaranteed to find no outliers at all no matter how bad they are.

15.60 Octave Link


The Octave Link data can be transferred from the DynamicStudio database to Octave's workspace, and
data-analysis performed using Octave scripts supplied by the user. Results can be transferred back to the
DynamicStudio database for safe keeping. For general information on Octave see http://e-
n.wikipedia.org/wiki/GNU_Octave.
DynamicStudio is compatible with GNU Octave for Microsoft Windows v3.8.2 which can be downloaded
from here: http://wiki.octave.org/Octave_for_Microsoft_Windows

The implementation and integration of the Octave Link has by design made as close as possible to the
MATLAB link. This means that most scripts developed for MATLAB Link can usually also be used on
Octave Link.

15.60.1 Contents
Recipe for the Octave Link
Selecting data for transfer to Octave
DynamicStudio data in Octave's workspace

15.60.2 Recipe for the Octave Link


The recipe for the Octave-Link has four tabs. The forth tap is named Configuration. Before you can use the
Octave Link DynamicStudio must know where Octave is installed. Click Brows and brows down in to
were Octave is installed. Select the file named Octave.exe and click OK.

468
Ones this has been done DynamicStudio will remember this path.

The first one is used to identify which Octave script should be applied to process the data from Dynam-
icStudio:

469
With the button labeled 'Folder...' you can identify a default folder, where you will normally wish to look for
Octave scripts for processing.
To select a script file click the down-arrow at the right hand side and select 'Browse...'.
Having identified a processing script you can open Octave's script editor to investigate or possibly modify
the script by clicking the button labeled 'Edit Script'.
The second entry in this recipe tab is labeled 'Parameter string' and allows you to transfer various param-
eters to the script. This way you can affect processing without modifying the script itself.
The blue area at the bottom of the recipe is used for various status and/or error messages that may be
returned from Octave when you attempt to transfer data and/or run a processing script.
The second tab identifies how you wish to transfer the contents of the DynamicStudio ensemble(s) to
Octave. You may choose to transfer the datasets one at a time in which case you will be prompted after
processing of the first dataset, whether or not you wish to continue processing the remaining datasets in
the ensemble. If you answer yes each dataset will be transferred to Octave one at a time and processed
using the same script, and finally all the results will be stored in a new ensemble.
Alternatively you may choose to transfer all datasets immediately, meaning that all data are transferred to
Octave before running the processing script and you will be able to return just a single result.

470
With the button labeled 'Transfer Inputs to Octave' you can try out the transfer without running the script,
which will allow you to investigate the resulting data structure in Octave's workspace before starting the
actual processing.

The third of the four tabs in the recipe defines the coordinate system used and is relevant for scalar and
vector maps only. You may choose to use metric units, so positions are in mm and velocities in m/s, or
you may choose pixel coordinates, providing positions and displacements in pixel. Finally you may
choose to transfer no positions at all.

471
The checkbox at the bottom of this recipe labeled 'Clear all variables on Apply' may be useful to clean up
Octave's workspace before each processing session. If you do not do this variables from a previous
Octave session may be left behind in Octave's workspace and may disturb the present session. In other
cases you may specifically want to have access to data from a previous Octave session in which case
you should of course leave this checkbox unchecked.

Octave does log the communication with Octave in the Log windows of DynamicStudio. The checkbox at
the very bottom of this recipe labeled 'Use global Log windows' will if checked add even more information
during data transfer.

15.60.3 Selecting data for transfer to Octave


Please see MATLAB link for a description.

15.60.4 DynamicStudio data in Octave's workspace


Please see MATLAB link for a description.

15.60.5 The Output variable


Please see MATLAB link for a description.

472
15.61 Oscillating Pattern Decomposition
The Oscillating Pattern Decomposition (OPD) method is based on Principal Interaction & Oscillation Pat-
terns (PIPs & POPs) as introduced in 1988 by K. Hasselmann in the field of climatology.
OPD is based on stability analysis of the mean flow and should be applied to time-resolved data only. Any
fluctuation of the flow is considered a kind of perturbation, which can either grow or decay exponentially in
time (if the flow is unstable or stable respectively).

The fluctuating part of the Navier-Stokes equation is modeled by a Langevin equation for a linear Markov
process:
d u(t)
= B ⋅ u(t) + ξ(t)
dt ,
where u(t) is a vector of velocity fluctuations, B is the deterministic feedback matrix and ξ(t) is noise driv-
ing the system (can be interpreted as the influence of smaller, unresolved scales).
The noise ξ(t) forms a covariance matrix Q , while the process itself is characterized by the covariance
matrix Λ :

Λ = u uT , Q = ξ ξT
.
The Langevin equation is a stochastic differential equation, which can be transformed into a Fokker-
Planck equation. It can be rewritten for lag time τ as follows:
u(t + τ) = G(τ) ⋅ u( ) + ζ( , τ) ,
where G is the Green function, which in turn can be estimated from the lag- τ and lag-0 covariance
matrices::

G(τ) = exp(Bτ) = u(t + τ) ⋅ u( ) T Λ −1

The eigenvalues g of the Green function G are related to the eigenvalues


β k of the feedback matrix B as
k
follows:

( ).
g k ≡ exp β kτ

The real part of eigenvalues


β k characterizes the decay/e-fold time τ ek of the k'th mode (should be neg-
β
ative for a stable system), while the imaginary part of eigenvalues k gives the oscillation frequency f of
k
the k'th mode:

τ ek =
−1
, fk =
Im β k( )
( )
Re β k 2π
.
The e-fold time of a mode ( τ e ), describes the time it takes for the signal amplitude to decay by a factor 'e',
−1
i.e. the time it takes for the amplitude to decay from 1 to e .
The spatial modes are the eigenfunctions of the matrix and they are thus the empirically computed eigen-
modes of the system.
Note that B and G matrices are obtained solely from the time series data u(t). If the system is well
described by a linear Markov process, then our estimate of G will be independent of the choice of τ . On
the other hand, if nonlinear effects are important, then G will vary significantly with τ . As long as the linear
approach holds, the knowledge of the Green function can be used for short-time forecasting of the system
behavior (ignoring the noise).

The matrix G is real, but non-symmetric, so eigenvalues


β k and corresponding eigen-vectors, -functions
and -modes are complex.

In theory the analysis could be applied directly on the state vectors of fluctuating velocity components
u(t), but even a modest sized vector map could quickly produce very large covariance matrices, so com-
putations would be come very CPU demanding.

473
To mitigate this DynamicStudio applies the OPD analysis to the outcome of a previous Bi-Orthogonal
Decomposition (BOD Analysis), dramatically reducing the complexity of the analysis.
Stability analysis is performed on the Chronos of the BOD modes and each eigenvalue provides directly
the frequency and e-fold time of a mode. Via the BOD Topos corresponding eigenvectors are mapped
back to the spatial domain, producing complex spatial OPD modes.

Meaningful OPD analysis require time-resolved input. According to Nyquist theory we can resolve frequen-
cies up to half the sampling rate, meaning that the highest frequency is sampled twice per cycle. In prac-
tice OPD analysis require more than 4 samples/cycle and we strongly recommend 6 samples/cycle.
Reliable detection of low frequencies require the duration of the experiment (sampling rate times number
of samples) to cover a reasonable number of cycles. Again you should aim for at least 4 cycles, preferably
6 or more.
The requirement of at least 4 complete cycles with at least 4 samples/cycle leads to a theoretical mini-
mum sample count of 16. In practice reliable results require a much higher number of samples, at least
100 ideally 1000 or more.

Having selected an input ensemble of BOD modes, the OPD Analysis is found in the analysis category
'Vector & Derivatives':

474
The OPD recipe has two groups of settings, one to choose which BOD modes to include or exclude from
the analysis, and another to sort and filter the resulting modes:

The first group 'Mode selection criteria' provides options to include or exclude various BOD modes from
the OPD analysis
All of these are optional and if none of them are selected the OPD-Analysis will include all but the last of
the BOD modes:

The first two selection criteria are related to the Amplitude/Energy of the various BOD modes:

l Setting a lower limit for Modal Energy Fraction will include all BOD modes that contribute more
than the specified fraction of the total energy.
l Setting a lower limit for the Residual Energy will keep adding more BOD modes until the residual
energy is below the specified limit.

475
The third mode selection criterion is related to the Topos and includes BOD modes on the basis of spatial
coherence:

l The Lag-1 Auto-Correlation Coefficient quantifies the degree of similarity between neighboring
vectors and you may set a required limit-value.

The last two selection criteria are related to the Chronos, one including BOD modes, the other potentially
excluding them:

l The Lag-N Auto-Correlation quantifies temporal coherence, based on the largest absolute value of
correlation among a number of small lags.
l The Kurtosis of a Chronos will be high for intermittent modes, where Chronos is almost zero most
of the time, but occasionally very large.

The first four of the criteria are 'Including' and a BOD mode fulfilling any of them will be included.
The last of the five criteria is 'Excluding' and any BOD mode with Kurtosis above the specified limit will be
excluded regardless of any other selection criteria.
To ensure that the OPD analysis can generate results, mode 0-2 is always included and the last of the
BOD modes is always excluded, no matter which selection criteria are enabled and no matter their limit
values.

Typical level of Chronos-Values:


A Chronos with constant values will generate a Kurtosis-value of 1.0.
A Chronos describing a clean, noise-free sine-wave will generate a Kurtosis-value of 1.5.
A Chronos dominated by Gaussian white noise will theoretically generate Kurtosis-values around 3.0
(valid signal may be hiding in such noise).
Kurtosis-values significantly above this will typically come from Chronos with just a few large values
among a lot of very small values. Such Chronos may be caused by and describe only undetected outliers
in the input datasets and the corresponding modes should of course be excluded from a subsequent anal-
ysis. If intermittent phenomena is present or expected in the experiment they too may generate high Kur-
tosis-Values and should of course not be excluded, so in that case this test should be disabled.

The group 'Post processing' contain various options to filter and sort OPD modes after they have been
computed:
'Minimum accepted e-fold time / acquisition timestep' provides you with the option to discard modes with
exceptionally small e-fold times:
If the computed e-fold time is significantly smaller than the acquisition timestep you may question the relia-
bility of the result; How can you reasonably expect to say anything reliable about phenomena that appear
to have time scales significantly below the temporal resolution of your measurement system? Indeed
such modes rarely contain anything but noise and you may thus choose to discard them altogether.

'Sort the OPD mode by ...'


Here you must choose to have the resulting OPD modes sorted by frequency (ascending), by e-fold time
(descending) or by periodicity (descending). If you wish to plot the results in a style similar to a con-
ventional frequency spectrum, sort by frequency. If on the other hand you want to see the most interesting
modes first, you should sort by e-fold time, since generally the longest e-fold times correspond to the most
interesting modes, showing the clearest coherent structures. Finally you may choose to sort by 'Peri-
odicity', which is the product of frequency and e-fold time. In this case the fist modes listed will be the
ones where e-fold time is long compared to the period of the corresponding oscillation.

476
Output from the OPD Analysis
OPD Analysis generates modes, that are stored as separate datasets in an ensemble, but you can not
say beforehand how many there will be. The list of "raw" eigenvalues will of course contain as many
β
values as BOD-modes included in the analysis, but the eigenvalues k typically come in complex-con-
jugate pairs, where the one with negative imaginary part describes an oscillation with negative frequency.
Such modes with negative frequency are discarded since they provide essentially the same information
as their positive frequency counterparts. There may be several modes with zero frequency, which are of
course kept, so the number of OPD-modes will typically be somewhat larger than half the number of BOD-
modes included in the analysis. If you choose to discard modes with very small e-fold time the remaining
number of OPD modes will of course be smaller still.

As in BOD Analysis Mode 0 will always be the temporal mean (DC) of the input datasets followed by the
actual OPD modes sorted as specified in the recipe.

The default display of an OPD mode is the spatial mode, inheriting the datatype of the ancestor datasets;
When the ancestor is a series of vector maps, all spatial modes will be vector maps also. This is similar to
the BOD Topos, but where Topos are real the OPD modes are complex. If you right-click the display of an
OPD mode you can choose to 'Show Complex Vector Box':

The Complex Vector Box includes a slider, with which you can vary the phase angle used in the current
display; You can step forward or backward using PgUp/PgDn to change the Phase Angle in steps of 45
degrees or you can move it back and forth manually with the mouse, animating a full cycle:

477
Apart from the spatial distribution each mode has its own unique frequency and e-fold time, which are
accessible either numerically via 'Open as Numeric' or graphically via 'Open as XY Line Plot':

The first 4 columns contain Mode numbers and corresponding frequencies, e-fold times and periodicity
values. These are common for all modes and remain the same as you step through the modes. Please
note that Mode 0 (the mean) has been assigned an e-fold time of 0, even if the mean by definition is
assumed to remain constant over time and thus should have had an e-fold time of infinity. Note also that
the numerical display has two columns for each velocity component so that you can see both real and
imaginary part of the complex vectors.

By default the XY Line Plot will show frequency, e-fold time and periodicity as separate curves, all as func-
tions of the mode number. In the display options you may choose to change this to show a scatter plot of
e-fold time and/or periodicity as a function of frequency and get a plot such as the one below:

478
In this plot the 'Show Marker' function has been enabled in the context menu in order to highlight the cur-
rent mode with a vertical line. Note also that the y-axis has been set to logarithmic scale and thus cannot
show zero. This means that mode 0 (=the mean) with e-fold time zero is not shown at f=0.

15.61.1 References
[1] K. Hasselmann, (1988):
"PIPs and POPs: The Reduction of Complex Dynamical Systems Using Principal Interaction and Oscil-
lation Patterns"
" Journal of Geophysical Research", 93, D9, 11.015-11.021.

[2] H. von Storch, G. Burger, R. Schnur, J.-S. von Storch, (1995):


"Principal Oscillation Patterns: a Review"
"Journal of Climate", Vol.8, pp.377-400.

15.62 Particle Tracking Velocimetry (PTV)


Where conventional PIV (Particle Image Velocimetry) estimate the average displacement of particle
clusters within an Interrogation Area, Particle Tracking Velocimetry (PTV) aim to determine the frame-to-
frame displacement of individual particles.

479
Particles are detected as grayscale peaks exceeding a certain threshold value, defined as a percentage of
maximum grayvalue supported.
In the example above the threshold is set at 24%, which for an 8-bit image with a max grayvalue of 255,
means that peaks with grayvalue below 61 will be ignored.
When particles have been identified on both frame 1 and 2 we need to solve the correspondence problem;
For each particle on frame 1 find the corresponding particle on frame 2. This is no trivial task and most
likely to succeed if seeding density is low and displacements are small.
To get a rough idea about where to look for corresponding particles a conventional one-pass cross-cor-
relation is performed first; Interrogation Areas are square and the user must choose IA Size and overlap;
Generally small IA's give the best spatial resolution and are least affected by gradients in the flow, but IA's
must be large enough to enclose at least 3-4 particles each and also 3-4 times bigger than the largest
expected displacement. The default IA Size of 64x64 pixels work well in many cases.
If you do not wish to process the entire image, the second tab of the recipe allows you to specify a rec-
tangular ROI (Region-Of-Interest) within which to identify and track particles. You can specify upper,
lower, left and right boundaries of the ROI freely or click the shortcut buttons to get one in the center or one
of the corners:

480
The example below show an example where PTV has been applied to a flow that spirals out from the
center:
The overall spiral flow can be recognized, but there are also many outliers, probably due to false matches
in the attempted solution to the correspondence problem.

15.63 Peak Validation


There are four possibilities to validate on the correlation peak.
Relative to zero peak is only used in Auto correlation
In Auto Correlation the zero peak represent the sum of all gray values in the images. The ideal second high-
est peak is 0.5 height.

Relative to peak 2
This is the relative height of the highest peak compared to that of the second highest. In many cases of
few particles, this number is not very usable. For higher particle concentration, it is often quite satisfactory
to use 1.1. If a very strict validation is required a value as high as 2 can be used.

Peak width, Minimum & Maximum


The peak width is estimated with a 3 line parabolic fit in each direction and evaluated as the square root of
the product wx*wy (for more details see the reference manual). The peak widths are given in pixels.

15.63.1 Interactive setting and finding good parameters


With an open vector map, it is possible to have the Validation box enabled (right-click on the map and ena-
ble). It is at the same time possible to have an active histogram (right-click on the map and choose his-
togram) of the peak widths. This gives the opportunity to find optimal settings in an interactive way.
Tool tip: On the vector map validation box, click on the slider and use the keyboard cursor for fine-tuning.

481
15.63.2 Example using the peak validation for phase separation
The figure below shows an instantaneous snapshot of air bubbles injected into a water tank seeded with
red fluorescent particles. The top image comes from camera 1, which was fitted with a green filter. In prin-
ciple, one should one see the bubbles. But as it can be seen on the histogram to the right there is a large
portion of correlation peaks around 2.3 pixels diameter. The reason being is that the fluorescent particles
also scatter some green light from their surface.
The bottom image comes from camera 2, which was fitted with a red filter. In principle only the seeding
should be seen. It is seen in the histogram, that the bubbles are seen as well. The scalar map in between
the picture and the histogram is the peak widths generated from 400 images using the Average Cor-
relation.

Using the peak validation, based on judgement from the peak width proportions of the Average Cor-
relation, it was possible first to validate vector maps for identifying the fluid velocities represented by the
seeding with correlation peaks sizes around 2.3 pixels. The bobble velocities were found using large inter-
rogation regions and peak validation allowing only large correlation peaks.
Below is shown the stream traces generated in Tecplot from Amtec, from the two phases.

482
15.64 Probability Distribution
This method is used to calculate statistics and related histogram or probability density function inside a
region of interest (ROI) and whole field single record. This record can be a calibrated LIF image (e.g. con-
centration or temperature image), a filtered calibrated image (i.e. a LIF image processed using the numer-
ical routines available in the Image Processing Library) or statistical dataset (i.e. Mean pixel or RMS
pixel).
Content:

l Define and apply a mask


l Distribution processing
l More about ROI data analysis

15.64.1 Define and apply a mask

For detailed information on mask definition and image masking, please refer to the ' Define mask' and '
masking Image map' help sessions.

15.64.2 Distribution processing


The method is located in the category 'LIF Signal'. Select the dataset of interest and run the numerical
method 'Distribution processing'. Complete the dialog window that appears as follows:

483
1. Check the 'Use mask' box if a mask need to be used and select it using the 'Select' button
2. Select the type of distribution to calculate; i.e. Counts (or histogram of values) or Probability
3. Set the scaling parameters by defining the 'Bins' values (typically between 25 and 100) and if nec-
essary the lower/upper limits of data binning.
4. Press the 'Apply' button to previous the results and then 'OK' to accept it. The data are stored in

the database and easily located with the icon.

<Distribution processing> dialog window

Using Image ensemble as mask input


Instead of using a Defined Mask as input, it is possible to select an image ensemble as input. The Image
ensemble most hold the same amount of images as there are datasets in the parent ensemble.

The pixel values of the images in the ensemble indicates if the pixel is masked, here a value of 0 indicates
that the pixel is masked. Any other value will indicate not masked.

Using methods like Image Math, Image Arithmetic and Image process Library you can create any series
of mask images that can be used as mask input for the method.

15.64.3 More about ROI data analysis


In the database, open the 'Distribution' record and right click with the mouse on it to access the 'Display'
options. Click on the 'Display options...' to specify the type of data to graph and then select the 'Chart
type', 'Scatters' and X-/Y-axis scaling options to adjust the view.

484
Data display is quickly adjusted with a few mouse clicks.

To further view the raw data in an Excel-like sheet, press the icon and extract the data of interest using
the 'Copy to clip board' and 'Export as file…' options.

15.65 Profile plot


The profile plot method allows the user to draw an arbitrary line across an image map, a scalar map or a
vector map, and extract the values along this line to a profile plot.

l Detailed description
l A handy shortcut

485
l Numerical Values
l Examples

15.65.1 Detailed description

l Select an input dataset (Parent dataset)


l Right click and select "Analyze"
l On the Analysis Methods tab, pick "Plots" from the list of categories
l Pick "Profile Plot" from the list of applicable methods

l The profile plot recipe dialog is displayed.


l On the "Positions" part of the display, choose start and end point of the line profile to be extracted.
You may directly enter the pixel coordinates into the respective number field for "Position 1" and
"Position 2". Values are separated by comma. You may also activate "Position 1" or "Position 2"
(radio buttons) and use the sliders for X-position and Y-position. A check mark in the "Snap to vec-
tors" button will lock the target positions to the center coordinates of the nearest vector (inter-
rogation area).
l Several lines can be extracted at the same time. Press Add or Remove buttons.
l The parent dataset can be displayed and display options can be chosen. A line on the parent data-
set indicates where the line profile is extracted. Extracted values are NOT interpolated. In images
and scalar maps, the pixel closest to the line is extracted. In vector maps the value from the inter-
rogation area closest to the line is extracted. Small squares frame the interrogation areas from
which values are extracted.

486
l Click "Apply" to preview the plot.
l Click "OK" to generate the plot. Note that all the variables present in the parent dataset will be
extracted along the profile.

487
l Double-click on the plot window, or select "Display Options" from the context menu. The display
options dialog is also accessible from the profile plot recipe dialog by pressing the "Display" but-
ton.
l Any variable can be chosen for x or y-axis. Only one variable at a time can be chosen for x-axis
and multiple variables can be chosen to be plotted against the x-variable.
Right click in the Data column to have the option to Select or Exclude all.

15.65.2 A handy shortcut


You may find the following, alternative way for generating a profile plot more handy.

l Open the input data set (parent data set)


l In the parent data set draw a line by:
SHIFT-click on the start point of the line,
Hold the mouse button down while moving to the end point of the line
Release the mouse button at the end point of the line
l Press CTRL-A (with focus still on the parent dataset display window) to open the "Anlalysis"
dialog
l Pick "Plots" from the list of categories
l Pick "Profile Plot" from the list of applicable methods
l Click "Apply" to preview the plot
l Make changes the plot parameters as desired
l Click "OK" to generate the plot

488
15.65.3 Obtaining the numerical values

You can easily display the numerical values of the profile plot:

l Select the corresponding profile plot dataset in the database window


l In the "File" menu choose "Open as numeric", or
l Press CTRL-U, or
l On the standard toolbar choose click the icon "display as numerical" to display the numerical
values in a worksheet.

To use the data directly in another worksheet application simply select, copy and paste the data.
To export the data to a file:

l Select the corresponding profile plot data set in the database window
l In the "File" menu choose "Export"

15.65.4 Examples

Example 1:
Shear flow phenomenon in a microfluidic Y-coupler. The concentration map shown on the left is of the type
"image map". A vertical cut along the blue line gives the concentration profile (shown to the right) along

489
that line, here plotted with a logarithmic y-axis to better reveal the structures at low concentration levels.
The coordinates of the start and end points are indicated in the title of the profile plot.

Example 2:
Vortex pair in a water flow. The vorticity plot shown to the left is of the type "scalar map". To the right a
plot of the line profile of the vorticity along the black line.

490
15.66 Proper Orthogonal Decomposition (POD)
Proper Orthogonal Decomposition is a powerful method for system identification aiming at obtaining low-
dimensional approximate descriptions for multi-dimensional systems. The POD provides a basis for the
modal decomposition of a system of functions, as in the case of data acquired through experiments. It pro-
vides an efficient way of capturing dominant components of a multi-dimensional system and representing
it to a desired precision by using a relevant set of modes, thus reducing the order of the system.
POD consists of two parts, taking the "snapshot" of a series of data, and then "projecting" the data
through a selection of modes.

POD can be applied to scalar- or vector maps (conventional 2-component or stereoscopic 3-component).

15.66.1 POD Snapshot


The Proper Orthogonal Decomposition, POD, is closely related to Principal Component Analysis, PCA,
from linear algebra and was first introduced in the context of Fluid Mechanics by Lumley [1]. This imple-
mentation of POD applies the so-called "Snapshot POD" proposed by Sirovich [2]:
Each instantaneous PIV measurement is considered a snapshot of the flow. An analysis is then per-
formed on a series of snapshots acquired in the same position and under identical experimental

491
conditions. The first step is to calculate the mean velocity field from all the snapshots. The mean velocity
field is considered the zero'th mode of the POD. Subtracting the mean from all snapshots, the rest of the
analysis operates on the fluctuating parts of the velocity components (umn, vmn, wmn) where u, v & w
denote the fluctuating part of each velocity component. Index m runs through the M positions (and com-
ponents) of velocity vectors in each snapshot and index n runs through the N snapshots so
umn= u (xm,ym,tn).
All fluctuating velocity components from the N snapshots are arranged in a matrix U such that each col-
umn contain all data from a specific snapshot:

If input is a series of 2-D vector maps there will be no w-components of velocity and the lower third of the
U-matrix is simply omitted. Similarly if input is a series of scalar maps only the upper third of the U-matrix
is used, where the fluctuating part of the scalar values replaces the u-components of velocity while v- and
w-components are omitted.
From the U-matrix create the NxN autocovariance matrix C as:

-and solve the corresponding eigenvalue problem:

-where λi and Φi are corresponding eigen-values and -vectors.


Solutions are ordered according to the size of their eigenvalues:

Since the subtracted mean value was calculated from the data itself the N'th eigenvalue will always be
zero (ignoring round-off errors) and can in practice be discarded.
The eigen-vectors corresponding to each of the eigen-values can be combined with the U-matrix to com-
pute the eigen-functions, which in turn are normalized to get the POD modes:

-where Φi is the i'th eigenvector corresponding to the eigenvalue λi and modes are normalized by the dis-
crete 2-norm defined as:

POD Snapshot analysis is implemented in DynamicStudio as a separate analysis method, that take no
input parameters and require a single ensemble of scalar or vector maps as input:

492
As described above the Snapshot analysis will generate all the POD modes and their corresponding ener-
gies (=the eigenvalues) and the Modal energy distribution is shown graphically when the analysis is com-
plete:

It can be shown [3] that the amount of total kinetic energy from velocity fluctuations in the snapshots asso-
ciated with a given POD mode is proportional to the corresponding eigenvalue. The ordering of the eigen-
values, eigen-vectors and eigen-functions therefore ensures that the most important modes in terms of
energy are the first modes. This usually means that the first modes will be associated with large scale
flow structures.
In the example above modes 1 & 2 are clearly dominating, while modes 3-8 may be relevant and the drop
and subsequent flattening of the curve suggests that modes 9 and up may simply describe noise. This
means that modes 1 & 2 will be suitable for describing "typical" fluctuations, but if a specific snapshot con-
tain some kind of unusual fluctuation this might be better described by a higher order mode.

493
15.66.2 POD Projection
Each of the snapshots from which the POD modes were determined can be expanded in a series of the
POD modes with expansion coefficients an for each POD mode n. The expansion coefficients, also called
POD coefficients, are determined by projecting the fluctuating part of the velocity field onto the POD
modes:

-where each of the POD modes φi occupy a column in the mode matrix Ψ:

Knowing the POD coefficients an we can reconstruct the corresponding velocity vectors by expansion:

-since the first modes contain the most energetic parts of the flow, we may choose to exclude the higher
modes from the reconstruction (assuming they represent noise), or we may exclude the lowest modes
from the reconstruction if we are looking for medium or small scale structures in the flow.

In DynamicStudio POD Projection is implemented as a separate analysis, that require two inputs, the par-
ent vector or scalar maps and a set of POD modes from POD Snapshot onto which you wish to perform
the projection.

Normally the input scalar or vector maps will be the same as the ones from which the POD modes were
generated, but any scalar or vector map can be projected onto a set of POD modes provided it has the
same grid size and dimensionality (scalar, 2-D or 3-D vector) as the modes. (This will of course make
sense only if the input data and the POD modes describe similar flows).

494
The POD Projection contain two groups of analyses, one to perform actual reconstruction (projection) of
the input data and another to investigate the modes and their contributions to reconstructions over time
(i.e. the datasets in the parent ensemble).

Reconstruction
When reconstructing input scalar or vector maps you may choose to specify exactly which modes to
include in the reconstruction or you may choose to have a specified fraction of the total energy recon-
structed:

l Enter a comma separated list of modes (f.ex. 0-2, 4-5, …). Please remember mode 0 if you want
the mean flow included.

... OR ...

l Specify an energy fraction (the software will include as many modes as required to recover ≥ the
requested energy fraction).

-When projecting by energy fraction the default is to include mode 1 and up until the requested energy frac-
tion has been recovered, but you may also choose to 'Sort by dominance', meaning that for the recon-
struction of the current input the software will take the mode with the largest POD coefficient first even if it
is not mode 1. If this does not provide the requested energy fraction the mode with the second largest
POD coefficient will be included, then the third largest and so on until the requested energy fraction has
been recovered.

The example above shows an input vector map (with the mean subtracted) and the same vector map
reconstructed from the first two modes of a POD Snapshot.

Modal investigation
Instead of reconstructing the input data you may choose to investigate the modes themselves or their con-
tributions to the reconstruction of individual snapshots:

l 'Extract modes only' will extract the modes from the snapshot dataset and store them as a con-
ventional ensemble that you can browse trough.
With this option the parent data are in fact not used, but even so it is important that the number of
input datasets match the number of modes.
This is easily ensured by choosing the ensemble from which the modes were generated as par-
ent.

495
Choosing a different parent is possible, but makes little sense since the parent is in fact not used
when extracting the POD modes.

Examples:

l 'Time history of POD coefficients' will project the parent datasets onto the modes, but return the
POD coefficients instead of performing the reconstruction.
The results are shown in an X/Y-plot, allowing you to investigate how much each mode con-
tributes to each snapshot over time or how modes relate to one another:

The example above illustrates the POD coefficients for modes 1 & 2 when a series of vector maps are
projected onto the POD modes. In this case the input data are obviously time-resolved since a cyclic
behavior can be clearly seen. The POD as such does not require Time-Resolved input, but a plot as the
one above makes little sense if it is not.

496
The example above shows the exact same data as before, but plots POD coefficients for mode 1 vs
mode 2 instead of plotting them both vs time.
This so-called phase portrait will help identify mode pairs describing cyclic phenomena even if the input
data is not time-resolved.

15.66.3 References
[1] J. L. Lumley (1967):
"The structure of inhomogeneous turbulent flow".
-In A. M. Yaglom and V. I. Tatarski, editors:
"Atmospheric Turbulence and Radio Wave Propagation", pages 166-178.

[2] L. Sirovich (1987):


"Turbulence and the dynamics of coherent structures. Part I: Coherent structures."
Quart. Appl. Math., 45(3):561-571.

[3] K. Fukunaga (1990):


"Introduction to Statistical Pattern Recognition".
Academic Press, 2nd edition.

[4] P. Holmes, J. L. Lumley & G. Berkooz (1998):


"Turbulence, coherent structures, dynamical systems and symmetry".
Cambridge monographs on mechanics. Cambridge University Press.

[5] J. M. Pedersen (2003).


"Analysis of Planar Measurements of Turbulent Flows".
PhD thesis, Department of Mechanical Engineering, Technical University of Denmark.

[6] K. E. Meyer, D. Cavar & J. M. Pedersen (2007):


"POD as tool for comparison of PIV and LES data".
7th International Symposium on Particle Image Velocimetry. Rome, Italy, September 11–14, 2007.

497
15.67 Range Validation
Any vector map can be validated against a user defined expected range of velocities. This is done by
selecting the ensemble containing the vector maps in question and then select the analysis method
'Range Validation' in the category 'PIV Signal'. In the recipe you can specify acceptable vector length as
well as min and max values for each of the velocity components of the vector map:

-Apart from the in-plane velocity components U & V Stereo PIV analysis will also estimate the out-of-
plane velocity component W and for such a dataset you can of course specify upper and lower limits for
acceptable W-values as well. Suitable limit values are often determined best by simple trial and error. Try
various values, press 'Apply' to see how they affect the vector maps and when you're satisfied press 'OK'
to validate the remaining vector maps in the parent ensemble.
Here's an example of a vector map that has been validated with the settings above: Invalidated vectors
are color coded in red, while the rest remain blue to indicate that they are deemed valid vectors:

Please note:
Range Validation does NOT substitute invalid vectors with an estimated guess for the correct velocity. To
do this you need to apply yet another validation method such as Moving Average Validation.

498
15.68 Rayleigh Thermometry
Rayleigh thermometry strives to determine temperature distribution in e.g. flames and rely on Rayleigh
scattering from the molecules in a gas. Rayleigh scattering is orders of magnitude weaker than e.g. Mie
scattering and generally require an intensified camera to be detected at all.

15.68.1 Rayleigh theory


Rayleigh theory describes the scattering of light (or other electromagnetic waves) from particles sig-
nificantly smaller than the wavelength. A detailed description of the theory is beyond the scope of this text
and will not be covered.
Rayleigh Thermometry depends on light-scattering from the molecules of a gas, where the number density
(molecules/m3) is the governing factor: In general a gas will expand with increasing temperature, meaning
that the number density and thus the intensity of scattered light will decrease. Measuring the intensity of
scattered light at a known temperature and monitoring how that intensity changes allows us to determine
corresponding changes in temperature. The scattering intensity is of course affected by many things other
than temperature (e.g. laser wavelength, scattering angle and more) and Rayleigh Thermometry is gen-
erally applied assuming or requiring most of these things to be constant.
The following expression may be used to describe the response to some parameters, which cannot
always be assumed constant:

-where subscript ‘Measure’ refers to the measurement image and ‘Reference’ to a reference image where
the temperature, T , is known as is all other parameters that are expected to change. The var-
Reference
ious symbols used are: ...

Symbol Meaning Unit Comment

I (Greyscale) Intensity [-] The “raw” measurement for all pixels.

T (Absolute) Temperature [ Kelvin ] The intended result for all pixels.

σ Scattering Cross Section (describes [ - ] or [a.u.], Typically relative to N .


2
the scattering efficiency of different but consistent May vary from reference to actual measurement,
gases and gas mixtures) but remain constant within each image.

p (Absolute) Pressure [ Pa ] or [a.u.], May vary from reference to actual measurement,


but consistent but remain constant within each image.

E (Laser pulse) Energy [ mJ ] or [a.u.], Spatial distribution may vary, but being unable to
but consistent detect this we can only compensate for shot to
shot variations in total pulse energy.

Please note that all parameters except temperature enter as ratios so you need not use any specific unit
as long as you are consistent. Parameters that remain constant need not be known at all since the ratio
will just be 1.
If all but temperature remains constant Rayleigh Thermometry will in principle boil down to the following
very simple formula:

In practice both reference and measurement image will contain gray-value contributions from other
sources than Rayleigh scattering and these may severely disturb results if ignored.
The two main error sources are Background noise (dark current etc) & Mie scattering from dust particles
and similar contaminants in the measuring area (typically much stronger than the Rayleigh scattering

499
since the dust particles are much, much bigger than the molecules and thus scatter much more of the
incoming laser light).
Assuming the Background Noise can somehow be measured it can be subtracted and a modified Ray-
leigh expression becomes:

The Rayleigh Thermometry recipe supports only background subtraction, while Mie scattering may be
removed using existing image processing techniques in DynamicStudio prior to entering the Rayleigh Ther-
mometry recipe. 

15.68.2 Rayleigh Thermometry analysis in DynamicStudio


The reference image should be stored in the DynamicStudio database as a Calibration image and be
selected for analysis before moving to the ensemble containing measurement images and from there enter
the Rayleigh Thermometry recipe:

-In this example preprocessing has been applied to a series of reference images in order to get a single ref-
erence image. You may also select an ensemble with multiple images in which case DynamicStudio will
compute the mean of all these images and use that as the reference image.

500
The simplest Rayleigh calculation is applied by setting all but the temperature to 'Constant' and attempt-
ing no background subtraction:

501
Top: Reference image with constant temperature of 300 K.
Bottom: Measurement image with higher flame temperature and particle images in the surroundings.

Top: "Raw" result of the Rayleigh analysis produce extreme temperatures due to Mie scattering from the
particles.
Bottom: Rayleigh results clamped to the range 300 K - 1200 K to reduce the effect of Mie scattering while
keeping most of the information inside the flame.

Rayleigh analysis in Region of Interest


The recipe allows you to perform the Rayleigh analysis in a specified region of interest instead of always
processing the entire image:

To process a subset of the full image select 'Region of interest' and click the button 'Select ROI...', which
becomes active. In the resulting dialog identify the region in which you wish to perform Rayleigh analysis.

502
Selecting Frame 1 or 2 from a Double-Frame image
If the input images are double-frame you may also specify whether to process Frame 1 or 2:

-Please note that it is highly unusual and not recommended to use double-frame imaging for Rayleigh ther-
mometry; Double-frame imaging is normally used in order to investigate the dynamics of rapidly changing
phenomena and the image intensifier required to detect Rayleigh scattering will typically not be able to sup-
port the very short time between the two images in a double-frame.

Temperature, Scattering cross-section, pressure & Laser pulse energy


The central part of the Rayleigh Thermometry recipe is the most important part where you set up the anal-
ysis:

A reference image has to be pre-selected before you can even enter the recipe. If more than one image
was pre-selected a drop-down button is available allowing you to specify which of the pre-selected images
to use as reference image.
The temperature of the reference image must be specified in Kelvin and is the only mandatory parameter
in Rayleigh thermometry.
By default Scattering cross-section, Pressure and Laser pulse energy will be assumed constant, meaning
that they are assumed to be the same in both reference and measurement image.
If these parameters are not constant you may simply type in a value for the measurement image. The
moment you do so the corresponding parameter will be enabled for the reference image, allowing you to
type in a value there also. Please note that these three parameters are in arbitrary units since the impor-
tant thing is the ratios between measurement and reference values, not the values as such. If for example
you know that the gas in the measurement image scatters 5% more light than the gas used when acquir-
ing the reference image, you may simply enter scattering cross-sections as 1.05 and 1.00 for meas-
urement and reference image respectively.
If you have already added some of the parameters as Custom Properties of the selected image ensem-
bles there is an optional, and quicker way of filling in the information in the Rayleigh Thermometry recipe.
Simply make sure that the box ‘Use custom properties’ is checked, and the software will automatically
retrieve all the information that you have previously entered. This option is selected by default. If you need
to make adjustments to some of these parameters when you do the analysis you can un-check the box
again, and you will then be able to edit all the values used for the analysis. Should any information be
missing, you are of course also allowed to fill in the final details in the recipe manually to make the table
complete.

For the Scattering Cross-Section you may also look up values from a library of known gas species and
mixtures by clicking the drop-down box at the top of each column:

Just pick the gas species or mixture you're using and the corresponding scattering cross section will be
filled in accordingly. Do the same for the reference image, specifying the scattering cross section, either
by typing it in directly or by identifying the gas or mixture used.
If the gas species or mixture you're using is not in the list you may type in the scattering cross section
manually or you may enter the library of known gas species and mixtures to create an entry describing
name and properties of your gas. To enter the Species and Mixture Library press the button labeled '...' in
the left hand side of the Rayleigh recipe.
For the Laser pulse energy you may of course choose 'Constant' or enter a nominal value, but you have
two more options, Region Of Interest or Analog Input:

The ROI (Region Of Interest) selection can be used if there is an area within the camera's field of view in
which you know what the temperature is. When you've selected ROI in the drop-down box the button
labeled 'Select ROI...' will be enabled allowing you to identify the region in which the temperature is known
and constant. The temperature must be entered in the box to the right of the button. Within the ROI aver-
age greyscale values are computed for both measurement and reference image. Combining these with
specified temperatures (and other parameters) provides an estimate of laser pulse energy which is then
used for Rayleigh analysis in the entire image.
The analog input option (named 'Input #1' in the example above) will be available only if analog waveforms
have been preselected, matching the measurement and reference images respectively. The analog wave-
forms are assumed to come from a Pulse Energy monitor, mounted on the laser to measure the energy of
each and every laser pulse. In practice the waveforms are lowpass filtered and the maximum value of the
filtered signal is assumed proportional to the laser energy. Comparing analog waveforms for the reference
and measurement image we can thus compensate for shot-to-shot variations in the laser pulse energy.
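The sketch below illustrates the kind of processing described above (low-pass filter the analog waveform and take the maximum of the filtered signal as a relative pulse-energy estimate). It is an illustration only; the function name and the moving-average filter length are assumptions, not the DynamicStudio implementation:

```python
import numpy as np

def relative_pulse_energy(waveform, kernel=15):
    """Estimate relative laser pulse energy from an energy-monitor waveform.

    The waveform is low-pass filtered with a simple moving average and the
    maximum of the filtered signal is taken as proportional to pulse energy.
    (Filter type and length are assumptions for illustration.)
    """
    waveform = np.asarray(waveform, dtype=float)
    filtered = np.convolve(waveform, np.ones(kernel) / kernel, mode="same")
    return filtered.max()

# Shot-to-shot compensation: compare the reference and measurement pulses, e.g.
# energy_ratio = relative_pulse_energy(meas_wave) / relative_pulse_energy(ref_wave)
```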

Noise suppression
Background noise is an important error source in Rayleigh thermometry analysis and removing or just
reducing it may improve the quality of results significantly. DynamicStudio offers three different methods
for background subtraction, all accessible via the group 'Noise suppression':

Apart from 'Not used' you may enter a fixed greyscale value, which will then be subtracted from all pixels
in both reference and measurement image.
You may also choose 'ROI' which will allow you to identify a region of interest, where the camera sees
only background and no laser light. The average of all pixels within this ROI will be computed and sub-
tracted from all pixels in both reference and measurement image.
If a separate background image has been recorded (e.g. with the lens cap on) and stored as calibration
image you may preselect this along with the temperature reference and then pick the option 'Use image' to
subtract background image from both reference and measurement image. This is the only way to subtract
varying greyscale values from pixels in different parts of the image. Selecting 'Use image' will also enable
the drop down list to the right, allowing you to specify which among multiple images is to be used as back-
ground image.

Output temperature limits


A "raw" Rayleigh thermometry analysis can easily generate very high or very low temperatures that are
not physical. Mie scattering from particles may for example appear as bright spots in the image, which
Rayleigh analysis will interpret as extremely cold areas. Truncating results to be within physical limits is
useful to prevent such extreme values from disturbing the final result. Switching on 'Clamp to limits' will
allow you to specify both upper and lower limits at which results will be truncated so no temperatures will
be accepted outside the specified range:
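Putting the pieces together, the following is a minimal sketch (not the DynamicStudio implementation) of the pixel-wise calculation with background subtraction and clamping; the array and parameter names are hypothetical:

```python
import numpy as np

def rayleigh_temperature(I_meas, I_ref, T_ref, background=0.0,
                         sigma_ratio=1.0, pressure_ratio=1.0, energy_ratio=1.0,
                         clamp=(300.0, 1200.0)):
    """Pixel-wise Rayleigh temperature estimate (illustrative sketch).

    sigma_ratio, pressure_ratio and energy_ratio are measurement/reference
    ratios; clamp truncates non-physical results (cf. 'Clamp to limits').
    """
    num = (I_ref - background).astype(float)
    den = (I_meas - background).astype(float)
    den = np.where(den <= 0, np.nan, den)          # avoid division by zero
    T = T_ref * (num / den) * sigma_ratio * pressure_ratio * energy_ratio
    return np.clip(T, *clamp)
```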

15.68.3 Species and Mixture Library


DynamicStudio contains a library of known gas species and mixtures with physical properties such as
scattering cross section. A number of predefined gas species and mixtures exist, which you cannot
change, but you can add your own species and mixtures and edit those freely.
The Species and Mixture Library can be entered from the recipe of Rayleigh Thermometry (or other anal-
ysis methods relying on data from the library) or you can access it directly from DynamicStudio's Tools
menu:

When you first enter the Species and Mixture Library you will see the list of known species and their scat-
tering cross sections:

If you click the Tab labeled 'Species mixture' you will switch to the library of known gas mixtures:

In the Species library you can see the list of preset species available and their corresponding scattering
cross-sections (for light at 532 nm), relative to that of gaseous nitrogen, N₂.
You can add new species to the library by typing the name of the new species in the left hand side of a
new line at the bottom of the list and then type in the scattering cross-section (relative to that of N₂) in the
right column. All species in the library are easily accessible from e.g. the Mixtures library.
In the Mixtures library you can see the available preset gas mixtures, what species they are composed of
and at what mole fractions. You can also create new mixtures and add to the library. A mixture is created
by clicking ‘New’ and typing a name on the Species Mixture Name line. Then select the desired species
under Mixture Component (these are the species found in the Species library) and type the mole fraction of
the species under Mole Fraction. The relative scattering cross-section of the gas mixture is updated
online, and displayed in the top right of the dialog. The preset mixtures cannot be changed by the user, but
you can make a copy of a preset mixture by clicking ‘Copy’. In the copy you can then make any changes
you like.
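For reference, the relative scattering cross-section of a mixture is the mole-fraction-weighted sum of the species cross-sections, which is presumably how the value shown in the dialog is computed:

$$
\sigma_{mix} = \sum_{i} x_i\,\sigma_i
$$

where x_i is the mole fraction and σ_i the relative scattering cross-section of species i.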

15.69 Reynolds Flux
Reynolds Flux calculations are performed in two steps. First, instant Reynolds flux maps are derived and
then the user calculates the averaged map. Although no physical information is contained in the instant
Reynolds flux map, this feature is introduced to enable the user to check convergence by calculating Rey-
nolds flux maps e.g. with the first 250 datasets, then with 500 datasets, etc.
Statistically 5,000 instant maps are needed to reach a 95% confidence interval. For such heavy cal-
culations, it is recommended to use distributed analysis (DA). Experience shows that 1,000 instant maps give a
good picture of the Reynolds fluxes, but this depends entirely on the flow properties.

15.69.1 Image re-sampling


To calculate Reynolds flux, a PIV/LIF set-up is required; i.e. velocity maps and re-sampled scalar maps.
Re-sampling of the scalar map is necessary as this step describes the alignment of two camera views.
(Consult the 'Resampling methods' help for further information.)

15.69.2 Reynolds flux calculations


To calculate Reynolds flux, the average velocity and instant/average re-sampled scalar maps must be cal-
culated first. Select all velocity maps (or part of them) and choose the 'Reynolds flux' method. Complete
the dialog window that pops up with the first scalar map and then the averaged maps. Click on the 'Apply' but-
ton to view the preliminary result and accept with 'OK'.

Multi-select the Reynolds flux maps (or part of them) and, using the 'Mean' method in the Statistics cat-
egory, calculate the averaged Reynolds flux map.
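In terms of the quantities involved, each instant Reynolds flux map is the product of the velocity fluctuation and the scalar fluctuation in every grid point, and the final map is the mean of these products, i.e. an estimate of <u'c'>. A minimal sketch of this two-step calculation, assuming NumPy arrays for the maps (the function and variable names are hypothetical, not DynamicStudio internals):

```python
import numpy as np

def instant_reynolds_flux(u, c, U_mean, C_mean):
    """One instant Reynolds flux map: (u - <u>) * (c - <c>) per grid point."""
    return (u - U_mean) * (c - C_mean)

def reynolds_flux(u_maps, c_maps):
    """Average the instant maps to approximate the Reynolds flux <u'c'>."""
    U_mean = np.mean(u_maps, axis=0)
    C_mean = np.mean(c_maps, axis=0)
    inst = [instant_reynolds_flux(u, c, U_mean, C_mean)
            for u, c in zip(u_maps, c_maps)]
    return np.mean(inst, axis=0)
```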

15.70 Region of Interest (ROI) Extract


ROI Extract method can be used to extract a region of an image. The region to extract is specified by a rec-
tangle as shown in the screenshot below and the resulting image will have the same height and width as
the rectangle specified. If the ROI rectangle is rotated the output image will be rotated when compared to
the input image. When the region is being extracted it is done using the specified interpolation method.
There exist three different interpolation methods (Nearest neighbor, Linear and Cubic).

The ROI rectangle can be manipulated either by mouse interaction or by text input in its property dialog.
The following paragraphs describe the two approaches.

15.70.1 Manipulating the ROI rectangle using the mouse.


The position, size and rotation of the ROI rectangle can be changed by mouse interaction. The entire ROI
is moved by using the left mouse button to drag the ROI to its new location. When the ROI is selected or
while hovering the mouse over the ROI rectangle its manipulation handles are displayed as illustrated in the
image below. By using the mouse to drag these manipulation handles to a new location it is possible to
change the ROI rectangle. If the shift button is pressed while dragging a corner manipulator the center of
the ROI is kept at its current location. If the shift button is pressed while dragging the rotation manipulator
(the right circle in the image below) the rotation angle is kept to a multiple of 45°.

15.70.2 Setting the ROI rectangle using the property dialog.


If right clicking the ROI rectangle its context menu will be shown (left image below). When selecting the
‘Properties…’ menu item the property dialog is shown (right image below):

In the property dialog the ROI rectangle can be altered by entering the desired values for position, rotation
and size.
Hint: If the angle is set to zero and the interpolation method is nearest neighbor the ROI extract can be
used as a simple cropping tool.
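As an illustration of what the extraction amounts to (not the DynamicStudio implementation), the sketch below uses SciPy's affine transform to cut out a possibly rotated rectangle with a selectable interpolation order (0 = nearest neighbor, 1 = linear, 3 = cubic); the rotation sign convention is an assumption:

```python
import numpy as np
from scipy.ndimage import affine_transform

def extract_roi(image, center_xy, size_wh, angle_deg, order=1):
    """Extract a (possibly rotated) rectangular ROI from an image.

    center_xy : (x, y) center of the ROI in the input image
    size_wh   : (width, height) of the output image
    angle_deg : rotation of the ROI rectangle (sign convention assumed)
    order     : interpolation order (0=nearest, 1=linear, 3=cubic)
    """
    w, h = size_wh
    a = np.deg2rad(angle_deg)
    # Rotation matrix in (row, col) = (y, x) ordering used by ndimage
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    # Map output pixel (r, c) to input coordinates around the ROI center
    offset = np.array([center_xy[1], center_xy[0]]) - rot @ np.array([h / 2, w / 2])
    return affine_transform(image, rot, offset=offset,
                            output_shape=(int(h), int(w)), order=order)
```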

15.70.3 Using the image view.


The view of the image and ROI rectangle can be controlled through the context menu that appears when
right clicking inside the image. In this popup menu the zoom level, the active frame (for a double frame
exposure) and visual appearance (Display option) can be adjusted.

The zoom can also be adjusted by scrolling the mouse wheel or by dragging a rectangle around the
desired area to view. If holding the control key while dragging inside the image the view area can be
moved (panned) around.

15.71 Scalar Conversion


This method is used to transform experimental maps according to a user-defined polynomial function and
thereby gain additional empirical information on the process investigated. (Typical example is the trans-
formation of [OH] density maps to Temperature map estimates; see related information in the

Combustion-LIF application manual.) The order of this polynomial is set by the operator and can vary
between m = 0 and 5.
To use this method, select the re-sampled map(s) of interest and call the numerical method 'Scalar con-
version' located in the 'LIF Signal' category. Complete the dialog window with the values of the coefficients
a0, a1, ..., am of the polynomial and press the 'Apply' / 'Display' buttons to preview the result. Click on 'OK'
to accept the result and create a new map in the database.

Dialog window for user-defined (empirical) calculations derived from measurements.
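In effect the conversion evaluates the polynomial point-by-point on the input map. A minimal sketch, assuming NumPy (note that np.polyval expects coefficients in descending order, hence the reversal); the example coefficients are purely hypothetical:

```python
import numpy as np

def scalar_conversion(scalar_map, coeffs):
    """Apply a user-defined polynomial a0 + a1*x + ... + am*x**m to a map.

    coeffs = [a0, a1, ..., am] as entered in the recipe dialog.
    """
    return np.polyval(coeffs[::-1], scalar_map)

# Example: convert an [OH] density map to a temperature estimate with a
# hypothetical 2nd order polynomial a0=300, a1=0.5, a2=1e-4:
# T = scalar_conversion(OH_map, [300.0, 0.5, 1e-4])
```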

15.72 Scalar derivatives


This method comprises a number of different derivatives that can be calculated from a Vector map. How
the method calculates the different derivatives is displayed in the recipe. Below an example is shown:

The first field in the recipe dialog is a drop-down list. The list shows all the possible derivatives that can be
calculated. It is possible to edit the text, in which case all the nearest matching derivatives are listed. If
f.ex. the user enters a “d” all the derivatives in the list starting with a “d” will be shown.

Below the drop-down selection box a text box is shown, containing a description of how the result is cal-
culated.
All the variables will be computed at the same time when "Apply" or "OK" is pressed. All the variables will
be stored in the same ensemble and will be displayed as Scalar Maps (See "Scalar Map Display" on page
661).

By checking the check-box "Calculate All available scalars", next to the drop-down selection, it is pos-
sible to have all the available scalar values calculated and saved into the resulting scalar map. If this "Cal-
culate All available scalars" is unchecked only the selected scalar value will be calculated. Calculating
only one scalar value can dramatically increase the performance of the method.
When "Calculate All available scalars" is unchecked, the additional operation and Return operation
becomes available.

Additional operation radio buttons determine what operation (if any) is applied after the calculation. It is
possible to negate the values or take the absolute value of the results.
The Return radio buttons select what post-processing (if any) to apply after the calculation has been done.
It is possible to have all values returned or just to keep positive or negative values.

15.72.1 Calculating the gradients of U, V and W in the x and y direction


Vectors in a vector map are in a discrete regular grid, so velocity gradients are estimated by comparing
neighbor vectors to one another.

Whenever possible a central difference scheme is used, but if only one valid neighbor vector can be found
a forward or backward difference scheme will be used instead (this will f.ex. apply along the edges of the
vector map, where only one neighbor will be present in the direction normal to the edge).
For example the gradient at the point (m,n) of velocity component U in the x-direction will be calculated as
follows:
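In standard notation, with Δx denoting the grid spacing in the x-direction, the central difference scheme is:

$$
\left.\frac{\partial U}{\partial x}\right|_{m,n} \approx \frac{U_{m+1,n}-U_{m-1,n}}{2\,\Delta x}
$$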

This is a central difference scheme and the resulting gradient corresponds to the slope of a 2nd order poly-
nomial (i.e. parabolic) fit to 3 neighbor velocities and is thus said to be 2nd order accurate.
If only 2 valid neighbors can be found a forward or backward difference scheme will be used instead:
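In standard notation the forward and backward difference schemes are:

$$
\left.\frac{\partial U}{\partial x}\right|_{m,n} \approx \frac{U_{m+1,n}-U_{m,n}}{\Delta x}
\qquad\text{and}\qquad
\left.\frac{\partial U}{\partial x}\right|_{m,n} \approx \frac{U_{m,n}-U_{m-1,n}}{\Delta x}
$$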

Forward and backward difference schemes correspond to the slope of 1st order polynomial (i.e. linear)
fits to 2 neighbor velocities and are thus said to be 1st order accurate.
If no valid neighbors can be found velocity gradients are set to zero, but tagged as invalid with the status
code 'Rejected'.
Gradients of V and W in the x-direction are estimated similarly, simply replacing U with V or W in the
expressions above. Gradients in the y-direction are also estimated in a similar manner, keeping index m
fixed and adding or subtracting 1 from the n-index in order to identify neighbors in the y-direction.
The gradients can be collected in the 3x3 velocity gradient tensor J:

-from which several scalar derivatives can be derived.


(For planar vector maps spanning the x/y-plane, derivatives in the z-direction cannot be estimated, so the
rightmost column of J is commonly treated as zeros).

15.72.2 Scalar derivatives that can be calculated

dU/dx, dU/dy, dV/dx, dV/dy, dW/dx & dW/dy


Gradients of velocity components U, V & W in the x- & y-direction:

Divergence UV
Divergence of a 3D vector field such as U is defined as

For planar data gradients in the z-direction cannot be calculated, so it reduces to
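In standard notation the divergence of the velocity field U = (U, V, W) and its planar reduction are:

$$
\nabla\cdot\mathbf{U} = \frac{\partial U}{\partial x}+\frac{\partial V}{\partial y}+\frac{\partial W}{\partial z},
\qquad
\nabla\cdot\mathbf{U}\Big|_{2D} = \frac{\partial U}{\partial x}+\frac{\partial V}{\partial y}
$$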

For incompressible flows (i.e. liquids or gases at velocities an order of magnitude below Mach 1) the diver-
gence must be zero, since nonzero values indicate local changes in density. Nonzero divergence values
can thus help identify erroneous measurements. For volumetric data nonzero divergence-values can
otherwise be used to identify f.ex. shock-fronts (across which density may change dramatically). For pla-
nar data nonzero divergence values may be used to indicate areas where the flow is not 2-dimensional
(i.e. contain significant out-of-plane velocities).

Shear UV
The velocity shear tensor in a point is derived from velocity gradients:

-where each term describes the shear parallel to the yz-, zx- and xy-planes respectively.
For planar data gradients in the z-direction cannot be calculated, so only shear parallel to the xy-plane can
be determined:

Vorticity (Z)
Vorticity in a point is defined as the local rotation or curl of the 3D velocity field:

-where each term describes the rotation around the x-, y- and z-axes respectively.
For planar data gradients in the z-direction cannot be calculated, so only rotation around the z-axis can be
determined:
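In standard notation the vorticity vector and its z-component (the only component available from planar data) are:

$$
\boldsymbol{\omega} = \nabla\times\mathbf{U} =
\left(\frac{\partial W}{\partial y}-\frac{\partial V}{\partial z},\;
\frac{\partial U}{\partial z}-\frac{\partial W}{\partial x},\;
\frac{\partial V}{\partial x}-\frac{\partial U}{\partial y}\right),
\qquad
\omega_z = \frac{\partial V}{\partial x}-\frac{\partial U}{\partial y}
$$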

Lambda-2
The Lambda-2 vortex criterion is based on the velocity gradient tensor J. For planar data input gradients in
the z-direction may be set to zero in the following calculations.
Split the gradient tensor in a symmetric and anti-symmetric part S & R, rate of Strain and rate of Rotation
tensors:

from this compute the eigen-values of the symmetric tensor S² + R².
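In standard notation the symmetric and anti-symmetric parts are:

$$
S = \tfrac{1}{2}\left(J + J^{T}\right),
\qquad
R = \tfrac{1}{2}\left(J - J^{T}\right)
$$

and the eigenvalues referred to below are those of S² + R².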

Since the tensor is real and symmetric all 3 eigen-values will be real and they can be sorted so λ1 ≥ λ2 ≥ λ3.
If the point under investigation is part of a vortex at least two of these eigen-values will be negative, cor-
responding to the Lambda-2 vortex criterion requiring simply λ2 < 0.
Local minima of negative-valued Lambda-2 can be used to identify vortex cores, while positive values indi-
cate areas of the flow where shear may be present, but no swirling motion.

Reference:
Jeong & Hussain (1995)
“On the identification of a vortex”
J. Fluid Mech. (1995), vol. 285, pp. 69-94

Swirl Strength
Swirl strength is defined as the imaginary part of the complex eigen value of the velocity gradient tensor J.
For planar data gradients in the z-direction cannot be calculated, and setting them to zero simplifies eigen
value calculation, so the square of the imaginary part can be computed as:

-which is the figure returned as the swirl strength.


Please note that the corresponding eigen values will only be complex if this number is negative!
Local minima of negative-valued swirling strength can be used to identify vortex cores, while positive
values indicate areas of the flow where shear may be present, but no swirling motion.

Reference:
Adrian, Christensen & Liu (2000)
“Analysis and interpretation of instantaneous turbulent velocity fields”
Exp in Fluids, 29/3, p. 275-290

2nd invariant Q
The 2nd invariant Q of the 3x3 velocity gradient matrix J may also be used to identify vortices.
The second invariant, Q, of this 3x3 matrix is:

In the immediate vicinity of a vortex Q will be positive and have a maximum at the vortex core.
For planar data gradients in the z-direction cannot be computed, and setting them to zero the expression
above simplifies to the determinant of the 2x2 gradient matrix:
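In standard notation the second invariant and its planar form (with z-gradients set to zero) are:

$$
Q = \tfrac{1}{2}\left[\left(\operatorname{tr}J\right)^{2}-\operatorname{tr}\!\left(J^{2}\right)\right],
\qquad
Q\Big|_{2D} = \frac{\partial U}{\partial x}\frac{\partial V}{\partial y}-\frac{\partial U}{\partial y}\frac{\partial V}{\partial x}
$$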

-which will be the same for both 2D and 3D vector input.


Local maxima of positive Q can be used to identify vortex cores, while negative values indicate areas of
the flow where shear may be present, but no swirling motion.

References:
Hunt, Wray & Moin (1988)
“Eddies, stream, and convergence zones in turbulent flows.”
Center for Turbulence Research Report CTR-S88, p. 193

Chong, Perry & Cantwell (1990)
”A general classification of three-dimensional flow fields”
Phys. Fluids A 2, 765

15.73 Scalar Map


The 'Scalar map' display function is used for on-screen display of a number of data-types, including e.g.
Vorticity and re-sampled LIF data representing concentration or temperature. It is also used for display of
3D-PIV vector maps, where the scalar display is used to show the out-of-plane velocity component, while
in-plane components are shown using a traditional vector plot.
The numerical method 'Scalar Map' is used to extract a scalar quantity from a dataset with multiple
values, and the parent dataset will thus determine the contents of the recipe with respect to the options
available.
Select the dataset from which you want to extract scalar data and from 'New data set' select 'Scalar Map'
in the category 'Plot'.
The recipe will show you what scalar quantities are available from the parent dataset, pick the one you
want:

Extracting a Scalar map from a Vector map.


Peak heights, Peak height ratio and Peak widths for the 1st and 2nd highest peak in the correlation plane
can be used to evaluate the overall quality of your vector map. For high quality data, peak 1 should be sig-
nificantly higher than peak 2, i.e. the peak height ratio should be higher than one.
Peak widths should normally be in the range 3-6 for good quality data. Narrow peaks normally indicate that
particle images are very small, introducing risk of pixel locking in the resulting vector maps. Broad peaks
can be the result of large (poorly focused) particle images, or caused by strong flow gradients within the
interrogation area used.
The last entries are used to extract horizontal and/or vertical velocities or to extract the velocity magnitude
of the vectors (the length).
Press the 'Apply' button to calculate the scalar map and view the results (Click on the 'Display' button to
access further visualization methods) and 'OK' to accept the calculation and visualization settings. When
needed, raw data can also be accessed via the 'Open as numeric...' option.

Example: Cylinder wake, where mean flow is shown as a vector map overlayed on a scalar map of Var{U}
+ Var{V} (~turbulent energy).

All scalar maps (other than re-sampled scalar maps) are tagged with the icon for simple identification
with further processing, e.g. with batch export via the Tecplot Loader or the MATLAB Link.

15.73.1 Visualization methods…


With the mouse, double-click on the scalar map (or Right click on the map with the mouse and select 'Dis-
play option'): scaling options, color codes and other advanced data representations are now available.

The recipe tab 'Levels and Range'


'Levels' determine how many different shades and/or colors should be used for the scalar map display.
'Minimum' and 'Maximum' determine the upper and lower limits for the scalar values shown, where the
default 'Use full range' will set these automatically based on the numerical values present in the scalar
map.

The recipe tab 'Style'
'Drawing style' determines how the scalar map is shown on the screen. The default is 'Follow contours',
where the display will interpolate between discrete scalar values to estimate the continuous variation of
the scalar quantity.
The style 'Discrete' will not interpolate, but simply show a colored rectangle for each point in the scalar
map, thus producing a display of colored 'tiles'.
For the contour plot you may choose to add lines for each change in contour level, or show these lines
only.
'Color use' determines the color palette used to display scalar values. The default is 'Rainbow', going from
magenta over blue, cyan, green and yellow ending in red. From the drop down list you can choose a
number of other palettes, including simple grayscale shading.

The recipe tab 'Interpolation'


This is relevant only when showing contours and/or contour lines, not when using the drawing style 'Dis-
crete'.
For every pixel in the display interpolation is performed by calculating a weighted average of neighboring
discrete scalar values. The size of the averaging neighborhood is determined by the 'Integration step size',
where large areas produce smoother displays than small areas.

The checkbox 'Mask out invalid regions' provides the option of not showing regions that are invalid. For
example previous masking of a vector map may have tagged some of the vectors as being outside the
flow (i.e. inside a wall or similar).

15.74 Scalar statistics


For an ensemble of scalar maps three statistical analyses can be performed, Scalar Sum, Scalar Mean &
Scalar RMS.
As the names imply, the analysis methods will compute respectively the sum, the mean or the rms of sca-
lar values and produce a single scalar map from an ensemble containing multiple scalar maps.
To actually perform the analysis right-click the scalar map ensemble in the database window or right-click
inside an open display of one of the scalar maps in the ensemble. From the resulting context menu select
'Analyze...' and then select the analysis category 'Statistics' or 'LIF Signal':

Pick the desired analysis method and click OK.

Traveling vortices in the boundary layer of an acoustically
excited jet appear as local minima and maxima in a scalar
map of vorticity (vector map overlaid for clarification).

Scalar mean of a series of vorticity maps illustrates how overall
vortex strength (vorticity) decays downstream from the jet exit.

Scalar RMS of a series of vorticity maps indicates that the top
vortices appear more stable than the bottom ones, but both
spread out downstream from the jet exit.

Computing the Scalar Sum (not shown) would produce results corresponding to the Scalar Mean mul-
tiplied by the number of scalar maps in the parent ensemble.

15.75 Shadow Histogram


The histogram display refers directly to the underlying histogram dataset, which is a subset of the Shadow
dataset.

When selecting Shadow Histogram, the following window appears.

15.75.1 Variable to process
Select the variable from the list.
Equivalent Diameter is the diameter of the spherical particle having the same surface as the measured
object.
Area is the area of the measured object.
Perimeter is the perimeter of the measured object.
Eccentricity expresses the elongation of the object. For a circle, Ecc is 1 and for a line it is greater than
one. The eccentricity is defined as

where the central moments are defined as

and where the central moments for an image are calculated as

and the centroid as
Orientation theta can be defined as the angle between the X axis and the principal axis
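In standard image-moment notation (a reconstruction of the usual definitions; the exact form used by the software may differ slightly) the central moments, the centroid and the orientation are:

$$
\mu_{pq} = \sum_{x}\sum_{y}\left(x-\bar{x}\right)^{p}\left(y-\bar{y}\right)^{q} I(x,y),
\qquad
\bar{x} = \frac{M_{10}}{M_{00}},\quad \bar{y} = \frac{M_{01}}{M_{00}},
\qquad
\theta = \tfrac{1}{2}\arctan\!\left(\frac{2\mu_{11}}{\mu_{20}-\mu_{02}}\right)
$$

where M_pq = Σ_x Σ_y x^p y^q I(x,y) are the raw image moments and I(x,y) the pixel intensity.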

Shape factor is a measure of the circularity or compactness of the shape

where P is the perimeter and S the surface of the shape.


Major and Minor axis length are the major and minor radii of the ellipse calculated as

U and V velocity are defined along the horizontal and vertical axis respectively.

15.75.2 Process data from


In double-frame mode, the data can be processed from the first frame (check Image A), from the second
frame (check B) or from both.

15.75.3 Display
Select the histogram type between count, percent of the total count or cumulative.

15.75.4 Scaling
Check the autoscale checkbox so that the X axis scale is automatically adapted to the range of the measured
particles, or uncheck it and type the minimum and maximum values. The number of bins or classes can be
changed from the "Bins" field.

15.75.5 Region
The histogram can be calculated from a specific region of the viewing area. Uncheck "use entire area" and
type the X and Y coordinates.

15.75.6 Histogram display properties

By right-clicking on the graph, it is possible to change the display options and the data selection (see win-
dow below).

For further information about the XY display setup, please refer to the XY display in the display section of the
online help.
To copy the graph to the clipboard, press "clip clipboard" and paste the graph into a third-party software.

15.76 Shadow Sizer processing


This method is used to extract information such as the size, the position, the shape, the velocity, etc. of
droplets, bubbles or particles imaged according to shadow principles. Due to the nature of the technique,
there are no limitations on the size and shape of the droplets, etc. and it can be used with both transparent
and opaque droplets/particles.
The database should include single- or double-frame image(s) and a calibration image.

15.76.1 Content
Field of view and calibration
Recipe <Shadow processing> dialog window

- Assistant
- Threshold levels and validation criteria
- Region of interest (ROI) processing
- Reject non-closed contours

Data visualization section

15.76.2 Field of view and calibration


The field-of-view is defined (i) either using a calibration target and dewarping shadow images accordingly
(see related help files from the category 'Coordinates') or (ii) using the spatial information contained in the
"Field of view" property of the calibration menu (default settings). When using the first methodology,
press the 'Select' button and look for the calibration to use. The scale factor (second method) can be
measured from the calibration image by doing a right mouse click on the image and by selecting

Field of view property

Measure scale factor from calibration image


When using the measured scale factor, the origin point displaces the found particles by its offset from the
image origin. If no displacement is needed, the origin point must be placed in the lower left corner.

15.76.3 Recipe <Shadow Sizing>


To use this numerical method, select the single- or double-frame image(s) of interest, call the method
'Shadow processing' located in the 'Particle characterization' category and complete the following dialog
window (recipe) as described below.

Shadow processing dialog window

- Assistant

The shadow assistant is a setup wizard which assists the user in determining the parameters that best
capture the particle image characteristics of acquired images. This is done by selecting a particle on one
or two images and analysing the outcome. The Assistant displays the gray level characteristics, including
particle pixel depth and edge gradients.

The results are shown in 2 windows, the first displaying the particle contour with statistical information,
the second displaying the gray level characteristics along the minima and maxima lines.
A general procedure is as follows:

- Select region about one particle in frame 1.
- The software automatically determines the optimum local threshold and processes the image.
- The user can then adjust the local threshold manually and reprocess. To go back to the suggested threshold press Auto select.
- Select frame 2 (if double-imaged) and press Select to pick one particle.

- Once complete the user can select OK and the parameters are saved to the analysis setup for
processing of the entire image. The local threshold is used to calculate validation criteria (Edge
height and edge slope validations). A small difference might be observed between the local thresh-
old suggested by the assistant and the threshold level in the recipe window. A safety margin is
applied so that the local threshold can be used for the other particles without rejecting too many of
them. In case of 2 image frames, the softest parameters of the 2 frames will be used in the recipe
window (minus a safety margin).

Note:
If several particles are selected simultaneously, the assistant will process only one.
It is a good idea to test several particles in the same image when optimizing validation, also to take into
account variations. The parameter suggested from the wizard should be considered as starting point: it
might be necessary to manually fine tune the validation parameters to improve detection rate.

- Threshold value and validation criteria

The threshold level is a main parameter that helps define the contours of the droplets, bubbles or par-
ticles on the images. This parameter expresses the threshold level in % of the maximum gray level of the
camera. The higher this value is, the easier it becomes to identify 'large' structures.
Auto-compensate double images: In double frame mode, check this check box to equalize first and sec-
ond frames relatively to their mean, maximum and minimum gray levels.

- Edge height/Slope validation

The edge height validation is the acceptance of particles whose edge depth as a proportion of
total image resolution satisfies a user defined value. In many situations the visibility of particles is
limited, either due to scattering of the medium or poor light access. In these cases the edge height
will be limited.

Examples

In the above example a particle shows a rather soft edge, utilizing only a small portion of the entire
image range. An even weaker shadow image is shown below:

In this example there is no distinct edge, but rather a soft featureless (probably defocused) par-
ticle. The edge slope is calculated from a least squares fit of a line (made up of segments) that
crosses the threshold cutoff. For a particular image the threshold cutoff is determined at the mid-
point between the minimum and maximum edge height.

In the above case the edge depth is so small that the slope has little meaning, and the particle should be
rejected outright as unsuitable for measurement. The slope is high because of a noise ele-
ment in the profile near the threshold. Therefore, both edge height and slope need to be taken into
consideration when selecting validation limits.

A fine example of a focused particle in a weak light environment is shown below.

Often, in double images, one of the images of a particular particle may be out of focus. It may help
to step back any restrictions to allow for particle matching (in cases where velocity is needed).
The auto-selection feature in the shadow assistant will normally step back 5 % when determining
suitable limits.

- Area validation

Enable the 'Area validation' and set the minimum size (in pixels) to remove the noise when needed. The
maximum area (in pixels) is typically not used and is generally set to any high value.

- Region of interest (ROI)

To limit image analysis to a given region of the image, uncheck the 'Use entire region' checkbox, press the
'Region' button and select (with the mouse) the area of interest. If required, fine-tune the (X, Y) positions of
the ROI manually by adjusting the (X1, Y1) and (X2, Y2) values. Press the 'OK' button to close the ROI
dialog window and jump back to the 'Shadow processing' main window.

Region of Interest processing is defined according to user needs.

- Options

Merge open contours: attempts to merge nearby open contour segments, and thereby close the contours.
The search radius of the merge operation is given by the Merge distance input.
If any open structures remain, remove them by checking the 'Reject non-closed contours' option (if required).

Press the 'Apply' button to preview the results and accept/extend the processing to the other selected
images.

15.76.4 Data visualization

The resulting image is labeled with the icon to facilitate its location in the database. To edit this result,

press the 'Open as numeric' shortcut and a spreadsheet containing information on droplet, bubble or
particle location, center position, eccentricity value, equivalent diameter, etc. is opened. This information
may then be copied or sent to the MATLAB Link for further data analysis.
To enhance selected information on the image (e.g. axis and contours), open the 'Display' option of the
shadow result image and modify the color codes and the scaling of the velocity vectors.
Measured parameters such as eccentricity, orientation, shape factor, major and minor axis length, veloc-
ity are described in the Shadow spatial histogram (Analysis method). For more information see
"Shadow Histogram" on page 521.

Information gained from shadow processing can be enhanced and accessed for individual droplet, bubble
or particle.

15.77 Shadow Spatial Histogram


The histogram display refers directly to the underlying histogram dataset, which is a subset of the Shadow
dataset.

When selecting Shadow Histogram, the following window appears.

15.77.1 Calculation
Select the histogram type from the drop down list. The histogram type can be selected between
Particle count, Equivalent diameters, Area mean, Perimeter mean, Average orientation, Average eccentricity, etc.
Concerning the definition of these parameters, please see "Shadow Histogram" on page 521.

15.77.2 Number of cells


The image is divided into spatial cells. The mean value will be calculated from the particles detected in each
cell.

15.77.3 Process
In double-frame mode, the data can be processed from the first frame (check Image A), from the second
frame (check B) or from both.

15.78 Shadow Validation


Check the validation criteria to be applied and type the range of each parameter.

For the parameter definition, please see "Shadow Histogram" on page 521.

15.79 Size-velocity correlation


The size-velocity correlation is a 2D histogram that bins velocity data against diameter classes. Each
particle in the input data is mapped according to its velocity and size. The mapping can be direct, as in
point data, or binned further by the user to average the results.

Select component: Select which velocity component to compare with diameter
Scaling: Auto-scaling or user selectable
Velocity; min, max: User selectable limits for velocity
Velocity bins: Number of bins to use for velocity (only for binned processing)
Diameter; min, max: User selectable limits for diameter
Diameter bins: Number of bins to use for diameter (only for binned processing)
Output: Point data – direct output as X-Y data; Binned – averaged results as colored scalar plot
Region: Select region of interest
- Use entire area
- User limits: x min, x max, y min, y max

Output as point data

Output as binned data

15.80 Spectrum
The analysis method Spectrum makes an estimate of power spectral density based on an extract from
multiple datasets in selected points (i, j). This is particularly useful in connection with, for example, time-
resolved data, but can also be used in other contexts.
Input to the Spectrum calculation should be a multi-selection including multiple scalar or vector maps (2D
or 3D). From the input datasets the user must select one or more points in which to perform the cal-
culation, and also specify the quantities to be used for the analysis (for scalar maps this will be the scalar
quantity, while for vector maps it can be one of the velocity components measured).

The results of a spectrum calculation can be shown as graphics or opened as Numeric in a spreadsheet
display from where it can be exported or copied via the clipboard to e.g. MS Excel.

15.80.1 Example: Spectrum in the wake of a cylinder


In the wake of a cylindrical obstacle vortices are being generated at a fixed rate depending on the free
stream velocity and the size of the obstacle. Statistical analysis of a time resolved PIV data series
produces both a mean flow field and a map of the variance for U- and V-velocity components. In the figure
below the mean flow field is shown superimposed on a map of the variance of the V-component. The
strongest fluctuations appear to be in a point about 2 diameters downstream of the cylinder.

In this point we can extract a time series of the V-component to confirm the periodic oscillations (See help
files for XY Plot regarding time series plot):

To perform a spectrum calculation on the basis of this signal you must first select the ensemble containing
the vector maps. (The spectrum is calculated using FFT, but the number of vector maps need not be a
power of two. If it's not the algorithm will automatically zero-pad your data to the nearest higher power of
two).
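A minimal sketch of this type of calculation (extract a time series in one grid point, zero-pad to the next power of two and estimate the power spectral density via the FFT); the scaling and indexing conventions are simplified assumptions, not the DynamicStudio algorithm:

```python
import numpy as np

def point_spectrum(maps, i, j, dt):
    """Estimate the power spectral density of one quantity in grid point (i, j).

    maps : sequence of 2D arrays (e.g. the V component of successive vector maps)
    dt   : time between successive maps
    """
    series = np.array([m[j, i] for m in maps], dtype=float)
    series -= series.mean()                          # remove the DC component
    n = 1 << int(np.ceil(np.log2(len(series))))      # next power of two
    spectrum = np.fft.rfft(series, n=n)
    psd = (np.abs(spectrum) ** 2) * dt / len(series) # simplified scaling
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, psd
```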
For the first dataset in the multi-selection select Spectrum in the New Dataset dialog:

In the resulting recipe you must now select the quantities on which you wish to do the analysis (in this
case the V-component of the velocities measured)

You must also identify the point(s) in which you want to perform the calculation.

Points in the vector map are identified with index numbers, where position (0,0) is the lower left corner of
your vector map. Please note the option to open the parent dataset to see an example of where the
point(s) selected are in the flow-field:

If you wish to calculate the spectrum in several points you can either add several points to the list here, or
repeat the entire calculation for each point. Calculating several spectra simultaneously will store the
results together, which may be convenient for numerical overview. In graphs the data will be shown as dif-
ferently colored curves in the same figure window. If you have more than a few spectra it may be more con-
venient to perform a separate analysis for each of them in order to get the plots shown in separate
windows.

If you wish to show the spectra using logarithmic axes, just right click the display and select Logarithmic
from the context menu:

15.81 Spray Geometry
15.81.1 Introduction
Spray geometry determines the spatial geometry of a spray nozzle with one or more nozzle exits. The anal-
ysis can characterize the geometry of a spray seen as a plume (See "Spray Geometry" on page 543) or
seen from below or above as a pattern ("Spray Pattern" (on page 549)). Of interest is the number of plumes
(cones) found as well as their geometry and orientation. Typically the geometry of interest occurs at a spe-
cific time delay after injection. The user may be interested in the temporal development of the spray at dif-
ferent time delays, or time averaged.

The analysis of the spray geometry requires the following steps:

- Determination of the number of cones (plumes).
- The orientation of each cone.
- The angular spread of each cone.
- The penetration length of each cone.

Prior to analysis the user must:

- Enter the nozzle coordinates and direction: See "Spray nozzle properties" on page 545
- Select the region-of-interest (bounding area of spray): See "Region of interest" on page 547
- Adjust the image threshold: See "Cone Geometry - Plume geometry" on page 547
- Optionally adjust the radial scan limits: See "Cone Geometry - Plume geometry" on page 547

15.81.2 Setup window

Spray settings: Select nozzle origin, direction and search area using the mouse in the Select Nozzle dialog or by entering numeric values.
Flat spray: Select whether the spray is seen primarily from the side or from above/underneath (Flat spray checked). If Flat spray is checked the method searches 360 degrees around the nozzle point, otherwise 180 degrees will be searched.
Radial limits: Set minimum and maximum R(adius) to sweep during analysis. R is defined from the nozzle origin using the mouse in the Select Nozzle dialog or by entering numeric values.
Invert image: Invert the image gray-scale to process images where background has a higher gray value than the spray.
Mean filter size: A NxN mean operator smoothens the image.
Threshold filter: Percent of total gray-level range of the camera to apply as threshold value. The corresponding gray scale level is displayed. Updating the slider while the threshold image is being shown will update the threshold image in real time.
Select ROI: Select region of interest. Only the area selected will be analyzed.
Select frame: Select image frame to process.
Output: Select plume or pattern output.
Z (height from the nozzle): Vertical distance from the spray origin used to calculate cone angle from spray pattern images.

15.81.3 Spray nozzle properties


To define the nozzle origin, type in the X and Y coordinates (in pixels) or press Select Nozzle. The fol-
lowing window appears.
- Select Frame 1 or Frame 2 (bottom of the window)
- With a right mouse click, you can zoom in and adjust image contrast. Thresholding can be set from the
main dialog. Mouse moves can be reset by pressing Esc, at any time.
- Set the nozzle properties by:

- Press the left mouse button to identify the spray origin (red cross).

- Drag the mouse to define the spray direction and the outer radius of the search area. Release to finish the
selection

-Click the image to define the inner radius of the search area
If the Flat spray is selected, the spray radius indicator will be circular instead.

15.81.4 Region of interest
To define the Region Of Interest (ROI), press the ROI button. The region to be used is delimited by a red
rectangle. With the mouse, it can be moved, expanded or reduced from the small squares on the corners. A
right mouse click on the image allows you to zoom in and out and to adjust the image contrast display.

15.81.5 Cone Geometry - Plume geometry


The cone geometry analysis is as follows:

- A NxN mean operator smoothens the image. The size of the mean filter is user-specified. A large N will effectively dilate the image (enlarge the spray area on the image).
- A user-supplied threshold operator filters out all pixels below the specified value. Adjusting the threshold affects the outcome of the cone angles as well as the penetration length.
- A radial sweep (originating at the user-specified nozzle position) is conducted to determine the number of cones. A histogram of results is accumulated whereby the maximum number of counts determines the number of cones. The user can manually adjust the radial sweep length to better suit the image inputs.
- Once the number of cones is determined, a radial sweep is conducted once more to determine the cone edges. Since the edges vary along the length of the cone an average value is calculated. Here the radial sweep length will play a part in the cone spread angle since a long sweep will often result in a narrow cone, and a short sweep a wide angle.
- The penetration length is then evaluated by taking roughly 25 % of the far spray edge and averaging the length.
- The cone angle is measured clockwise relative to the positive horizontal axis.

The measured parameters are displayed in the info box of the dataset. If the info box is not available, right
click on the spray geometry dataset and check the info box option.

The results are also available as a separate table. From the tool bar of DynamicStudio, click on the "open
as numeric" icon:

15.81.6 Spray Pattern


The spray pattern analysis determines the position and shape of each spray cone cross-section at dif-
ferent distances from the nozzle. The analysis follows the plume geometry analysis closely in that:

- The input image is smoothened by a NxN mean operator.
- A threshold is applied.
- The number of cone objects with a cross-section above a certain size is determined through Canny edge analysis.
- The centroid position is determined.
- The pattern perimeter and spatial statistics are calculated.

The measured parameters are displayed in the info box of the dataset. If the info box is not available, right
click on the spray geometry dataset and check the info box option. The parameters are also available in a
separate table (click on the "open as numeric" button from DynamicStudio tool bar)

Parameters:
Centroid: X, Y center of the object
Area: area of the measured object
Perimeter: the perimeter of the measured object
Shape factor: a measure of the circularity or compactness of the shape. Further information can be found in "Shadow Histogram" on page 521
Alpha: angle between the cone axis and nozzle axis. Requires the distance between the spray origin and the measurement plane to be documented
Liquid content: estimation of the relative quantity of liquid, calculated as follows:

15.81.7 Spray geometry processing - Temporal evolution


The Spray geometry processing allows plotting geometrical parameters versus a user defined variable
such as a time delay after injection. This routine can be used to plot the temporal evolution of the penetration
length, for example.

Select the spray geometry datasets to be used for the plot as fixed input (press the space bar) as shown
below:

Do a right click on the first spray geometry dataset and select Spray Geometry Processing.

Time variable: Select the variable for the X axis. This variable must be available for each spray geometry ensemble to be used for the plot (see below).
Select cone: Select the cone to be studied in each ensemble.
Select variable to include: Select the variable to be plotted (Y-axis).

How to add a Time variable to the Spray Geometry ensemble
Prior to using the Spray Geometry Processing, it is necessary to add a time variable to each Spray Geome-
try ensemble. Right click on the Spray Geometry icon and select "Custom properties".

Add a property from the window below by clicking on the "add" button and typing in a name (delay in this
example). Press OK.

The new property is now added to the record properties window and can be documented:

15.81.8 Troubleshooting


The following error message appears when the software cannot find any cones.

To solve the problem, make sure that

- The origin of the spray is positioned correctly
- The Region of Interest (ROI) covers the cones to study
- The radii have been set correctly
- The output (Plume geometry or Spray pattern) is set correctly
- The threshold is not too high
- The spray image is not too noisy (if it is, try increasing the blur filter size)

15.82 Stereo-PIV
The Stereo PIV processing method computes 3C velocity vectors in a 2D plane (a light sheet) by com-
bining data from two cameras, each providing double-frame particle images from the light sheet as seen
from different viewpoints. From each of the two cameras the method requires a 2D vector map and a cam-
era calibration (See "Imaging model fit" on page 347), describing how points in object space map to points
in the image plane of the camera in question.
When looking at the light sheet at an angle instead of head on, the lens and image plane need to be tilted
as illustrated below. This is known as the Scheimpflug condition and ensures proper focusing:

Due to perspective out-of-plane motion will be perceived differently from each of the two cameras and this
difference is exploited to infer the third velocity component. This is illustrated above where the true dis-
placement is shown by the blue vector, whereas the green and red vectors illustrate what this looks like
from Camera 1 and 2 respectively (projected onto the light sheet/object plane).

15.82.1 Method and formulas
An imaging model F describes the mapping of a point X = [X Y Z]^T in object space to the corresponding
point x = [x y]^T in the image plane of a camera:

x = F(X)

…or splitting the vector function F(X) into separate scalar functions f & g for x & y respectively:

x = f(X, Y, Z)
y = g(X, Y, Z)
… the exact behavior of functions f & g is determined by the imaging model chosen

With two cameras there will be two such imaging models and two sets of image coordinates from the
same point in object space.
Using subscripts A and B to identify each of the two cameras we get:
x_A = F_A(X)
x_B = F_B(X)

…or…

$$
\begin{pmatrix} x_A \\ y_A \\ x_B \\ y_B \end{pmatrix}
=
\begin{pmatrix}
f_A(X, Y, Z) \\
g_A(X, Y, Z) \\
f_B(X, Y, Z) \\
g_B(X, Y, Z)
\end{pmatrix}
$$

If we know (or assume) that two image points (x,y)_A and (x,y)_B represent the same point in object space,
the expressions above create 4 equations with 3 unknowns from which we can estimate the cor-
responding object space location (X,Y,Z).

Differentiating with respect to time the point-to-point mappings become mappings of velocity instead:
$$
u = \frac{dx}{dt}
  = \frac{\partial x}{\partial X}\frac{\partial X}{\partial t}
  + \frac{\partial x}{\partial Y}\frac{\partial Y}{\partial t}
  + \frac{\partial x}{\partial Z}\frac{\partial Z}{\partial t}
  = \frac{\partial x}{\partial X}U + \frac{\partial x}{\partial Y}V + \frac{\partial x}{\partial Z}W
$$

$$
v = \frac{dy}{dt}
  = \frac{\partial y}{\partial X}\frac{\partial X}{\partial t}
  + \frac{\partial y}{\partial Y}\frac{\partial Y}{\partial t}
  + \frac{\partial y}{\partial Z}\frac{\partial Z}{\partial t}
  = \frac{\partial y}{\partial X}U + \frac{\partial y}{\partial Y}V + \frac{\partial y}{\partial Z}W
$$
…or equivalently in a matrix formulation:

$$
\begin{pmatrix} u \\ v \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x}{\partial X} & \frac{\partial x}{\partial Y} & \frac{\partial x}{\partial Z} \\
\frac{\partial y}{\partial X} & \frac{\partial y}{\partial Y} & \frac{\partial y}{\partial Z}
\end{pmatrix}
\begin{pmatrix} U \\ V \\ W \end{pmatrix}
$$

Multiplying with the time between the two exposures in the double-frame image we find the mapping of displacements for a single camera:

$$
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x}{\partial X} & \frac{\partial x}{\partial Y} & \frac{\partial x}{\partial Z} \\
\frac{\partial y}{\partial X} & \frac{\partial y}{\partial Y} & \frac{\partial y}{\partial Z}
\end{pmatrix}
\begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial f}{\partial X} & \frac{\partial f}{\partial Y} & \frac{\partial f}{\partial Z} \\
\frac{\partial g}{\partial X} & \frac{\partial g}{\partial Y} & \frac{\partial g}{\partial Z}
\end{pmatrix}
\begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix}
$$
…introducing again x=f(X,Y,Z) and y=g(X,Y,Z).
Please note that the partial derivatives above will themselves typically depend on object space location
(X,Y,Z).

Once more we combine data from two cameras to obtain a system of 4 equations with 3 unknowns (as
before subscripts A & B identify the cameras):
$$
\begin{pmatrix} \Delta x_A \\ \Delta y_A \\ \Delta x_B \\ \Delta y_B \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial f_A}{\partial X} & \frac{\partial f_A}{\partial Y} & \frac{\partial f_A}{\partial Z} \\
\frac{\partial g_A}{\partial X} & \frac{\partial g_A}{\partial Y} & \frac{\partial g_A}{\partial Z} \\
\frac{\partial f_B}{\partial X} & \frac{\partial f_B}{\partial Y} & \frac{\partial f_B}{\partial Z} \\
\frac{\partial g_B}{\partial X} & \frac{\partial g_B}{\partial Y} & \frac{\partial g_B}{\partial Z}
\end{pmatrix}
\begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix}
$$

Solving this equation system in a least squares sense leads to an estimate of the object space displacement [ΔX, ΔY, ΔZ]^T.
As a quality check we can project the solution back through the gradient matrix and compare to the image
plane displacements from which the solution was found:

$$
\begin{pmatrix} \epsilon_{xA} \\ \epsilon_{yA} \\ \epsilon_{xB} \\ \epsilon_{yB} \end{pmatrix}
=
\begin{pmatrix}
\frac{\partial f_A}{\partial X} & \frac{\partial f_A}{\partial Y} & \frac{\partial f_A}{\partial Z} \\
\frac{\partial g_A}{\partial X} & \frac{\partial g_A}{\partial Y} & \frac{\partial g_A}{\partial Z} \\
\frac{\partial f_B}{\partial X} & \frac{\partial f_B}{\partial Y} & \frac{\partial f_B}{\partial Z} \\
\frac{\partial g_B}{\partial X} & \frac{\partial g_B}{\partial Y} & \frac{\partial g_B}{\partial Z}
\end{pmatrix}
\begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix}
-
\begin{pmatrix} \Delta x_A \\ \Delta y_A \\ \Delta x_B \\ \Delta y_B \end{pmatrix}
$$

$$
\epsilon = \sqrt{\epsilon_{xA}^{2} + \epsilon_{yA}^{2} + \epsilon_{xB}^{2} + \epsilon_{yB}^{2}}
$$

The total reprojection error ε should be below 0.5-1.0 pixels for decent quality PIV images and reasonably
accurate camera calibrations. If it is not, one (or both) of the 2D vectors from camera A & B may be erro-
neous and the algorithm may reprocess with each vector replaced with the average of its immediate spa-
tial neighbors.
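A minimal sketch of the reconstruction step described above: build the 4x3 gradient matrix from the two camera models, solve in a least squares sense and compute the reprojection error ε. The gradient matrix is taken as a precomputed argument here; evaluating it from the actual imaging model is outside the scope of this sketch, and the function name is hypothetical:

```python
import numpy as np

def stereo_reconstruct(A, d_img):
    """Least squares 3D displacement from two 2D displacements.

    A     : 4x3 matrix of image-plane gradients [dfA/dX ... dgB/dZ]
    d_img : 4-vector [dxA, dyA, dxB, dyB] of image-plane displacements
    Returns the object-space displacement [dX, dY, dZ] and the total
    reprojection error (should typically stay below 0.5-1.0 pixels).
    """
    d_obj, *_ = np.linalg.lstsq(A, d_img, rcond=None)
    residual = A @ d_obj - d_img
    error = np.sqrt(np.sum(residual ** 2))
    return d_obj, error
```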

15.82.2 Input Required
Stereo PIV requires an imaging model fit and a 2D vector map from each of the two cameras used.
Either of the vector maps can be the parent. If the images from which the vector maps have been derived
aren't dewarped, camera calibrations need to be part of the user selection that needs to be made before
entering the Stereo PIV recipe (See "Working with the Database" on page 58 and/or "Selection (Input
to Analysis)" (on page 233)). If dewarped images are used, the camera calibrations are automatically
selected based on the corresponding cameras in the input.
The vector maps can be derived from raw images or from images that have been dewarped (See "Image
Dewarping" on page 310):

The two vector maps must share a common grid of vector locations, which is most easily accomplished by
dewarping the images to a common user defined grid and applying the same processing to get the vectors.
The images can be dewarped to the same grid by selecting both image ensembles and calibrations when
dewarping the images. This will make sure the images are dewarped to the same coordinate system.
The two vector maps do not have to be from the same level, i.e. one can be a raw vector map and the
other can be a validated vector map.

15.82.3 Recipe for Stereo PIV processing


Different parts of the recipe become active or inactive depending on the input chosen.
If the 2D vector maps are derived from raw images you may choose to define your own mesh (/grid) or
have DynamicStudio generate one for you. The auto-generated grid will by default attempt to match the
vector density of the parent 2D vector maps, but setting oversampling factors larger than one will increase
the density, while oversampling factors smaller than one will reduce it:

If the vector maps are derived from dewarped images, the resulting 3D vectors will inherit the grid from
their 2D parents (you will get an error message if the two 2D vector maps do not share a common grid).
Since the grid is given by the parents, oversampling or user defined mesh is not an option and thus dis-
abled in the recipe. You can however specify a max accepted reconstruction error (the ε described above).
For historical reasons this is not supported when 2D parent vector maps are derived from raw images:

If the reconstruction error ε exceeds the specified limit, DynamicStudio will assume that one or both
2D vectors are erroneous and try to replace them with the average of the 8 nearest neighbors. This leads
to three new stereo reconstructions;

- One where only the vector from Camera 1 has been replaced.
- One where only the vector from Camera 2 has been replaced.
- One where vectors from both cameras have been replaced.

If either of these calculations brings the reconstruction error below the accepted limit, DynamicStudio will
pick the solution with the smallest ε (if possible replacing only one of the vectors).
If the reconstruction error remains too high even when replacing both vectors with the average of their
neighbors, the system will keep the initial result, but tag it as invalid.

15.82.4 Displaying results


In-plane velocity components U & V are typically shown as a conventional vector map, while the Out-of-
plane velocity component W can be shown as a scalar map underneath:

You can also overlay images and vector maps, but the image needs to be dewarped so both datasets are
represented in a common object space (metric) coordinate system rather than an image plane (pixel) coor-
dinate system. Please note that the image dewarping simply maps the image plane onto the light sheet
plane. This means that objects in front of or behind the light sheet will be more or less displaced and may
also appear distorted depending on the viewing angle and the distance from the light sheet.

To see the image you may need to turn off the scalar map display of the out-of-plane velocity W, make the
scalar map transparent or put the image on top, adjust the lookup table and make the image transparent:

The example above shows the flow in a horizontal plane above a magnetic stirrer, spinning at the bottom
of a tank. The stirrer can be seen, but is probably slightly displaced due to perspective effects. Even so
the out-of-plane component shows clearly how water is pushed up in front of the stirrer tips and sucked
down behind them. In this case the image is shown on top of the vector map and the image uses a trans-
parent and inverted color map from white to gray so both in- and out-of-plane velocity components can be
seen.

15.83 Streamlines
As indicated by the name, this method calculates streamlines from a 2D, instantaneous or averaged velocity vector
map.
To use it, select the map of interest and pick the 'Streamlines' method (found in the 'Vector &
Derivatives' category). Once selected, enter the number of 'seeds' for the X- and Y-coordinates and
specify the configuration (i.e. options such as 'Seed along X edges'). Typically, the integrator settings do not need
to be adjusted. In addition, enable/disable the filter option and specify the filter size when used. Press the
'Apply' or 'OK' button to view the results, which are then identified with the corresponding icon in the database.
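For users who want to reproduce a comparable visualization outside DynamicStudio, a minimal sketch using matplotlib's streamplot on an exported, regularly gridded vector map could look like the following (the file name and column layout are assumptions for illustration, and the export is assumed to list x varying fastest):

import numpy as np
import matplotlib.pyplot as plt

# Assumed export: columns x, y, u, v on a regular nx x ny grid, x varying fastest.
x, y, u, v = np.loadtxt("vector_map.csv", delimiter=",", unpack=True)
nx, ny = np.unique(x).size, np.unique(y).size
X = x.reshape(ny, nx); Y = y.reshape(ny, nx)
U = u.reshape(ny, nx); V = v.reshape(ny, nx)

# 'density' controls the number of seeded streamlines, similar to the seed counts in the recipe.
plt.streamplot(X, Y, U, V, density=1.5, color=np.hypot(U, V), cmap="viridis")
plt.xlabel("x"); plt.ylabel("y")
plt.show()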

15.83.1 Example
In the example below, streamlines are calculated on a (20 x 20) grid for the averaged velocity map describ-
ing the flow field of a jet in cross-flow. The result is superimposed on to the averaged velocity vector map
to better appreciate the significance of the results.

Example of streamlines map calculated on the average velocity map of a jet in cross-flow.


15.84 Subpixel Analysis


Subpixel analysis takes as input a vector map and creates a histogram of pixel displacements. This may
be done over the full range of displacements in the vector map or on the subpixel part alone. The presence
of peaks near integer pixel values may indicate pixel locking in the data. Pixel locking can be caused by
particle images being smaller than 2 pixels in diameter meaning that the Nyquist sampling criterion was
violated already when images were acquired. In this case the data is undersampled and there is very little
you can do to recover the information lost during image exposure.
If particle images are too small the best solution will be to acquire new images where particle images are
bigger. This can be accomplished by reducing lens aperture and/or defocusing the lens slightly to make
particle images blurry. Both will reduce particle image intensity and if it was low already particles may no
longer be detectable, so correlation becomes impossible. You may be tempted to increase particle image
sizes by low-pass filtering the undersampled images before you correlate, but this will increase particle
image sizes symmetrically around the already biased positions and thus not remove the bias towards
integer pixel positions.
Instead you can try to correlate using larger interrogation areas, thereby including more particles in the cal-
culation of each vector. Averaging over a larger area can mitigate the problem of pixel locking at the price
of reduced spatial resolution.
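To check for pixel locking yourself, a small sketch along the following lines computes both the full-range and the subpixel histogram from the pixel displacements of a vector map (the file name and array handling are illustrative assumptions; dx holds the U displacement in pixels):

import numpy as np
import matplotlib.pyplot as plt

dx = np.loadtxt("u_displacement_pixels.txt").ravel()   # assumed export of U in pixels

fig, (ax_full, ax_sub) = plt.subplots(1, 2, figsize=(9, 3))

# Full-range histogram: peaks pinned to integer values hint at pixel locking.
ax_full.hist(dx, bins=100)
ax_full.set_xlabel("displacement [pixel]")

# Subpixel histogram: for unbiased data the fractional part is roughly uniform on [0, 1).
ax_sub.hist(dx - np.floor(dx), bins=50, range=(0.0, 1.0))
ax_sub.set_xlabel("subpixel part [pixel]")

plt.tight_layout()
plt.show()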

Below is an example of a situation where pixel locking is present; To the left the full range histogram
shows particle displacements from just below 7 to just above 9 pixels, but with a very distinct peak at 8
pixels and two less distinct peaks near 7 and 9. To the right the same vector map has been processed
looking at the subpixel part and a clear bias towards zero can be seen.

Please note that a peak near integer pixel values does not necessarily indicate pixel locking! If the flow
being measured has a very narrow velocity distribution the peak may actually describe the physics of your
flow correctly. To test for pixel locking in this case try to make a new acquisition, where time between
pulses has been increased or decreased slightly. This will mean that particle displacements should
increase or decrease accordingly, so the peak should move if it represents the physics of your flow, but
remain at or near integer pixel values if pixel locking is present.
The example below shows a situation similar to the one above, but without pixel locking: The full range histogram
on the left indicates that the majority of displacements are in the range 8-9 pixels, while the subpixel
part on the right shows a reasonably flat distribution, also indicating no pixel locking problems.

15.85 PTV
PTV (Particle Tracking Velocimetry) analysis methods can perform tracking of particles in either a 2D
plane or a 3D volume. The PTV methods are used for calculating the tracks of individual particles in the
measurement volume.

15.85.1 Tomographic PTV
The Tomographic PTV method uses time-resolved images from at least 3 cameras observing the measurement
volume from different angles. The cameras must be calibrated to a common reference coordinate
system. This calibration can be performed using the camera calibration methods of DynamicStudio
(See "Multi Camera Calibration" on page 371 or See "Imaging model fit" on page 347 for more information
on camera calibration). Based on the camera images and the corresponding calibrations, a voxel volume
is reconstructed for each time-step. Within these voxel volumes the particles are identified and subsequently
tracked from frame to frame over the entire acquisition.
The picture below shows the recipe dialog for the Tomographic PTV method.

Several parameters can be adjusted inside the recipe. The parameters are grouped into 4 groups.

The first group 'Threshold' is used to specify the threshold level used to detect particles in the
voxel volume. If the threshold value is too high, very few or no particles will be detected, and if the
threshold level is too low, noise may be detected as particles.
The optimal threshold value depends on the intensity level of the camera images, e.g. if the images are
dark, a low threshold value is required in order to properly detect particles.
The next group 'Search area' is used to specify how far a particle may move from one time step to the
next. The search area is specified as a cuboid in the voxel volume.
The group 'Voxel space reconstruction' is used to set up which part of the measurement volume to reconstruct
and analyze. The 'Origin' is the metric position of the beginning of the reconstructed volume. 'Voxel
space' is the metric size of the axis-aligned volume. 'Voxel resolution' specifies the scaling factor
between voxels and the metric unit (mm). The voxel resolution is the same for all axes (aspect ratio is one). All
metric values must be given in the common reference coordinate system that is established by the calibrations.
The last group, named 'Output', can be used to select the kind of result the method will generate. The
method can output either 3D particle tracks or 3D particle positions.
The detected particles and tracks can be constrained by a number of parameters that are accessible via the
Options dialog. Clicking the 'Options...' button brings up the following dialog, within which several
constraints can be applied to the detection and tracking of particles.

The different parameters are:


Minimum particle volume: The minimum required volume of a particle, measured in voxels. All detected particles that do not fulfill this requirement are ignored.

Minimum # of particles per track: Specifies the number of frames over which a particle must be tracked before it is considered a valid track. If a particle cannot be tracked over this number of frames, it will be ignored.

Lightsheet orientation: Provides a convenient list of predefined lightsheet orientations (normal vectors) that are parallel to the coordinate axes.

Lightsheet normal: A vector that is orthogonal to the lightsheet plane. Used to specify the orientation of the lightsheet.

Lightsheet position: A point in object coordinates that lies in the center of the lightsheet (mm).

Lightsheet width: The thickness of the lightsheet (mm).

Particles that are not located within the specified lightsheet will be ignored.

15.85.2 Time-resolved PTV


Time-resolved PTV needs only two frames from a single camera in order to do tracking. The camera does
not need to be calibrated. As a result, the coordinates are in pixels instead of mm. The settings for
time-resolved PTV are similar to the voxel-based method.

Even though the time-resolved method is 2D, the results can be visualized in 3D.

15.86 Universal Outlier Detection
The Universal Outlier Detection analysis is used to detect and optionally substitute false vectors based on
a normalized median test using the surrounding vectors. The technique is well known as the Universal Out-
lier Detection for PIV data, and was presented by Westerweel & Scarano in Experiments in Fluids 2005. It
was proven that a small adaptation to the standard algorithm for the median test by introducing a single
threshold to the normalized vector residuals makes the algorithm tolerant (universal) to a variety of dif-
ferent flow conditions and characteristics.

The DynamicStudio implementation uses a rectangular vector neighborhood defined by an odd M x N
number of vectors. The number of surrounding vectors included in the algorithm depends on the location of
the displacement vector in the data set. In the 3 following examples, using a 5x5 vector neighborhood, the
displacement vector is in the corner of, on the edge of, and well inside the data set respectively.

a) b) c)

As can be seen, when the displacement vector is in a corner a) only 8 neighboring vectors are included in
the median calculation, whereas if the displacement vector is on the edge of the data set b) 14 surrounding
vectors are used, and when it is well inside the data set c) 24 = 5x5 - 1 are used. Note that in none of the cases is the
displacement vector itself used in the calculation.

The algorithm also ignores previously rejected vectors. If the displacement vector is a rejected vector, it is
left unchanged. If one or more of the neighboring vectors are rejected vectors, these are ignored when cal-
culating the median. If no surrounding vectors can be used in the median calculation, the displacement
vector is left unchanged.

The normalized vector residual is calculated as:

r_0 = |U_0 − U_m| / (r_m + ε), where

U_0 is the displacement vector.

U_m is the median vector calculated using the neighborhood vectors {U_1, U_2, …, U_(M×N−1)}.

r_m is the median of the neighborhood residuals {r_1, r_2, …, r_(M×N−1)}, where r_i = |U_i − U_m| for i = 1, …, M×N−1.

ε is the minimum normalization level.

If the normalized vector residual r_0 is above the detection threshold, the displacement vector U_0 can be
substituted by its median vector U_m, otherwise it is left unchanged.
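As an illustration of the test itself (not DynamicStudio's implementation), a minimal sketch of the normalized median check for a single interior vector could look like this; U and V are assumed to be 2D arrays of displacement components and (i, j) an interior grid position:

import numpy as np

def normalized_median_residual(U, V, i, j, eps=0.1):
    # Collect the 8 surrounding vectors (interior point assumed; rejected vectors not handled here).
    nb = [(U[r, c], V[r, c])
          for r in (i - 1, i, i + 1) for c in (j - 1, j, j + 1)
          if (r, c) != (i, j)]
    nb = np.asarray(nb)
    u_m = np.median(nb, axis=0)                      # component-wise median vector
    r_i = np.linalg.norm(nb - u_m, axis=1)           # neighborhood residuals
    r_m = np.median(r_i)                             # median residual
    d = np.linalg.norm(np.array([U[i, j], V[i, j]]) - u_m)
    return d / (r_m + eps)                           # normalized residual r_0

# A vector is flagged as an outlier when the residual exceeds the detection threshold (typically around 2):
# is_outlier = normalized_median_residual(U, V, i, j) > 2.0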

In the recipe dialog for the Universal Outlier Detection analysis the neighborhood size, the detection
threshold and the minimum normalization level can be specified. The output can either be validated and
the invalid vectors rejected or the invalid vectors can be substituted by the median vector calculated using
the neighborhood vectors.

Example of the result of a Universal Outlier Detection analysis:

a) Standard cross-correlation PIV output. From the image it can clearly be seen that some of the vectors
are expected to be invalid.

b) The same data with the Universal Outlier Detection analysis applied. The red vectors indicate that the
vector has been rejected by the filter. Using the default settings of the Universal Outlier Detection, most of the
expected invalid vectors are found.

c) The same data with the Universal Outlier Detection analysis applied. The green vectors indicate that
the vector has been substituted by the filter. The same vectors that were invalidated are now replaced by the
median vector calculated using the neighborhood vectors.

15.87 UV Scatter plot Range Validation


Any vector map can be validated against a user-defined expected range of velocities. This is done by
selecting the ensemble containing the vector maps in question and then selecting the analysis method 'UV
Scatter plot Range Validation' in the category 'PIV Signal'.

The recipe of UV Scatter plot Range Validation plots the vectors in a vector map in an XY scatter plot,
where the X axis is the U component (pixels) and the Y axis is the V component of the vectors.
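The validation criterion itself is simple; a small sketch of the equivalent check on exported pixel displacements (array names, file names and limit values are illustrative assumptions) could be:

import numpy as np

# U, V: pixel displacement components of one vector map; limits chosen by the user.
u_min, u_max = -2.0, 6.0
v_min, v_max = -3.0, 3.0

U = np.loadtxt("u_pixels.txt")
V = np.loadtxt("v_pixels.txt")

# Vectors falling outside the rectangle in the (U, V) scatter plot are flagged invalid.
invalid = (U < u_min) | (U > u_max) | (V < v_min) | (V > v_max)
print(f"{invalid.sum()} of {invalid.size} vectors rejected")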

Suitable limits are often best determined by simple trial and error. Try various values, press 'Apply' to see
how they affect the vector maps and, when you're satisfied, press 'OK' to validate the remaining vector
maps in the parent ensemble.
It is possible to set the limits either by typing in the values or by dragging a rectangle in the plot.
Here's an example of a vector map that has been validated with the settings above: Invalidated vectors
are color coded in red, while the rest remain blue to indicate that they are deemed valid vectors:

Please note:
UV Scatter plot Range Validation does NOT substitute invalid vectors with an estimated guess for the cor-
rect velocity. To do this you need to apply yet another validation method such as Moving Average Val-
idation.

15.88 Vector Arithmetic


As indicated by the name, the analysis method 'Vector Arithmetic' enables the user to e.g. subtract a velocity
vector from a velocity vector map. The result is another vector map, which can be examined and/or
used for further processing. The analysis method can be applied to both 3D and 2D vector maps as well
as scalar maps (which can be considered 1D vector maps).
You can add, subtract, multiply or divide all vectors in your vector map(s) with a fixed vector, or you can
apply the analysis with another dataset as operand. The latter is commonly used to e.g. subtract the
mean vectors from instantaneous vectors in order to reveal vortices conveyed with a bulk flow.
This requires that you first identify the (mean) vector map that you wish to subtract from the other vector
map(s):
To do this you must of course calculate the mean velocity vectors using 'Vector Statistics'. Having done
that, right-click the Vector Statistics and choose 'Select' from the context menu. Alternatively left-click the
statistics while pressing the Ctrl-key. Either way a small checkmark will appear beside the vector statistics
icon, indicating that this dataset has now been selected for use in an upcoming analysis.

You can now return to the ensemble containing the vector maps from which you wish to subtract the
mean. Right-click it, select 'Analyze...', and in the resulting dialog choose 'Vector Arithmetic' in the cat-
egory 'Vectors and derivatives':

In the resulting analysis recipe you can choose to either subtract a fixed vector or subtract another vector
map. If you've previously selected another vector map such as the mean vector map as outlined above
this option will be chosen by default and the name of the ensemble will be listed as shown below

If you did not pre-select another vector map for subtraction you will only have the option of subtracting a
fixed value, which you can enter in the lower half of the recipe.

Performing the vector subtraction will give you a new ensemble of vector maps all calculated as the orig-
inal vector maps minus either a fixed vector or a chosen vector map.
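Conceptually the mean-subtraction use case amounts to the following element-wise operation on each instantaneous map (a minimal numpy sketch, not the DynamicStudio implementation; file and array names are illustrative):

import numpy as np

# inst_u, inst_v: stack of instantaneous maps, shape (n_maps, ny, nx)
inst_u = np.load("instantaneous_u.npy")
inst_v = np.load("instantaneous_v.npy")

# Ensemble mean over all maps (what 'Vector Statistics' provides).
mean_u = inst_u.mean(axis=0)
mean_v = inst_v.mean(axis=0)

# 'Vector Arithmetic' with the mean map as operand: fluctuating velocity fields.
fluct_u = inst_u - mean_u
fluct_v = inst_v - mean_v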

As you can see from the recipe, Vector Arithmetic can do more than subtract values; it can also add,
multiply or divide, combining with other datasets and/or constant values as operands.

Not all combinations of input (parent) data and operator are possible or meaningful. Possible combinations
are listed in the table below:
Input (Parent) dataset
Scalar 2D Vector 3D Vector
Operand Scalar + - * / + - * / + - * /
(constant or 2D Vector + - * / + - * / + - * /
other dataset) 3D Vector + - * / + - * / + - * /

Operations with a green background color are possible, while operations with a red background color are
not allowed. Arithmetic operations with a yellow background color are possible, but not always meaningful;
often they will only be relevant with constant (i.e. user-defined) operand values, while using
another dataset as operand in these cases will typically be of very limited use.

Resulting datasets will generally be of the same type as the parent (i.e. Scalar - Scalar, 2D - 2D & 3D -
3D).

If you attempt one of the operations marked with red in the table above, you will get an error message
explaining that there is a mismatch of dimensionality between the Input and Operand datasets. Similarly
you will get another error message if the input and operand datasets do not have the same size (i.e. a
different number of vectors horizontally and/or vertically).

15.89 Vector Dewarping


In PIV and most other measuring techniques based on light-sheets and cameras, it is assumed and/or
required that the camera is oriented normal to the light-sheet.
In many experiments this is however not feasible, either because of restricted optical access to the experimental
setup or because the camera would thereby disturb the flow-field under investigation. In such
experiments measurements may have to be performed with an off-axis camera, looking at the light-sheet
at an angle instead of normal to it.

Images recorded with an off-axis camera will suffer from perspective distortion, meaning that the scale factor
is not constant, but varies across the camera's field of view. With numerical models describing the perspective
distortion ("warping"), it is however possible to compensate and correct ("de-warp") the images
themselves or (in the case of PIV) correct the vector maps derived from the warped images.
Due to perspective, off-axis cameras generally cover a larger area of the flow-field than corresponding on-axis
cameras, but instead of a square, each pixel covers an oblong trapezoidal section of the flow-field,
and this may cause loss of information due to smearing of features within the camera's field of view. After
dewarping of the image each pixel will again cover a square section of the flow-field, but the information
lost has not been recovered!
Similar considerations apply to the dewarping of vector maps.
For small off-axis angles (smaller than 30°-45°) the problem is small, but nevertheless it is recommended
to use on-axis cameras whenever possible.
If off-axis cameras cannot be avoided, keep the off-axis angle as small as possible.

15.89.1 Setting the z-value and w-value


Performing PIV-measurements with an off-axis camera it is possible to de-warp the images prior to cor-
relation, but dewarping vector maps is a possible (and usually faster) alternative. The process is similar to
the one used for dewarping images.

As for the dewarping of images, the vector map is assumed to be recorded at Z=0 unless otherwise specified
by the user. Similarly, nonzero Z-values should be entered in the Log Entry of the vector map dataset
properties, but beyond this you also have the possibility to specify an overall W-velocity component (m/s).

It is well known from conventional PIV that flow through the light-sheet can severely disturb the meas-
urement of in-plane velocities, especially if the through-plane velocity component is of the same or higher
order of magnitude as the in-plane velocities. With increasing off-axis angles PIV becomes even more sen-
sitive to the effects of through-plane motion, but knowing the through-plane velocity it is possible to predict
and compensate for the resulting errors in the calculated in-plane velocities.
When dewarping vector maps recorded with an off-axis camera you are therefore strongly encouraged to
enter a W-value even if all you have is an educated guess. If nothing is specified, the system will assume
Z = 0 mm and W = 0 m/s.

Example
In this example the effect of through-plane motion has not been accounted for, so the two vector maps
appear similar, but there are a few important differences:

l First of all the effects of perspective are clearly visible: Vectors in the original vector map are positioned
in a rectangular grid, where neighboring vectors always share the same x- or y-coordinate,
and both horizontal and vertical distances between neighbors are constant. In the de-warped vector
map this is no longer the case; vectors are not random, but positioned in a non-uniform trapezoidal
grid, where neighbors share neither x- nor y-coordinates, and where both horizontal and
vertical distances between neighbors vary across the area covered.
l Secondly the imaging model used is responsible for a change from image to object coordinates,
so positions and displacements are measured in mm instead of pixels, and velocities are meas-
ured in m/s instead of pixel/s. Note also that the origin has moved from the lower left corner of the
original vector map to the center of the de-warped vector map. The new origin corresponds to the
position of the zero marker in the calibration images used for the imaging model fit.
l Despite these changes there is a one-to-one correspondence between vectors in each of the two
vector maps, meaning that information regarding for example vector status codes (Valid,
Rejected, Outside, etc.) is maintained, and vector validation methods can thus be applied either
before or after the dewarping depending on user preferences.

Vector map before and after dewarping (top and bottom respectively).
The non-uniform grid of vector positions may impede further analysis such as the calculation of vorticity
and/or streamlines, since most algorithms for analysis of PIV data are designed for rectangular grids. To
overcome this problem you may wish to resample the de-warped vector map to get back to a uniform grid,
but please remember to validate the vector map before doing so.
A re-sampled vector is a weighted average of four neighboring vectors in the parent vector map. Assuming
for simplicity that these four vectors are weighted 25 % each, it is obvious that an undetected outlier
among them will be much harder to detect in the re-sampled vector map than it was in the original vector
map.

15.90 Vector Interpolation


Vector interpolation takes as input a mask and a vector dataset and reconstructs (interpolates) vectors
that overlay selected regions in the mask. The method of Thin Plate Splines (TPS) is used to interpolate
the data. TPS is an algorithm for interpolating and/or fitting 2D data. As the name implies, TPS essentially
takes 2D data as input and "bends" a flat plate until it passes through all the points. In the event of too much
noise and the possibility of singularities, the user can apply "relaxation". Zero relaxation forces the plate
(surface) to pass through all the input points; a large value reduces the result to a least squares approximation.

Interpolation is controlled by the following two parameters:

1. Radius: the distance in pixels around a point of interest within which data points are collected for the
interpolation.
2. Relaxation: the degree to which the requirement that all points pass through the resulting surface is
relaxed during interpolation.
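Thin plate spline interpolation with a relaxation (smoothing) term is available in common libraries; a minimal sketch using scipy (not the DynamicStudio implementation; file names and the smoothing value are illustrative) that fills masked vector locations from the surrounding valid vectors could look like this:

import numpy as np
from scipy.interpolate import RBFInterpolator

# xy_valid: (n, 2) positions of valid vectors, uv_valid: (n, 2) their (u, v) components.
# xy_masked: (m, 2) positions inside the masked region where vectors should be reconstructed.
xy_valid = np.load("xy_valid.npy")
uv_valid = np.load("uv_valid.npy")
xy_masked = np.load("xy_masked.npy")

# kernel='thin_plate_spline' gives the TPS surface; 'smoothing' plays the role of relaxation:
# 0 forces the surface through all points, larger values tend towards a least squares fit.
tps = RBFInterpolator(xy_valid, uv_valid, kernel="thin_plate_spline", smoothing=0.1)
uv_interp = tps(xy_masked)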

Applying the mask above results in the vector map below. Interpolated vectors are marked
as substituted (green).

Overlay comparing original and interpolated vectors.

15.91 Vector Masking


This method is used to mask velocity vectors in user-defined regions of a vector map. Note that the vec-
tors are not changed or removed from the map but simply tagged with status code 'Outside', 'Disabled' or
'Rejected'. Based on the status code, these vectors can then be hidden from the vector map display
and/or excluded from further analysis.

To apply masking you must first define a Mask, using either the analysis method "Define Mask" (on page
284) or a regular single-frame image with the Custom Property 'Mask' enabled (See "Custom Properties"
(on page 232)). The mask ensemble must contain either one static mask or N dynamic masks, where N
equals the number of vector maps to be masked. Dynamic masks are often derived from the same parent
images as the vector maps, e.g. using the "Image Processing Library (IPL)" (on page 329), but please note
that vector masking requires single-frame masks.
If you use regular images for masking, nonzero pixels in the Mask image identify regions in the vector
map that are to be left untouched, while zero-valued pixels in the Mask image identify regions where
vectors will be tagged 'Outside'. It is the vector location (i.e. the center of the Interrogation Area) that determines
whether or not a vector is masked, regardless of whether the corresponding IA extends into masked areas.
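The masking rule can be illustrated with a small sketch (illustrative numpy code, not the DynamicStudio implementation; file names are assumptions): each vector is tagged according to the mask pixel at its own grid location only:

import numpy as np

# mask: single-frame mask image (0 = masked out, nonzero = keep), in the same pixel coordinates as the images.
# x_pix, y_pix: pixel coordinates of the vector locations (centers of the interrogation areas).
mask = np.load("mask.npy")
x_pix = np.load("x_pix.npy").astype(int)
y_pix = np.load("y_pix.npy").astype(int)

# A vector is tagged 'Outside' when the mask pixel at its own location is zero,
# no matter how far its interrogation area extends into masked regions.
outside = mask[y_pix, x_pix] == 0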
To apply a mask to a vector map, pre-select the mask (See "Selection (Input to Analysis)" (on page 233))
and then select the ensemble containing the vector maps to be masked. Look for Vector Masking in the
category 'Masking'.
No matter what kind of mask you use, the vector masking recipe is the same and has no settings or
options:

The resulting masked vector ensembles are labeled with their own icon and thus differ visually from the parent
vector ensemble icon.

The following examples are based on a top-down view into a square water tank with a magnetic stirrer at
the bottom. The light sheet is horizontal and just above the spinner. In each image the (static) tank walls
can be seen as well as the (moving) spinner at the bottom of the tank.

Using "Define Mask" (on page 284) we can create a mask to identify and hide the noisy vectors from out-
side the tank:

...please note there is only one mask, which is applied to each of the parent vector maps successively.

Using the "Image Processing Library (IPL)" (on page 329) we can create a series of masks to remove vec-
tors overlapping the spinner:

...please note there are 20 masks here, one for each of the parent vector maps. The parent image and thus
the 'Derived Mask' are both double-frame and thus cannot be used directly for vector masking. Therefore
we extract a single frame mask using "Make Single Frame" (on page 451).

To remove both (static) walls and (dynamic) spinner we can apply the two masking operations suc-
cessively or we can merge the two masks into a hybrid mask by applying the static mask to the dynamic
ones (Using "Image Masking" (on page 313) with the 'Black-out areas' option):

...applying these hybrid masks to the vector maps we can remove both walls and spinner:

As stated above, Vector Masking tags invalid vectors with a status code 'Rejected', 'Outside' or 'Disabled',
but it does not in fact remove or change any vectors.
In the examples above vector map display options have been set to hide the masked out vectors:

In the default vector display all vectors are shown in which case the masked vector map could look some-
thing like this:

15.92 Vector Resampling


As indicated by the name, this method re-samples velocity vector maps by interpolating between neighboring
vectors. A re-sampled vector map is labeled with the corresponding icon and, if needed, it can be edited;
changes in the data-sheet will be updated on the vector map. Typically, re-sampling is done to refine the
spatial resolution of the velocity vector map(s). The method is flexible enough to enable vector map "dilation"
too (which would lead to a loss of resolution), e.g. to match CFD grids.

To use this method, select the map(s) of interest and look for the method named 'Resampling of vector
map' in the 'Coordinates' category.

Then select the appropriate option:

l Automatic re-sampling of vector map


l User-defined re-sampling of vector map

15.92.1 Automatic re-sampling


With automatic re-sampling, the user just needs to define the over-sampling factor. Velocity vector maps
are spatially refined when this factor is greater than 1.00, whereas spatial resolution is lost when the factor
is lower than 1.00 (see examples below).
By default the grid is square, but non-isotropic re-sampling can be made if necessary by entering separate X- and Y-
over-sampling factor values.
Example of velocity vector map re-sampled with (Bottom, left) (2 x 2) over-sampling factor and (Bottom,
right) (½, ½) over-sampling factor. The top image shows the reference velocity vector map.

15.92.2 User-defined re-sampling


A user-defined re-sampling grid can be applied too: specify the region of the map to consider in the (X, Y)
Min/Max boxes and enter the desired step size. For anisotropic re-sampling, uncheck the 'Square grid'
option and give a value to the Y-step size.

Example of velocity vector map re-sampled manually on the region [(50.5, 50.5);(500.5; 700.5)] with aniso-
tropic (2 x 3) grid step-size. (Left): Reference vector map and (Right): Re-sampled vector map.

15.92.3 Edit data


To access the raw data of the refined velocity vector map, open the vector map and select
'Open as numeric' from the menu.

Move the mouse cursor over the top left cell and right-click to get the 'Display options' and
'Export as file' capabilities. Comparison with the velocity data of the raw vector map can easily be
made by opening the data-sheet of that velocity vector map.

In this example, the first 2 columns give the position in CCD pixel coordinates, whereas the next 2
columns give the U- and V-components (in m/s) of the velocity at the selected positions.

15.93 Vector Rotation/Mirroring


This method is used to rotate or mirror vector maps.
To rotate a vector map, look for the method 'Rotate / Mirroring' in the category 'Coordinates' and select the
option(s) of interest, i.e.

l Rotate: by 0°, 90°, 180° or 270°


l Mirror around the Y- and/or X-axis

The resulting vector map is then labeled with an icon clearly showing that a rotation/mirroring has been
applied to the raw or masked vector map. Note that images can be rotated (by any angle) as well,
using the 'Rotate' method of the Image Processing Library module.

15.94 Vector/Scalar subtraction


As indicated by the name, the Vector/Scalar subtraction method enables the user to subtract a vector or
scalar from a scalar or vector map. The result is another vector/scalar map, which can be examined and/or
used for further processing.
In the following the method is described and applied to a vector map, but it can be applied to a scalar map
in exactly the same manner.
You can subtract a fixed vector from all vectors in your vector map(s) or you can subtract one vector map
from another. The latter is commonly used to e.g. subtract the mean vectors from instantaneous vectors
in order to reveal vortices conveyed with a bulk flow.
This requires that you first identify the (mean) vector map that you wish to subtract from the other vector
map(s):
To do this you must of course calculate the mean velocity vector using 'Vector Statistics'. Having done
that right-click the Vector Statistics and choose 'Select' from the context menu. Alternatively left-click the
statistics and press the Space-key. Either way a small checkmark will appear beside the vector statistics
icon indicating that this dataset has now been selected for use in an upcoming analysis.

You can now return to the ensemble containing the vector maps from which you wish to subtract the
mean. Right-click it, select 'Analyze...', and in the resulting dialog choose 'Vector/Scalar Subtraction' in
the category 'Vector & Derivatives':

In the resulting analysis recipe you can choose to either subtract a fixed vector or subtract another vector
map. If you've previously selected another vector map such as the mean vector map as outlined above
this option will be chosen by default and the name of the ensemble will be listed as shown below

If you did not pre-select another vector map for subtraction you will only have the option of subtracting a
fixed value, which you may choose to have the system calculate for you as the mean of all vectors in the
instantaneous vector map.
Performing the vector subtraction will give you a new ensemble of vector maps all calculated as the orig-
inal vector maps minus either a fixed vector or a chosen vector map. An example is shown below:

Instantaneous vector map.

Temporal (ensemble) average vector map.

Instantaneous vector map minus the ensemble mean.

If the size of the 2 vector maps is not identical, DynamicStudio will issue an error message and stop all fur-
ther calculations. Most likely this is a mistake, but if not you may overcome the problem by resampling
one of the vector maps so it matches the size of the other one. Refer to the help file of the Re-sampling of
vector map method (found in the Coordinates category) to get further information on this calculation.

Note
'Vector/Scalar subtraction' is a legacy analysis method and we recommend the use of 'Vector Arithmetic'
instead. (See "Vector Arithmetic" on page 573).

15.95 Vector Statistics


As indicated by the name, the Vector statistics method calculates statistics from multiple velocity vector
maps. Graphically results are presented as a vector map of mean velocity vectors, but a lot of other sta-
tistical quantities are calculated as well. These can for example be accessed via the numerical display
and include mean velocities, standard deviations, variances and covariance between different velocity
components. For each position (i.e. interrogation area) in the vector map, the number of vectors included
in the statistical calculations is also stored.

All vector statistics are labeled with the icon to be easily located for later processing.
To use the Vector statistics method, select an ensemble containing at least 2 vector maps of equal size.
Vector statistics supports 2D-2C, 2D-3C (stereo) and 3D-3C PIV data.
In the category Analyze/Statistics, look for the method called Vector Statistics and select it.

Among the 3 different options available, select the one corresponding to your needs:

l The 'All vectors' option includes all the vectors calculated (i.e. correct, bad and substituted, if any,
by the PIV algorithm used) for each interrogation area.
l The 'All valid vectors' option discards bad vectors.
l The 'All valid, non-substituted vectors' option only considers correct vectors and is thus the most
stringent option. This is the default, as it also represents the most realistic situation.

Press the Apply button to calculate the vector statistics and view the results (click the Display button to
access further visualization methods), and OK to accept the calculation.

Typical vector statistics result from PIV measurements of a jet in cross-flow.

A Vector statistics box always appears on the resulting map. This box, which can be moved over the map
(using the mouse), contains statistical information at every interrogation area. The format of the data can
be modified too: when the mouse hovers over the box, right-click and select the desired data format.
(Select the Hide option to remove the box from the map.)

15.95.1 Visualization methods


With the mouse, double-click on the image (or right-click on the image and select Display options):
scaling options, color codes and other advanced data representations are now available. See "Vector
Map Display" on page 651.

15.95.2 Numeric data display


To access raw data of the vector map calculated, open the map and select the menu Open as numeric.

Move the mouse cursor over the top left cell and right-click to get the Display options.

The first columns show x-/y-coordinates for each vector. For all vector maps coordinates will be available
in mm or as a simple index number, while for 2D vector maps, coordinates will also be available in pixels.
The next columns show mean velocities. For all vector maps velocities will be available in m/s, while for
2D vector maps pixel displacements are also available. Then follows the length of each mean velocity vec-
tor and the standard deviations on each of the velocity components (2 or 3). These quantities are all in
m/s. The next column of data is the sum of the variances in all 2 or 3 directions. Mathematically the sum of
variances is proportional to the turbulent kinetic energy, but please remember that PIV produces velocity
estimates as a spatial average over the interrogation area. Vector statistics also calculates the covariance
between the different velocity components and normalizes it with the standard deviations to get the dimensionless
correlation coefficient. Mathematically the covariance is proportional to the Reynolds stress, but
please note again that PIV performs spatial averaging over the interrogation area, while classic fluid
mechanics operates on an infinitely small fluid element (i.e. a point).
The column labeled N gives the number of vectors used to calculate the statistics. In the example above
20 vector maps were used for the statistical calculations. When N = 18 in the numerical display, it means
that 2 of the 20 vectors were discarded from the statistics calculation, typically because they were
marked as invalid. The last column shows the status for each vector position. The status is numerically
coded, but can be shown as text by right-clicking inside the numerical display and selecting 'Show Status
as Text'.
Data format can be modified too so as to e.g. match a typical format when using the export function.

15.95.3 Formulas used


Classic formulas for statistical quantities are used. Expressions below use symbols Un and Vn to
describe velocity samples, and corresponding formulas are of course used for U, V, W and their com-
binations.

Mean velocity:            ū = (1/N) Σ u_n
Variance:                 σ_u² = (1/(N−1)) Σ (u_n − ū)²
Standard deviation:       σ_u = √(σ_u²)
Covariance:               cov(u, v) = (1/(N−1)) Σ (u_n − ū)(v_n − v̄)
Correlation coefficient:  ρ_uv = cov(u, v) / (σ_u σ_v)
Skewness:                 skew_u = Σ (u_n − ū)³ / (N σ_u³)
Kurtosis:                 kurt_u = Σ (u_n − ū)⁴ / (N σ_u⁴) − 3
A total of N data samples is assumed, so all sums are from n = 1 to n = N. Generally you would expect N
to match the number of vector maps used for the calculation, but the user may choose to exclude vectors
that have f.ex. been identified as invalid. Consequently N may be smaller than the # of vector maps for
some of the positions in the resulting vector statistics map.
For the calculation of variance and covariance please note the division with (N-1) instead of N. These
quantities are based on deviations from the mean, but the mean is itself derived from the same data sam-
ples, so using N would bias results towards zero. For sufficiently large values of N this will be of little
importance, but with PIV you often have limited amounts of data. Please remember though that reliable
statistics typically require at least 20-30 samples that are independent of one another. It is mathematically
possible to calculate statistics on just two samples, but results have little or no physical meaning.
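The same quantities are straightforward to verify on exported data; a minimal numpy sketch for one position (file and array names are illustrative assumptions) following the formulas above could be:

import numpy as np

u = np.load("u_samples.npy")          # N velocity samples of one component at one position
v = np.load("v_samples.npy")
N = u.size

u_mean = u.mean()
var_u = ((u - u_mean) ** 2).sum() / (N - 1)        # note the division by N-1
std_u = np.sqrt(var_u)
cov_uv = ((u - u_mean) * (v - v.mean())).sum() / (N - 1)
std_v = np.sqrt(((v - v.mean()) ** 2).sum() / (N - 1))
rho_uv = cov_uv / (std_u * std_v)
skew_u = ((u - u_mean) ** 3).sum() / (N * std_u ** 3)
kurt_u = ((u - u_mean) ** 4).sum() / (N * std_u ** 4) - 3.0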

15.96 Vector Stitching


Vector map stitching merges vector maps of different sizes and locations into one large vector map. Basic
requirements are user-defined positions for each vector map and predetermined scale factors.
Select the vector maps from the database tree. You can either have the software automatically determine
the dimensions of the final vector map, or define a region yourself, in which case only vectors that fall inside
this region are accepted.

Automatic determination of final vector dimensions: the software will determine the dimensions of the
output vector map on the basis of the input ensembles.
Vector map dimensions: the user-supplied minimum and maximum dimensions of the output vector
map. Any vectors outside these limits will be ignored.
Calculated area: the total area of all input vector maps.
Number of vectors: the total number of input vectors.

Above: Result of two vector maps of the same size positioned adjacent to each other.

15.97 Volumetric Velocimetry
DynamicStudio provides three methods for analyzing volumetric imaging data:

l Volumetric Particle Tracking Velocimetry


l Tomographic Particle Tracking Velocimetry
l Least Squares Matching (LSM)

The selection of which technique to use depends on the hardware configuration and the particular physical
nature of the measurement. The following table outlines the operating field for each of the above tech-
niques:
Technique                                    Configuration                  Medium        Seeding density
Volumetric Particle Tracking Velocimetry     2 cameras – time resolved      Water         Low
Tomographic Particle Tracking Velocimetry    3 cameras – time resolved      Water         Medium
Least Squares Matching                       3-4 cameras – double-framed    Water / air   High
Both Volumetric Particle Tracking Velocimetry and Tomographic Particle Tracking Velocimetry require tel-
ecentric lenses.
Volumetric Particle Tracking Velocimetry
The simplest to operate of all three techniques, Volumetric PTV requires only two cameras running in
time-resolved mode and one pinhole imaging model fit for each camera (See "Multi Camera Calibration"
on page 371 or See "Imaging model fit" on page 347 for more information on camera calibration).
The general processing scheme is as follows:

l Extraction of individual particles in images from cameras A and B over time period T.
l 2D particle tracking of camera A data (using 3-frame tracking).
l Matching of data from camera B for all track histories found in camera A.
l Transformation and reduction to velocity.

Inputs:

l Pinhole imaging models for cameras A and B.


l Time-resolved image inputs from cameras A and B.

Outputs:

l Particle positions
l 2D tracks from camera A or B
l 3D tracks

Figure 1: Classical PTV recipe.

Threshold Image background subtraction level


Search area (X, Y) Maximum distance between particles in pixels (2D tracking)
Relative angle Angle (deg) between cameras A and B.
Lens magnification Telecentric lens magnification
Output Data output format

Figure 2: Options dialog
Acceptance angle: Maximum track angle deviation between 2 particles.
Deviation from epipolar line: Maximum allowed pixel distance from the epipolar line during stereoscopic matching.
Min. particles per track: Minimum number of particles per track.
Max. 3D track deviations: Maximum 3D track deviation (in mm) between 2 particles in a track.
Lightsheet orientation: Provides a convenient list of predefined lightsheet orientations (normal vectors) that are parallel to the coordinate axes.
Lightsheet normal: A vector that is orthogonal to the lightsheet plane. Used to specify the orientation of the lightsheet.
Lightsheet position: A point in object coordinates that lies in the center of the lightsheet (mm).
Lightsheet width: The thickness of the lightsheet (mm).
Fundamental matrix override: Override the input fundamental matrix.

Both 2D and 3D tracks represent time histories of individual particles measured in the flow, that is, for
each step in time the position and velocity for one particle are stored.

Tomographic Particle Tracking Velocimetry


As opposed to tracking individual particles in Volumetric PTV, TomoPTV determines particle locations by
reconstructing the voxel spaces of camera pairs and using only those voxels that pass specific criteria.
Determining potential particles via voxel space allows for higher concentrations of seeding, and using the
concentration of voxels with a minimum gray-level value assists in avoiding ghost particles (“false
tracks”). TomoPTV requires 3 cameras placed coplanar about the measurement volume.

Figure 3: TomoPTV recipe

Threshold Image background subtraction level


Search area (X, Y, Z) Maximum distance between particles in pixels.
Relative angle cam 1 & 2 Angle (deg) between cameras A and B.
Relative angle cam 1 & 3 Angle (deg) between cameras A and C.
Lens magnification Telecentric lens magnification
Output Data output format

Figure 4: Options dialog

Minimum voxel counts Minimum number of voxels per particle.

Lightsheet width Lightsheet thickness (mm)
Lightsheet midpoint Position of lightsheet midpoint (mm)
Min. particles per track Minimum number of particles per track.
Fundamental matrix override Override input fundamental matrix.

Inputs:

l Fundamental matrix for cameras A, B, C.


l Image input from cameras A, B, C.

Outputs:

l 3D particle tracks
l Averaged reconstructed voxel space.
l Raw reconstructed voxel space.

Processing scheme:

l Image threshold
l Voxel reconstruction of camera pairs (A+B) and (A+C)
l Transformation and alignment of voxel spaces
l 3D tracking based on accumulated voxels (4-frame tracking).

Least Squares Matching


Least Squares Matching (LSM) is a method for determining 3D velocity fields in highly seeded flows in
water and air. In contrast to Volumetric PTV and TomoPTV, the output data are equally spaced vectors
and the input data consists of double-frame images. Regular cuboids from two or three reconstructed
voxel volumes are analysed to determine local affine transformations.
See "Least Squares Matching" on page 401

15.97.1 References
J. Kitzhofer, P. Westfeld, O. Pust, H. G. Maas and C. Brücker. Estimation of 3D deformation and rotation
rate tensor from volumetric particle data via 3D Least squares matching. In: Proceedings of the 15th Int
Symp on Applications of Laser Techniques to Fluid Mechanics. Lisbon, Portugal, 05-08 July, 2010.

T. Nonn. Application of high performance computing on volumetric velocimetry processing. In: Pro-
ceedings of the 15th Int Symp on Applications of Laser Techniques to Fluid Mechanics. Lisbon, Portugal,
05-08 July, 2010.

P. Westfeld, H.-G. Maas, O. Pust, J. Kitzhofer and C. Brücker. 3-D least squares matching for volumetric
velocimetry data processing. In: Proceedings of the 15th Int Symp on Applications of Laser Techniques to
Fluid Mechanics. Lisbon, Portugal, 05-08 July, 2010.

H.-G. Maas, P. Westfeld, T. Putze, N. Bøtkjær, J. Kitzhofer, C. Brücker. Photogrammetric Techniques in
Multi-Camera Tomographic PIV. In: Proceedings of the 8th International Symposium on Particle Image
Velocimetry - PIV09. Melbourne, Victoria, Australia, August 25-28, 2009.

J. Kitzhofer, C. Brücker and O. Pust. Tomo PTV using 3D Scanning Illumination and Telecentric Imaging.
In: Proceedings of the 8th International Symposium on Particle Image Velocimetry - PIV09. Melbourne,
Victoria, Australia, August 25-28, 2009.

15.98 Voxel Reconstruction
15.98.1 Introduction
When performing reconstruction based 3D velocity measurements, the analysis procedure consists of
two parts:

1. Volumetric Reconstruction
2. Velocity analysis

The Voxel Reconstruction method handles the first part by generating a voxel based 3D representation
of the measurement domain for each time step and double image. This representation is called a voxel
space, or sometimes a voxel volume. A voxel can be imagined as a pixel extended into 3D
space, with each voxel carrying a gray value just like a pixel. Typically all edges of a voxel are of the
same length. Different approaches exist to reconstruct a voxel volume; in DynamicStudio two
different methods are available:

1. " MinLos Reconstruction" (on page 607)


2. " SMART Reconstruction" (on page 608)

After the voxel based reconstruction, the velocities can be calculated by applying a 3D LSM on the
reconstructed voxel volume. Alternatively, if there is no need to store the voxel spaces, one can directly use
the See "Least Squares Matching" on page 401, which offers the option to apply the reconstruction there.
Having the voxel spaces stored can be an advantage: when testing different LSM settings the voxel
spaces do not need to be recalculated for every test, which saves evaluation time.
The recipe is located in the "Volumetric" category of the analysis method menu:

In order to apply this analysis method, at least the cameras with their time-synchronous particle images as
well as the calibration files of the cameras need to have checkmarks, as seen in the example image (use the
spacebar to checkmark the datasets and calibrations). Furthermore, this method can only be applied to
double images. If the dataset consists of an ensemble of single-frame images (i.e. a time-resolved measurement),
please use the dedicated recipe to obtain double-frame images (see "Make Double Frame" (on
page 450)).

15.98.2 The Voxel Reconstruction Recipe


The voxel reconstruction recipe, presented in the following picture, is divided into 2 sections that cor-
respond to the 2 steps of the reconstruction procedure :

o see" Voxel space setup" (on page 605)


o see "Reconstruction setup" (on page 606)

Each of these points will be addressed in the following. Additionally you can choose distributed
analysis and whether you will be prompted to continue the reconstruction after the first double image pair.
These checkboxes are found in the lower part of the Voxel Reconstruction user interface.

Voxel space setup


As already mentioned, a voxel is to a volume what a pixel is to an image, each voxel representing
a gray value from the 3D particle distribution. In order to perform the reconstruction, the user has
to provide the recipe with some geometrical parameters regarding the voxel space:
Resolution of the voxel space
The voxel resolution determines the number of voxels used to discretize the voxel space,
and corresponds to the physical size of a voxel. This number should be equivalent to the camera's
resolution, i.e. the physical size in mm per pixel. The smaller this number, the higher the number of voxels,
and thus the amount of data to be computed.
Origin of the voxel space
Enter here the coordinates (x, y, z) of the origin of the desired voxel space, with respect to the calibration.
Entries are in mm. The next image shows a target image with the overlaid calibration. Now we want to
start our volume at the x|y|z position -20 | -20 | -10 mm; the starting point in x and y can be seen
directly in the image. The z position results from the first target calibration image with a right-hand coordinate
system, in this case behind the imaged plane.

You can always check your calibrated volume by overlaying the calibration onto the calibration images
(compare to the last image).
Size of the voxel space
Indicate here the size of the voxel space in millimeters. In the example above, the values to enter would
be (40|40|20), which leads to the end points shown above in the x- and y-direction. The end point in z
would lie in a plane in front of the image.
The resulting voxel space dimensions are calculated and displayed according to the specified dimensions
of the voxel space and the voxel resolution.

For more information regarding the calibration see also Multi Camera Calibration

Reconstruction setup
This part defines the reconstruction method used to generate the voxel space. Two methods are available
via a drop down menu: the very fast MinLos reconstruction and the more precise, but computationally
more costly and therefore slower, SMART reconstruction. For low seeding densities up to
0.03 particles per pixel (ppp) and a four camera setup, or just for some test analysis, the MinLos reconstruction
should give good results. If the seeding density exceeds this limit, SMART should be the
reconstruction method of choice.
The different options and the graphical user interface are shown below:

MinLos Reconstruction
MinLos stands for Minimum Line of Sight. For the reconstruction this means that the minimum gray value
intensity of all lines of sight crossing a voxel is assigned to that voxel. Hence it is a kind of intensity
comparison, and the only option that can be chosen, by a checkbox, is:
Normalize Images. This applies a histogram equalization between the different camera images. This is
sometimes necessary, especially if the cameras are positioned in different scattering directions of the
laser illumination and the resulting images have big differences in their brightness level.
A sketch of the working principle is given in the next figure, where two particles are present in a voxel
space. The lines of sight of cameras 1 and 2 alone would produce four crossing positions and thereby four
particles. These non-existing but reconstructed particles are the so-called ghost particles or ghost intensities.
With the third camera the two ghost particles are removed, due to the 0 entry for the pixel representing this
line of sight. With the minimum value chosen, the gray value at the centre of the left particle would be 197,
because this is the minimum value of the lines of sight crossing it. Correspondingly the gray value for the right
particle would be 124.
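The core of the MinLos idea can be sketched in a few lines (illustrative Python pseudocode, not the DynamicStudio implementation; project_to_camera is a hypothetical helper that maps an object-space point to pixel coordinates using the camera calibration):

def minlos_voxel_value(point_xyz, images, calibrations, project_to_camera):
    # For one voxel center: sample the pixel intensity along each camera's line of sight
    # and keep the minimum, so a zero pixel in any camera suppresses ghost intensities.
    intensities = []
    for img, cal in zip(images, calibrations):
        px, py = project_to_camera(point_xyz, cal)       # hypothetical projection helper
        intensities.append(img[int(round(py)), int(round(px))])
    return min(intensities)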

SMART Reconstruction
SMART stands for Simultaneous Multiplicative Algebraic Reconstruction Technique.
The general idea of SMART is that an observed pixel intensity in the camera images should equal the
integration of the reconstructed voxel intensities along the pixel's line of sight. If the pixel intensity and
the projected intensity differ, all voxels along the line of sight are corrected.
The amount of correction applied to each voxel depends on the weight that the voxel has on the pixel's
intensity.
The schematic of updating the voxel entries can be seen in the next image.

From Elsinga et al. (2006)


The figure shows a representation of the imaging model used for tomographic reconstruction. A top
view of 2 cameras and one slice of the voxel volume is shown, thereby simplified to 2D. The gray
level indicates the value of the weighting coefficient (w_i,j) in each of the voxels with respect to the pixel
I(x1, y1). Note that this process is an iterative one, so that each voxel is updated multiple times. The schematic
clearly shows the necessity for this, because after the first iteration the lines of sight in the voxel
domain of each non-zero pixel are filled with gray values. Those gray values are removed with an
increasing number of iterations. The residual is defined by the difference in gray values between a
voxel and a pixel on the line of sight, and it decreases with the number of iterations.
As a starting point the voxel space is filled with entries from a first guess; in this case the first guess
comes from a MinLos, which is automatically applied before the SMART.
See "Color map and histogram" (on page 676).

So much for the theory; more information is given in the literature references below. The next point
discusses the interface of the SMART reconstruction.
Iterations: Specifies how many iterations of the reconstruction will be made, typically 5.
Relaxation: A damping parameter for the reconstruction procedure that needs to be smaller than 1. A typical
value is 0.1. Note: the smaller the relaxation parameter, the more iterations are necessary!
Normalize Images: This applies a histogram equalization between the different cameras, see "MinLos
Reconstruction" (on page 607).

Memory usage for the SMART reconstruction


Be aware of the fact that the 3D reconstruction needs a lot of main memory. For instance, the voxel
space alone needs the voxel dimensions multiplied by each other times 4 bytes in main memory. In our example
this results in 400*400*200*4 byte = 128 MB, but with the typical dimensions of a high speed camera
and a voxel depth of 1/4 of the x-y dimension it would be 1024*1024*256*4 byte = 1.1 GB.
Additionally, the weighting matrices for every camera also get very large (several 10 GB per camera),
depending on the seeding density and on how many pixels in the cameras have non-zero entries. If the
needed memory exceeds the computer's main memory, the weighting matrices will be stored on the local
hard drives. This causes many read and write accesses on the hard drive and slows down the SMART
reconstruction tremendously.

Hence the camera images should be pre-processed by the user in such a way that background (noise) pixel
areas end up with a gray value of 0. This reduces not only the amount of memory needed, but also the
calculation time for the SMART reconstruction.

Once the reconstruction is finished, the voxel volume will be displayed. For more information on the display
please see "3D Display" (on page 664).
Another possibility to visualize the voxel volume can be found in the recipe of the reconstruction: the
button 'View slice...' will bring up a menu to view x-y, x-z or y-z slices of the voxel space.

Here a X-Z slice of the voxel space is visualized.

The voxel space is displayed as 2D images corresponding to the selected planes of the voxel volume. It is
possible to scan into the depth of the volume using the drag tool. Contrast and zoom can be adjusted
via the context menu, opened by right-clicking on the display; here you can also switch between the double
frames and change the magnification. For more information about the histogram see "Color map and his-
togram" (on page 676)
One can also toggle between the frames by using the hot keys "t" for toggle, or "a" for frame 1 and "b" for
frame 2.

The number of voxels displayed corresponds to the number being reconstructed.

For more information about tomographic reconstruction see:

[1] Nonn, T. (2010): "Application of high performance computing on volumetric velocimetry processing",
Proceedings of the 15th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, Lisbon, Portugal,
05-08 July 2010.
[2] Elsinga, G.E., Scarano, F., Wieneke, B., van Oudheusden, B.W. (2005a): "Tomographic particle image
velocimetry", 6th Int. Symp. PIV, Pasadena, CA, USA.

15.99 Waveform Calculation


The Waveform Calculation (formerly known as Rescale Analog) enables you to apply one or more trans-
formations to analog waveform (generic) data. It allows you to combine data from any of the coincident
data channels with a number of built-in mathematical functions and operators.
Each transformation consists of a number of formulas representing the returned output data. Every output
column must have a name and an optional unit for the value.

15.99.1 Formulas
The Analog transform analysis uses formulas to define the transformations between input data (columns)
and output data (columns). One formula represents one output data column (in generic format) with a
name and units. A formula can be created using built-in operators and functions, along with the available input
variable names (see below).
The formula drop-down menu enables you to select between the built-in functions and operators, in
the first two columns, and the available input variables in the last column. The input variable names are
corrected using the naming convention described later.
When an item is selected it is automatically added to the formula editor. This is especially useful to iden-
tify and select the available input data (columns) for the formula.

15.99.2 Built-in Functions


The following table gives an overview of the functions supported. It lists the function names, the number
of arguments and a brief description.

Name Arguments Explanation
sin 1 sine function
cos 1 cosine function
tan 1 tangent function
asin 1 arcsine function
acos 1 arccosine function
atan 1 arctangent function
sinh 1 hyperbolic sine function
cosh 1 hyperbolic cosine
tanh 1 hyperbolic tangent function
asinh 1 hyperbolic arcsine function
acosh 1 hyperbolic arccosine function
atanh 1 hyperbolic arctangent function
log2 1 logarithm to the base 2
log10 1 logarithm to the base 10
log 1 logarithm to the base 10
ln 1 logarithm to base e (2.71828...)
exp 1 e raised to the power of x
sqrt 1 square root of a value
sign 1 sign function -1 if x<0; 1 if x>0
rint 1 round to nearest integer
abs 1 absolute value
if 3 if ... then ... else ...
min var. min of all arguments
max var. max of all arguments
sum var. sum of all arguments
avg var. mean value of all arguments

15.99.3 Built-in Operators


The following table lists the binary operators supported by the transformation.

Operator Meaning Priority


= assignment* -1
and logical and 1
or logical or 1
xor logical exclusive or 1
<= less or equal 2
>= greater or equal 2
!= not equal 2
== equal 2
> greater than 2
< less than 2
+ addition 3
- subtraction 3
* multiplication 4
/ division 4
^ x^y, raise x to the power of y 5

*The assignment operator is special since it changes one of its arguments and can only be applied to var-
iables.

15.99.4 Naming Conventions


Variable names in the formulas must follow standard mathematical conventions. As a consequence var-
iable names cannot include characters like spaces or other characters that can be interpreted as an operator.
Because data columns can be freely named in DynamicStudio, renaming of the column names
is sometimes necessary. Valid variable names can be selected in the formula editor in the recipe.
Underscores are used as the preferred delimiter. The following characters are converted:

Input char Converted char


(space) _ (underscore)
- _
( _
) _
[ _
] _
{ _
} _
. _
, _
(any other non-alpha) _
# N
(digit as first char) X

All other non-alphanumeric characters are ignored and left out in the conversion. If the first character is a
digit an ‘X’ replaces it.
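For illustration only, the conversion rules above could be sketched like this. It is a hypothetical helper, not the actual DynamicStudio implementation, and the exact handling of unlisted characters may differ:

# Sketch of the column-name conversion rules listed above.
def to_variable_name(column_name):
    out = []
    for ch in column_name:
        if ch.isalnum():
            out.append(ch)
        elif ch == '#':
            out.append('N')            # '#' becomes 'N'
        elif ch in ' -()[]{}.,':
            out.append('_')            # delimiters become underscores
        # any other character is left out
    name = ''.join(out)
    if name and name[0].isdigit():
        name = 'X' + name[1:]          # a leading digit is replaced by 'X'
    return name

print(to_variable_name("Temp (C) #1"))  # -> "Temp__C__N1"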

15.99.5 Syntax and Numerical Errors


During calculations both syntactical and numerical errors can occur. Syntactical errors are reported with a
precise description of where the actual error is encountered, with a clear indication of what to correct. Numer-
ical errors can happen during calculation and are often caused by floating point domain and/or range errors.
E.g. if a divide-by-zero error is encountered, the analysis will stop and report a numerical error in the
formula.

15.99.6 Examples

Linearization
The following example represents a data transformation which converts source data in the form of volt-
ages into a destination data set in a given unit. Most often analog data is read as voltages from external
instruments or equipment. These analog voltages represent a physical parameter like pressure or tem-
perature. To get from the analog voltage to the physical parameter a linearization is necessary. This lin-
earization can typically be accomplished by a polynomial transformation. Using the if-then-else built-in
function also allows you to define different linearizations for different parts of the input data.

The following linearization example shows an instrument having a
relation between the analog voltage and the temperature. At volt-
age 0 V the temperature is 20 °C etc. The curve shows that the lin-
earization trend changes at 4 V. The two linearizations can be
found as:
Trend 1: C = 5*V + 20, and
Trend 2: C = 2*V + 32.

The Analog Transformation can now be used to express this linearization relation using the formula:
if(V<4, 5*V+20, 2*V+32)
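The same piecewise relation can be checked outside DynamicStudio, for example with this minimal sketch that merely mirrors the formula above:

# Piecewise linearization: 5*V + 20 below 4 V, 2*V + 32 at or above 4 V.
def volts_to_celsius(v):
    return 5.0 * v + 20.0 if v < 4.0 else 2.0 * v + 32.0

for v in (0.0, 2.0, 4.0, 6.0):
    print(v, "V ->", volts_to_celsius(v), "degC")   # 20, 30, 40, 44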

15.100 Waveform Extract


The Waveform Extract makes an extract of multiple datasets at selected indices (i). This is particularly
useful in connection with analog waveforms, but can also be used in other contexts.
The result from an extract can either be opened as Numeric (exported or copied via the Windows Clip-
board to e.g. MS Excel) or it can be viewed in DynamicStudio as an XY plot.

15.100.1 Extracting Data


The picture below illustrates how the Analog Extract analysis works. When an ensemble contains more
than one analog or generic dataset, it extracts the data values from the same index in all the datasets. The
result is a curve representing all the data values from the same index.

The order in which the data values are extracted is based on the current ensemble sorting. It is also pos-
sible to specify more than one index to extract data from, within the same data, in which case a two col-
umn data set is created.
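Conceptually, the extraction can be thought of as picking the same sample index out of each dataset in the ensemble, as in this small sketch with hypothetical data (not an actual DynamicStudio API):

# Three analog datasets (one per acquired image), each holding a short waveform.
datasets = [
    [0.10, 0.12, 0.11],   # dataset 1
    [0.20, 0.22, 0.21],   # dataset 2
    [0.15, 0.18, 0.16],   # dataset 3
]

index = 1                               # the selected index (i)
curve = [d[index] for d in datasets]    # one value per dataset, in ensemble order
print(curve)                            # [0.12, 0.22, 0.18]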

15.101 Waveform Statistics


Basic statistical properties for analog waveform data (generic data) can be calculated using the Waveform
Statistics analysis method. The waveform statistics are useful when a trend in the analog data is to be
analyzed during a complete series of measurements.

15.101.1 Statistical Values


All the statistical values are returned in one dataset including the following data columns:

Column Name Description
X, Y, Z Traverse position
Time Time when data was acquired
Count Number of samples
Min Minimum value in data
Max Maximum value in data
Mean (arithmetic mean)

StdDev (standard deviation)

The X, Y and Z values are collected from the record properties for the dataset. These can be manually
specified or automatically added when using the Acquisition Manager. The Time value corresponds to the
acquisition timestamp, also found in the record properties. The Count value is determined from the input
dataset.
The Min, Max, Mean and StdDev are calculated for each selected input column. The results are presented
using the following naming rules: <Column name>_Min, <Column name>_Max, <Column name>_Mean,
and <Column name>_StdDev , for each of the data values.
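The naming rule above can be illustrated with a small sketch using a hypothetical input column (not an actual DynamicStudio API; the sample standard deviation is used here purely for illustration):

import statistics

# One input column and its per-dataset statistics, named as described above.
column_name = "Pressure"
values = [1.2, 1.5, 1.1, 1.4]

result = {
    f"{column_name}_Min": min(values),
    f"{column_name}_Max": max(values),
    f"{column_name}_Mean": statistics.mean(values),
    f"{column_name}_StdDev": statistics.stdev(values),   # sample standard deviation
}
print(result)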

15.101.2 Example
The Waveform Statistics analysis method can be used to find the average value of an Analog Waveform for
each acquired image, when images and analog data are acquired simultaneously.
Together with the advanced sorting option of ensembles, the analog statistics can be used to sort a series
of images based on an average analog value.

15.102 Waveform Stitch


Waveform Stitch allows you to merge or combine a series of analog (generic data) datasets into one data
series. This is especially useful to combine a series of analog waveforms acquired for each image into one
analog time series for the entire measurement.
You can select which of the available input data columns to stitch, and how the time base of the new data
should be defined.

15.102.1 Stitching Data


The picture below illustrates how the Analog Stitch analysis works. When an ensemble contains more
than one analog or generic dataset, it stitches the data from all the individual datasets into one. The result
is a curve representing all the data values from all the datasets. The order in which the data is stitched is based on
the acquisition time, meaning that stitching is not possible if the data is missing the acquisition time1 infor-
mation.

1Can be seen with databases created with older versions of DynamicStudio.

The above datasets are stitched together to present one curve on a common axis.

If the common axis is set to Time, the curve will be aligned to the acquisition times of the individual
datasets, see the image to the left. If the common axis is set to Index, the "space" between the individual data-
sets will become more evident.
You can stitch more than one data channel if available in the datasets, and then later define which to dis-
play in the XY plot.
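A minimal sketch of the stitching idea, using hypothetical data where each dataset carries its own acquisition time and samples are placed on a common time axis accordingly:

# Each dataset: (acquisition time of the first sample, sample interval, samples).
datasets = [
    (0.000, 0.001, [0.10, 0.12, 0.11]),
    (0.100, 0.001, [0.20, 0.22, 0.21]),
    (0.200, 0.001, [0.15, 0.18, 0.16]),
]

stitched = []
for t0, dt, samples in sorted(datasets, key=lambda d: d[0]):   # order by acquisition time
    stitched.extend((t0 + i * dt, s) for i, s in enumerate(samples))

for t, s in stitched:
    print(f"{t:.3f} s  {s}")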

15.102.2 Analog Stitching


The way an analog signal is stitched will depend on the ratio between the time between images (pulses)
and the analog sampling time. If the time between images is long compared with the analog sampling
time, the stitched curve will have longer periods with no data.

15.103 Correlation option Window/Filter


A number of correlation methods are offered for processing of PIV images:
Auto-correlation
Cross-correlation
Adaptive correlation
Average correlation

For all of these a processing option named Window/Filter is available, with a recipe as shown below:

15.103.1 Window functions


The correlations are calculated using Fast Fourier Transformation (FFT). This approach gives much higher
calculation speed than a direct implementation, but the method is based on the assumption that the input
particle patterns are cyclic, and it correlates across the interrogation area boundary using this assumption.
For example, particles near the right-hand edge of the interrogation area may correlate with particles near
the left-hand edge, and the result is interpreted as a small displacement to the right: due to the assumed cyclic
behavior, the particles near the left-hand edge are assumed to be present also just to the right of the right-
most edge of the interrogation area, and similarly particles near the right-hand edge are assumed to be
present also just left of the leftmost edge. Obviously this is an error source, but with suitable interrogation
area sizes and time between the two particle image recordings, the "true" particle displacement will nor-
mally produce a dominant peak in the correlation map, while these so-called phantom correlations produce
small noise peaks that do not affect the final result.

It is however possible to reduce or even eliminate the cyclic noise from the correlation map by using a top
hat window. The window function is a preprocessing of one of the interrogation areas prior to performing
the correlation: the top hat masks out the upper, lower, left and right 25 % of the interrogation area, setting
all grayscale values there to zero, and processes only the remaining central 50 % of the interrogation
area. This way cyclic noise is eliminated, since there are no particles near the edge of the interrogation
area that can correlate with phantom particles across interrogation area borders.

The drawback is that signal strength is reduced to 25 % by reducing both height and width of the effective
interrogation area by 50 % each. To avoid this it is generally recommended to use larger interrogation
areas than normal, when applying a window function. In order to recover the information that is lost when
masking the interrogation area, it is furthermore recommended to use overlapping interrogation areas
when applying window functions. An overlap of 50 % both horizontally and vertically is suitable when
using the top hat window.

Clipping of particle images is another error source, that is present even when using the top hat window:
Particles straddling the interrogation area border will be clipped so only part of the particle image is visible
when calculating the correlation. The clipped particle will be interpreted as being closer to the center of the
interrogation area than it actually is, and on average particle clipping will tend to bias average results
towards lower velocities. A similar problem exists with the top hat window, where particle images strad-
dling the edge of the window will also be clipped and bias end results towards lower velocities.

The clipping of particle images can be avoided by using a Gaussian window instead of the top hat window;
Where the top hat window multiplies grayscale values with either 0 or 1 depending on the pixel position
within the interrogation area, the Gaussian window multiplies with a factor between 0 and 1 depending on
the position. The effect is that grayscale values decrease gradually when moving away from the center of
the interrogation area instead of dropping suddenly to zero halfway between the center and the edge. This
way particle clipping is avoided, but the Gaussian window does not reach zero at the edges, so some
amount of cyclic noise will remain (but reduced significantly compared to a similar correlation without win-
dows at all).

By setting a parameter, k, the user can change the width of the Gaussian window applied and thus
achieve a suitable compromise between signal strength and cyclic noise; High k-values will produce a nar-
row window, which for all practical purposes reach zero at the edge of the interrogation area, thus elim-
inating cyclic noise. The signal strength will however be low, since only the pixels at the very center of the
interrogation area will contribute, while all other pixels are multiplied with a very small number. Small k-
values will on the other hand produce a broad window giving high signal strength, but at the cost of
increased cyclic noise, since the window does not reach zero at the edge of the interrogation area.
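To make the two window types concrete, the sketch below builds a top-hat window (central 50 % kept) and a generic Gaussian window for an interrogation area. The Gaussian form shown, exp(-k*r^2) with r normalized to 1 at the edges, is only an assumption for illustration; the exact definition used by DynamicStudio may differ.

import numpy as np

def top_hat_window(n):
    # Keep the central 50 % of the interrogation area, zero the outer 25 % on each side.
    w = np.zeros((n, n))
    q = n // 4
    w[q:n - q, q:n - q] = 1.0
    return w

def gaussian_window(n, k=2.0):
    # Assumed form exp(-k * r^2), with r = 0 at the centre and 1 at the edges.
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-k * (xx**2 + yy**2))

ia = np.random.rand(64, 64)            # stand-in for one interrogation area
windowed = ia * gaussian_window(64)    # applied before the FFT-based correlation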

15.103.2 Filter functions


As explained above, correlations are performed using Fast Fourier Transforms, which transform from the
spatial domain to the frequency domain and back again. While the window functions described above oper-
ate in the spatial domain by manipulating the interrogation areas prior to correlation, the filter functions
operate in the frequency domain.

To fulfill the Nyquist sampling criterion particle images should be at least two pixels in diameter, but in
practical PIV experiments this is often not possible, and particle image diameters in the range 1-2 pixels
are not unusual. Violating the Nyquist criterion will inevitably produce pixel locking, where measured veloc-
ities are biased towards values that correspond to integer pixel displacements on the images. This cannot
be avoided completely, but the effects can be mitigated by the use of a Gaussian low-pass filter: Small par-
ticle images produce narrow peaks in the correlation plane and in the frequency domain this corresponds
to high frequencies. With a low-pass filter, high frequency components are damped, resulting in broader,
but also lower correlation peaks. The broadening of the peaks reduces the pixel locking effect, while the
lowering of the peak corresponds to a reduction of signal strength. Fortunately the low-pass filter also
reduces the noise somewhat, so the end result is still acceptable. The filter width can be controlled by the user by set-
ting the k-value, similar to what is done for the Gaussian window; small k-values produce broad filters, that
broaden and lower the correlation peaks very little, while large k-values produce narrow filters, that
broaden and lower the correlation peaks much more, until eventually the individual peaks can no longer be
distinguished.

While the Gaussian is a low-pass filter, the filter named No-DC is a band-pass filter that also removes
DC-components from the signal. Generally speaking the background and/or average grayscale intensity in
interrogation areas 1 and 2 will correlate and produce a background DC-correlation level, not related to the
flow being measured. Normally the mean grayscale values are subtracted from both interrogation areas
before processing, so the problem should be small, but any remaining DC-component can be completely
removed using the No-DC filter.

The Phase only Gaussian filter has the effect of removing all energy content from the cross spectrum.
This is done by normalizing each element in the cross spectrum (all frequencies are treated as having an
amplitude of one) followed by an optional Gaussian low-pass filter as described above. As a result the
peaks in the cross-correlation image become very sharp and distinct, but because all frequencies are
given the same weight the Phase only filter can be sensitive to noise. The Gaussian low-pass filter can
then be used to dampen this noise sensitivity. One benefit of the Phase only filter is that it makes the cor-
relation more tolerant to variations in the background, e.g. visible objects or shadows in the light sheet.
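The sketch below shows the general idea of an FFT-based cross-correlation with mean subtraction, an optional phase-only normalization of the cross spectrum and a Gaussian low-pass filter in the frequency domain. It is an illustration under the stated assumptions (including the assumed shape of the low-pass filter), not the DynamicStudio implementation.

import numpy as np

def cross_correlate(ia1, ia2, phase_only=False, lowpass_k=None):
    # Subtract the mean grayscale value from each interrogation area.
    f1 = np.fft.fft2(ia1 - ia1.mean())
    f2 = np.fft.fft2(ia2 - ia2.mean())
    spec = np.conj(f1) * f2                       # cross spectrum
    if phase_only:
        spec /= np.maximum(np.abs(spec), 1e-12)   # keep phase, discard amplitude
    if lowpass_k is not None:
        fy = np.fft.fftfreq(ia1.shape[0])
        fx = np.fft.fftfreq(ia1.shape[1])
        fxx, fyy = np.meshgrid(fx, fy)
        spec *= np.exp(-lowpass_k * (fxx**2 + fyy**2))   # assumed Gaussian low-pass form
    return np.fft.fftshift(np.real(np.fft.ifft2(spec)))

corr = cross_correlate(np.random.rand(32, 32), np.random.rand(32, 32),
                       phase_only=True, lowpass_k=50.0)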

16 Data Exchange
You can import images into DynamicStudio using the Image Import option from the File Menu.
Data can be exported from DynamicStudio to disk as either images or numeric data.

16.1 Image Import 


Available from the file menu.

16.1.1 Formats
The import function allows you to import images in the following formats:
File extension File type
.bmp Windows Bitmap file
.tif, .tiff TIFF Tagged Image File Format file
.jpg, .jpeg JPEG File Interchange Format file
.raw Raw file format

The selected record from where the import is initiated determines the location of imported images in the
database:

l If the selected record is the database root, a new project and run will be created.
l If the selected record is a project a new run will be created*.
l If the selected record is a run a new ensemble will be created*.
l If the selected record is an ensemble images will be appended if size and pixel depth matches
existing images*.

* You cannot import images into a project containing acquired images; in this case a new project will be created.

JPEGs and Windows Bitmaps are always converted and imported as 8-bit grayscale images. The
accepted input formats are 8 bpp indexed or 24 bpp RGB. TIFF images are imported as either 8-bit or
16-bit, depending on the input format.

For the ".raw" file format the size and pixel depth of the image is usually not stored in the image. However,
DynamicStudio tries to retrieve this information from properties written to the file.The property names that
DynamicStudio is looking for are "Image Width", "Image Height" and "Image Bit-Depth". If Dynam-
icStudio can find these properties, these will be used. Otherwise DynamicStudio will ask for this infor-
mation needed.

16.1.2 How to Import Images


Select the project, run or ensemble in the database and follow the instruction below.

1. Press File from the tool bar and select Import Images. The Import wizard pops up.

2. Press Add images to select the images you want to import. It is possible to import several
images at a time by selecting them from the dialog window (use the mouse and Shift or Ctrl key).
If many images are to be imported then use 'Add folder' instead; this way there is no limit on
how many images can be imported.
Press Add folder to add all the images within an existing folder.
Press Remove to remove the selected images from the list.

Select if the images are to be imported as single or double frame images.
While adding images it is possible to skip validation to reduce the time it takes to add many
images.
This is done by pressing the skip button on the progress bar

Cancel will stop adding any further images.

3. When importing double frame images and the image files are not double frame tiff images, it is
possible to specify which images should be frame 1 and which should be frame 2.
This is done using regular expressions.

The regular expression shown above means frame 1 should include all images whose file names end with 6
digits and the letter 'a'. A search on the Internet will provide detailed information regarding regular
expressions.
4. When Images have been selected and sorted it is possible to select what camera the images
should be associated with.

The camera can be user defined where the 'Add a new custom camera' can be changed to any
user defined name. The user can change the pixel pitch of that camera.
If the 'Select camera already used in database' is selected then a list of cameras already used in
the database will be presented for selection.

If the 'Add new camera from DynamicStudio library' is selected then a list of known camera mod-
els will be presented for selection.
If you are importing images into an existing run or ensemble, the properties are already known and
this window will not appear. If you are importing images into a project, you need to specify the
camera name and properties along with timing parameters.

6. The import is finished when you have pressed Finish on the last wizard page.

16.2 Note
If you want to use your imported images as calibration images you can move them to a calibration ensem-
ble afterwards.

16.3 Image Export


Image export function can be opened by right-clicking an ensemble or a collection of images and selecting
export.

16.3.1 Formats
The export function allows you to export data in the following formats:
File extension File type
.bmp Windows Bitmap file
.tif TIFF Tagged Image File Format file
.jpg JPEG File Interchange Format file
.avi Video for Windows Movie file
.emf Windows Enhanced Metafile file
.raw raw format, only pixel data is saved to the file

When one of the image or movie formats is selected the display representation is exported; for Images
this means the image itself, and for Vector plots an image of the vector plot.

Image Export
When JPEG or Windows Bitmap formats are selected the images are represented in only 8 bit. This
means that when an image with a higher grayscale resolution is exported, the original resolution is reduced by
resampling the pixel values, resulting in loss of information. If loss-less export of images is required,
choose the TIFF format. Exported images in the TIFF format will contain full pixel information in 8-bit, 16-
bit and 32-bit floating point using LSbit-first fill order, currently suitable for all image formats. If double
frames are exported in this format, both frames will be contained in a single TIFF file.

Note
Most image viewers and tools in Windows do not support 16-bit integer or 32-bit floating point TIFF
images. An example of an image processing tool that does support these TIFF formats is ImageJ
(http://rsbweb.nih.gov/ij/)

If double frames are exported as Bitmap or JPEG, an image file will be created for each frame and suffixed
with the letter a or b, indicating frame a or frame b.

When exporting to the "raw" format, information on the image size and pixel depth is stored as properties
on the file. These properties are not visible in Windows, but can be retrieved by use of special tools found
on the Internet. The property names are "Image Width", "Image Height" and "Image Bit-Depth".
Pixel data is stored as a stream of pixel values, row by row, starting from the upper left corner of the
image. If pixel depth is 8 bit, only one byte per pixel is used. If pixel depth is more than 8 bit, two bytes
per pixel are written to the file (little-endian format).
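Given the layout described above (row-by-row pixel stream, one byte per pixel up to 8 bit, otherwise two bytes little-endian), an exported ".raw" file could be read back for example like this. Width, height and bit depth must be supplied by the user, since they are stored only as file properties:

import numpy as np

def read_raw_image(path, width, height, bit_depth):
    # One byte per pixel for 8-bit data, otherwise two bytes per pixel, little endian.
    dtype = np.uint8 if bit_depth <= 8 else np.dtype('<u2')
    data = np.fromfile(path, dtype=dtype, count=width * height)
    return data.reshape(height, width)   # rows start from the upper left corner

# image = read_raw_image('export.000001.raw', 1280, 1024, 12)   # hypothetical file and size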

Please note that some of the export formats take a very long time to export, especially when large
images are exported in Text format.

AVI Export
When exporting images or vectors to AVI, movies are always saved as 8-bit using the current LUT infor-
mation.

Vector Export
When exporting Vector plots it is possible to save images as Windows Enhanced Metafiles.

16.3.2 File Format


The exported data sets are saved into a Destination path, using a Base file name, File type and Index to
construct the resulting export file name:
<Destination path>\<Base file name>.<Ensemble id>.<Index>.<File type>
The index is padded with leading zeros until 6 digits are reached. This format is useful when listing
and sorting in the Windows Explorer. The Ensemble id uniquely identifies the ensemble record, ensuring
unique output file names.
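As an example of the naming pattern, the file name for a given dataset could be put together like this (a sketch; the ensemble id and destination path are hypothetical values, and the index is zero-padded to 6 digits as described above):

import os

def export_file_name(destination, base_name, ensemble_id, index, file_type):
    # <Destination path>\<Base file name>.<Ensemble id>.<Index>.<File type>
    return os.path.join(destination, f"{base_name}.{ensemble_id}.{index:06d}.{file_type}")

print(export_file_name(r"C:\Exports", "Cross 32 50%", 117, 1, "tif"))
# C:\Exports\Cross 32 50%.117.000001.tif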

16.3.3 Enhanced image quality


Prior to exporting images, it is possible to enhance image contrast or to change color for example.

l From the images to export, open the color map and histogram (right mouse click)
l Adjust the LUT (contrast), select grayscale or color display

l Close the "color map and histogram" window and save the new display settings as Global color
map.
l The images can now be exported with the new display settings

16.3.4 How to Export Data


Single data sets can be exported from within an expanded ensemble by right-clicking on the data set index
or name. All data sets in an ensemble can be exported by right-clicking on the ensemble record.

1. Select the ensemble or data set inside the ensemble to export.
2. Right-click on the record and select Export....
3. Fill the Export dialog
Destination path is remembered between sessions and is default set to \<database folder>\-
Exports\
Base file name is preset by the name of the selected ensemble
File type is remembered between sessions and is default set to .tiff format.
Start index is preset to 1.
4. Press Ok to start exporting.
5. During export a progress bar is displayed. It is not possible to perform other tasks in Dynam-
icStudio while exporting, but the export can be aborted by pressing Cancel.

When exporting in Movie format it is furthermore possible to specify the Playback and Compression rates.
The Playback rate determines how fast the movie should show the images or frames per second (fps). The
Compression rate determines the quality of the movie; a high compression (close to 100%) gives poor
quality but a small movie file, a low compression rate (close to 0%) gives high quality and a large movie
file. By default the Frame rate is set to 10 fps and the Compression rate to 70 %.

16.4 Numeric Export

Numerical export function can be opened by right-clicking an ensemble or a collection of images and
selecting export.

Numerical export allows you to export datasets to the following formats:

File extension File type


.CSV Comma separated values.
.TAB Tab delimited text file.
.DAT Tecplot data file.
.XML Extensible Markup Language

Basics
In order to export the following information must be specified:

l Path (path to the directory to export to)


l Base name (first part of the name of the export file)
l Index (second part of the name of the export file)
l Export type (the type of export to perform)

A Preview button is available to see the export result of a given export setup on the first dataset.

Includes
When exporting numerically it is possible to add properties to the export file. The properties are as follows:

l Originator (what ensemble does the export come from)
l CameraInfo (camera name and index)
l Custom Properties (properties that have been added by the user to the project, run and ensemble)
l TimeStamp (timestamp of the data)

Columns select
It is possible to deselect the column data that is not needed.
Here it is also possible to select number of decimals for the individual values.

Export Types

l CSV
l TAB
l DAT
l XML

CSV
The CSV file contains a header section ">>*HEADER*<<" and a data section ">>*DATA*<<".
These strings denote the start of each section.
The header section will always contain a line for file id and version.
If properties have been added to the export, additional lines will be added to include these,
one line per property.

>>*HEADER*<<
FileID:DSExport.CSV
Version:1
Originator:deleteme1.New Project.New Run.Cross 32 50%
CameraInfo:Cam.9
Custom
Properties:AnalysisEnsemble.Coordinates.X.Double:;AnalysisEnsemble.Coordinates.Y.D
time.Double:;
TimeStamp:0,0005
>>*DATA*<<
x;y;x (pix);y (pix);x (mm);y (mm);U pix;Length;Status
0;0;15,5;15,5;0,1178;0,1178;0,000689571882168438;3,89431493223103E-05;0
0;1;15,5;31,5;0,1178;0,2394;0,000510268292648681;1,06871219396756E-05;0 
.
.
.

The Custom Properties entry is actually one line in the file and the format is:

name:value;name:value ...
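A small sketch of reading such an export back, splitting on the header and data markers (note the ';' value separator and the ',' decimal mark in the example above):

def read_ds_csv(path):
    header, data = {}, []
    section = None
    for line in open(path, encoding='utf-8'):
        line = line.strip()
        if line == '>>*HEADER*<<':
            section = 'header'
        elif line == '>>*DATA*<<':
            section = 'data'
        elif section == 'header' and ':' in line:
            key, value = line.split(':', 1)
            header[key] = value
        elif section == 'data' and line:
            data.append(line.split(';'))
    columns, rows = data[0], data[1:]
    # numeric values use ',' as decimal mark above, e.g. float('0,1178'.replace(',', '.'))
    return header, columns, rows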

TAB
See CSV for details on header.

The data is arranged the same way except that the values are separated by a tabulator character.

DAT
This format is a Tecplot format that includes the columns that are selected; the properties are added to
the DAT file as DATASETAUXDATA values.
XML
This format is structured as follows:

<?xml version="1.0" encoding="UTF-8"?>


<Export>
<info id="DSExport.XML" version="1">
.
.
.
</info>
<data>
.
.
.
</data>
</Export>

Every property is located in the info tag.


All data columns are located in the data tag.

A property is constructed as follows:

<property name="XXXXX">
<type>XXXXX</type>
<value>XXXXX</value>
</property>

The name attribute denotes the name of the property.


Type can be any of the Microsoft defined typecodes (Empty, Object, DBNull, Boolean, Char, SByte, Byte,
Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, Double, Decimal, DateTime, String).
Value corresponds to the type.

The following shows how the custom properties are arranged:

<property name="Custom Properties" array="true">


<param recordtype="AnalysisEnsemble" category="Coordinates" name="X">
<type>Double</type>
<value>0.034</value>
</param>
<param recordtype="AnalysisEnsemble" category="Coordinates" name="Y">
<type>Double</type>
<value>9.455 </value>

</param>
<param recordtype="AnalysisEnsemble" category="Coordinates" name="Z">
<type>Double</type>
<value>6.6</value>
</param>
.
.
.
</property>

The property has the attribute array to indicate that this property is an array of several parameters.
The parameters are NOT necessarily the same type.

Data are arranged in the following way.

<datacolumn title="x">
<values type="Int32" seperator=";">95;98;98;98;98;98;98;98</values>
</datacolumn>
<datacolumn title="y">
<values type="Int32" seperator=";">0;57;58;59;60;61;62;63;64;65</values>
</datacolumn>
<datacolumn title="x (pix)">
<values type="Double" seperator=";">15,5;1583,5;1583,5;1583,5;1583,5;</values>
</datacolumn>

The title indicates the name of the column


The type is specified by Microsoft typecodes.
The separator attribute indicates the character used to separate the values.
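For completeness, a sketch of reading the data columns back with Python's standard ElementTree parser, assuming the structure shown above:

import xml.etree.ElementTree as ET

def read_ds_xml_columns(path):
    root = ET.parse(path).getroot()           # the <Export> element
    columns = {}
    for col in root.find('data').findall('datacolumn'):
        values = col.find('values')
        sep = values.get('seperator', ';')    # attribute spelled as in the export above
        columns[col.get('title')] = values.text.split(sep)
    return columns

# Numeric conversion must respect the ',' decimal mark shown in the example values.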

16.5 FlowManager Database Converter


16.5.1 Converting a FlowManager Database
DynamicStudio allows converting existing databases from FlowManager. Only raw images will be con-
verted, meaning that the data analysis and processing, including calibration parameter calculation, will have
to be performed again in DynamicStudio.

From the tool bar, select tools and then FlowManager Database Converter.

Select the FlowManager database to be converted

By default, the DynamicStudio database will be saved in the same folder and will be named “FlowManager
Database name.converted.dynamix”. To choose another location and change the database name, press
browse. By unchecking "delete original files”, the database will be copied into the new project. If the
"delete original files" option is selected, the data will be moved into the new project. By pressing restore,
it is possible to convert the database back into FlowManager format.

16.5.2 Calibration images and scale factor


The database converter searches for calibration files (IMT.DLT, IMF-Ex). If they are available, the following
window appears and the calibration images will be copied into the calibration folder of the DynamicStudio
project. The calibration images have to be reprocessed to calculate the calibration parameters.

16.5.3 Scale factor
The database converter copies the scale factor from FlowManager into DynamicStudio.

16.5.4 Example 1: database with calibration information


The calibration images (setup 1) are copied from FlowManager (left hand side) into DynamicStudio (right
hand side). The Project 1 - setup includes a calibration folder with calibration images. From these images,
you need to perform the calibration again by right-clicking with the mouse.
Setup 2 contains acquisition images. The calibration icon is used to change the properties of the field of view
(right click).
Note that none of the analysis results are converted.

16.5.5 Example 2 : database with no calibration information.
All the raw images are converted into DynamicStudio (left hand side). No calibration folder is created. The
field of view properties have been copied and can be changed from the calibration icon (right click).

17 Displays
Displays are windows for plotting, graphing and visualizing data on the screen. Displays are shown in
multiple windows inside DynamicStudio.
There are multiple display-types designed to visualize different types of data. A few are dedicated to show
results from one specific analysis, while many are general purpose displays, used to show results from
several different analysis methods.

l Image map
l Scalar map
l Vector map

For most display types there are 'display options' allowing you to adjust or modify the visual appearance of
the display.

17.1 General Display Interaction


There are several ways to interact with a display.

Quick access features for the display are zooming and moving the image around.

Zooming is done by holding the left mouse button down while moving the mouse. This will display a rec-
tangle around the area that will be zoomed to when releasing the mouse button.

Moving the image around is useful when zooming has been done. This is done by holding the Ctrl key and
the left mouse button down while moving the mouse.

Right clicking the mouse makes a context menu pop up.


The context menu makes it easy to access specific dataset display options, coloring, zooming, ruler set-
tings, recipes, analysis and other general display options.

17.2 Using the display from within an analysis method


The view of an image can be controlled through the context menu that appears when right-clicking inside
the image. In this popup menu the zoom level, the active frame (for a double frame exposure) and the visual
appearance (Color map and histogram) can be adjusted. This functionality is not implemented for the main
display component.

17.2.1 Zoom
The zoom level can also be adjusted by scrolling the mouse wheel button or by dragging a rectangle
around the desired area to view. The context menu provides some fixed zoom levels as well as an option
to fit the image to the available window area.

17.2.2 Pan
If holding the <Ctrl> key while dragging inside the image the view area can be moved (panned) around.
The window scrollbars can also be used to pan the view of the image.

17.2.3 Magnifying glass


If holding down the <Alt> button while moving the mouse over the image, a "magnifying glass" is shown.
The magnifying glass can be used to quickly inspect details, without changing the zoom level. The image
below is showing the magnifying glass. Grayscale pixel values are shown in each pixel, if zoom level is
sufficiently high.

17.2.4 Color map
The color table used to display an image onto the screen can be adjusted by selecting the menu item
"Color map and histogram" from the context menu. The Color map dialog is described in See "Color map
and histogram " on page 676.

17.2.5 Adjusting the ellipse


The position, size and rotation of the ellipse can be changed by mouse interaction. The entire ellipse is
moved by using the left mouse button to drag the ellipse to its new location. When the ellipse is selected,
or while hovering the mouse over the ellipse, its adjustment handles are displayed as illustrated in the
image below.
By using the mouse to drag these handles to a new location it is possible to change the ellipse size and
rotation. If the <Shift> key is pressed while dragging a corner handle, the center of the ellipse is kept at its
current location. If the <Shift> key is pressed while dragging the rotation handle (the right circle in the
image below) the rotation angle is kept to a multiple of 45°.
When rotating the ellipse, the center of the rotation is the pivot handle seen in the middle of the ellipse. The
pivot handle can be moved to another location to change the point around which the ellipse is rotated.
If the <Shift> key is pressed while moving the pivot handle the pivot handle will return to the ellipse center.

Setting the ellipse properties using the property dialog.
If right clicking the ellipse its context menu will be shown (left image below). When selecting the ‘Prop-
erties…’ menu item the property dialog is shown (right image below):

In the property dialog entering the desired values of the position, rotation and size will adjust the ellipse
accordingly.

17.2.6 Adjusting the polygon


The position, size, rotation and shape of the polygon can be changed by mouse interaction. The entire poly-
gon is moved by using the left mouse button to drag the polygon to its new location. When the polygon is
selected or while hovering the mouse over the polygon its adjustment handles are displayed as illustrated
in the image below.
The polygon adjustment handles can be categorized in two groups.

1. Handles connected to the bounding box of the polygon:


Adjusting these handles will affect the entire polygon, by changing the location, size or rotation.
2. Handles connected to polygon itself:
These handles are point/vertex handles and line segment handles. Adjusting these handles will
only change part of the polygon.

By using the mouse to drag these handles to a new location it is possible to change the polygon shape,
location, size and rotation. If the <Shift> key is pressed while dragging a corner handle (on the polygons
bounding box), the center of the polygon is kept at its current location. If the <Shift> key is pressed while
dragging the rotation handle (the right circle on the bounding box in the image below) the rotation angle is
restricted to a multiple of 45°.
The center of rotation (the pivot handle) is represented by two concentric circles (as shown in the center of
the image below). The pivot handle can be moved to another location to change the point around which the
polygon is rotated.
If the <Shift> key is pressed while moving the pivot handle, it will return to the center of the polygon's
bounding box.

Deleting a point
When right clicking on a polygon point/vertex a context menu is displayed as shown in the screen shot
below.
Selecting the menu item "Delete point" will delete the actual polygon point.

Inserting a point in the polygon


When right clicking on a polygon line segment (the line connecting two polygon points) a context menu is
displayed as shown in the screen shot below. 
Selecting the menu item "Split line segment" will create a new polygon point midway between the line seg-
ments start and end points.

17.2.7 Adjusting the rectangle
The position, size and rotation of the rectangle can be changed by mouse interaction. The entire rectangle
is moved by using the left mouse button to drag the rectangle to its new location. When the rectangle is
selected, or while hovering the mouse over the rectangle, its adjustment handles are displayed as illustrated
in the image below.
By using the mouse to drag these handles to a new location it is possible to change the rectangle size and
rotation. If the <Shift> key is pressed while dragging a corner handle, the center of the rectangle is kept at
its current location. If the <Shift> key is pressed while dragging the rotation handle (the right circle in the
image below) the rotation angle is kept to a multiple of 45°.
When rotating the rectangle, the center of the rotation is the pivot handle seen in the center of the rec-
tangle. The pivot handle can be moved to another location to change the point around which the rectangle
is rotated.
If the <Shift> key is pressed while moving the pivot handle the pivot handle will return to the rectangle
center.

Setting the rectangle properties using the property dialog.


If right clicking the rectangle its context menu will be shown (left image below). When selecting the ‘Prop-
erties…’ menu item the property dialog is shown (right image below):

In the property dialog entering the desired values of the position, rotation and size will adjust the rectangle
accordingly.

17.3 Correlation Map Display


Right-clicking anywhere within the image display will bring up a context menu.

From this menu it is possible to select the menu-item Cross-Correlation Map (-or Auto-Cor-
relation Map in the case of a single frame image). This will bring up a window that displays a sur-
face plot of the normalized cross-/auto-correlation of a given interrogation area within the image
(the window is shown in the picture below).
A white rectangle in the image display is used to indicate where the correlation is calculated. By
moving the mouse inside the image display the interrogation area will follow the mouse position
and the surface plot will constantly be updated to reflect the correlation at that current position.

Manipulating the view of the correlation map.


The view of the correlation map can be altered in different ways.

By clicking inside the window and dragging the mouse, the surface is rotated around the trans-
parent green ball seen in the center on the display above.
The surface can also be zoomed in and out by using the mouse wheel button. Finally the position
of the surface can be changed by holding CTRL button while dragging inside the window.

The context menu


Right clicking the mouse inside the correlation window will display the context menu shown
below.

From this menu various settings can be adjusted.


A brief description of each setting is given below.

l Auto-hide - will hide the surface plot if the parent display loses focus.
l Interrogation Area Width and Interrogation Area Height are used to specify how large
the interrogation area will be.
l Set View - will change the angle at which the surface is viewed. (options: default, top
down view)
l Colors - can change the look of the surface or the background by applying different color
tables.
l Coordinate Axes - shows or hides the coordinate axes.
l Zero Level - shows or hides a transparent plane perpendicular to the z-axis at z=0.
l Surface as Wires - toggle whether surface is drawn as solid or as wires.
l Shiny Surface - toggle whether a shiny look is applied on the surface.
l Animate - starts rotating the surface plot around the Z-axis of the eye-coordinate system.
The rotation speed and direction can be changed by using the mouse.
l Close will close the window.

17.3.1 Normalized Cross Correlation formula

C(\Delta x, \Delta y) = \frac{1}{N} \sum_{x,y} \frac{\left(F_1(x,y) - \overline{F_1}\right)\left(F_2(x+\Delta x, y+\Delta y) - \overline{F_2}\right)}{\sigma_{F_1} \, \sigma_{F_2}}

F_1() and F_2() are the interrogation areas from frame 1 and 2 in the image.
\overline{F_1} and \overline{F_2} are the mean values of the interrogation areas.
\sigma_{F_1} and \sigma_{F_2} are the standard deviations of the interrogation areas.
N is the number of pixels.
The resulting correlation map will have values in the range [-1, 1], where 1 means a perfect cor-
relation/match.

17.4 Particle Density Probe


Particle Density belongs to a group of image 'Probing Tools' alongside the See "Correlation Map
Display" on page 639 (=Auto-Correlation Map for Single Frame parent images) . These probing
tools all analyze subsections of the full parent image, provide live feedback in response to mouse
movements etc, but do not store any results in the DynamicStudio database.
As the name implies the 'Particle Density' probe tries to identify seeding particles within the prob-
ing area, show the ones found and compute local particle density.
It aims at finding bright particles on a dark background and will thus not work for shadow images.
Nor will it work for floating point images, where output will be a blank screen.
To access this tool right-click anywhere within the parent image display to bring up a context
menu and select 'Particle Density':

This will open the Particle Density Display and overlay a white square on the parent image to indi-
cate the region currently being probed:

The Particle Density output window will show the probing area and overlay small circles (Ø5
pixel) where particles have been found. The circles are saturated (=255 for an 8-bit image, 1023
for a 10-bit image and so on), while the parent image is thresholded at one grayvalue lower. This
allows you to have the circles shown in a different color, by manipulating the Color-Map for the
Particle Density display. Right-click it and select 'Color map and histogram':

The color map settings used in the example above are:

Above the probing area display there are two tabs, named 'Output' and 'Settings':

'Output' shows the number of particles found within the current probing area as well as the cor-
responding average 'Seeding Density'.

The 'Source Density' indicates the fraction of pixels above the detection threshold, and assuming
particles are round we get the 'Average Size' (diameter) of each particle image by combining Seed-
ing and Source Density.
The 'Settings' tab allows the user to influence the processing, by specifying the 'Probe Area Size'
and the 'S/N Threshold':

Probe Area Size defaults to 192x192 pixel, but may be set to 64x64, 96x96, 128x128 or 256x256
instead. Processing time increases with the size of the probing area and with the grayscale depth
of the parent image, so the largest probing area, 256x256, may respond slowly especially for
images with grayscale depth of more than 8.
The S/N Threshold defaults to 4.0, but can be set to any value in the range 1.0 to 9.0.

17.4.1 Working principle…


Internally Particle Density Probe uses grayscale normalization as described in See "Image Proc-
essing Library (IPL)" on page 329 as 'Pixel Normalization':
g_{Out} = \frac{g_{In} - \mathrm{MED}_{\Omega}}{K \cdot \max(\epsilon, \mathrm{MAD}_{\Omega})}

where g_In and g_Out are the input and output grayscale values of the pixel in question, and MED_Ω and
MAD_Ω are the Median and the Median Absolute Deviation in the spatial neighborhood Ω around this
pixel. The minimum noise level ε is included to avoid division by (almost) zero in areas with more
or less constant grayscale values, and K is a scale factor converting Median Absolute Deviation to
standard deviation σ (assuming the background noise is Gaussian).
Both the MED (Median) and the MAD (Median Absolute Deviation) are computed by applying a
13x13 median filter twice. Applying it once with a rectangular neighborhood may leave horizontal
and/or vertical artifacts, which the second pass then removes. The size, 13x13 pixel, has been
chosen as a suitable compromise between robustness and processing speed. Theoretically the
conversion factor K should be 1.4826 if the background is Gaussian, but we use 1.5, which is
good enough for practical purposes.
The minimum noise level e is chosen as roughly 1/4 of the bits available:
Grayscale depth noise bits Epsilon
8-9 2 4
10-13 3 8
14-16 4 16
The number of pixels that exceed the user defined S/N Threshold leads to Source Density and
greyscale peaks above the same threshold identify particles, which in turn leads to Seeding Den-
sity.
For display purposes we return to the parent image, threshold it at one grayvalue below saturation
and superimpose saturated Ø5 circles around each particle found. The particles are identified for
counting purposes only, no attempt is made to locate them with subpixel accuracy, but we strive
to ensure that each particle is counted only once.
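The normalization above can be sketched with standard tools as follows. It is an illustration under the stated choices (13x13 median filtering applied twice, K = 1.5, ε from the table, S/N threshold 4.0), not the actual DynamicStudio implementation:

import numpy as np
from scipy.ndimage import median_filter

def normalize(img, eps=4.0, K=1.5):
    # Median and Median Absolute Deviation, each via two passes of a 13x13 median filter.
    med = median_filter(median_filter(img, size=13), size=13)
    mad = median_filter(median_filter(np.abs(img - med), size=13), size=13)
    return (img - med) / (K * np.maximum(eps, mad))

# Pixels (and grayscale peaks) exceeding the S/N threshold count as particle signal.
sn = normalize(np.random.rand(256, 256) * 255.0)
mask = sn > 4.0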

17.5 XY Display
The XY display is used to display 2D column based datasets using line, scatter or bar representation.
The XY display has several properties to ease the investigation of the dataset, all described below.

17.5.1 Graphical user interface
There are different toggle possibilities making it easy to manage the display.

The following possibilities are available from the context menu shown when right-clicking inside the dis-
play:

l Chart type gives the possibility to show the display as either line or bar (bar chart can't show log-
arithmic x-axis).
l Zoom Out will zoom out to either dataset defined range or user defined range. This can be set in
“Display Options…”.
l Show Legend can be toggled to show the legend.
l Probe is a feature that makes it possible to show the x and y values for the data point closest to
the mouse or the index of the data in the dataset.
l Show Grid is a toggle option to have a grid shown in the display.
l Show Markers is a toggle option to show markers defined by the dataset (can for example be
mean value). Some datasets don’t have markers.
l Info Box shows info defined by the dataset.

17.5.2 Legend
From the legend it is possible to highlight a line in the display by clicking on the line in the legend. This can
be useful if multiple lines are shown, to distinguish between the lines. To disable highlighting right click on
the legend and select “Deselect data”:

The selected dataset will be in red.

17.5.3 Info Box


The info box will show information (if available) from the dataset.

17.5.4 Zooming
There are two ways of zooming the display. The mouse can be used, by marking a rectangular area in the
display, or by scrolling the mouse wheel button1. The other way is to specify a range in the “Display
Options…” The Zoom Out from the display context menu will zoom back to full range. The image below
shows the rectangle drawn when zooming with the mouse:

17.5.5 Probe
The Probe feature can be used in two different ways. If no line has been selected then the probe works by
showing the x and y value of the plotted point closest to the mouse, disregarding any particular line. If a
line has been selected the probe will show values for only that line.

17.5.6 Default setup


For a newly created XY Display the selected menu items will look like this:

1The mouse wheel button can be used in combination with the <SHIFT> or <CTRL> button, to restrict the
zooming to either the horizontal or the vertical direction.

17.5.7 Display Options
The display options have 4 tabs in order to simplify the user interface.

l Data selection
l Plot setup
l Axis setup
l Line style

All settings can be applied without closing the display options dialog.

17.5.8 Data Selection


The data selection is for choosing what lines should be shown in the plot. It is only possible to select one
x-axis data source.

17.5.9 Plot Setup
The plot setup is for setting general plot properties. A title can be written, some datasets supply its own
title, but it can also be edited. The title is a drop down control that makes it possible to always select the
default title (if supplied by the dataset). The editable colors are axis color which include axis lines,
numbers and labels. The title & legend color applies to the title and the legend text. The plot back color
applies to the background color of the entire display. The legend back color applies to the background
color of the legend area. Font type applies to all text in the display except the info window. The currently
selected type is shown as an example beside the drop-down. The font sizes used in title, legend, probe
text and axis labels can be set individually.

17.5.10 Axis Setup
The axis setup provides the possibility to edit the axis labels the same way as the title. It is always pos-
sible to get the default axis label from the drop-down. Here it is also possible to edit the range that should
be displayed by the plot. If the Full Range is checked then the full range of the dataset will be used, other-
wise the range specified here will be shown. If toggling the Zoom Out item from the display context menu,
the display will zoom out to the selected range.
The origins of the axis are by default set to the border of the plot area, so that they never interfere with the
plot. This can be changed so the axis cross-section is shown at {0;0}, by checking the Axis origin at zero
checkbox. If the plot is then zoomed and the axis lies outside the zoom area, then the axes are shown at
the border of the plot area. The axis can be selected to be shown as logarithmic if necessary for the x- or
the y-axis or both (bar chart can't show logarithmic x-axis).

17.5.11 Line Style


The style of each line can be changed with the following parameters:

l Line style
l Line color
l Line thickness
l Point symbol
l Symbol color
l Symbol size

By default the first time a plot is shown, the color of the first line is black.

17.6 Vector Map Display
The Vector Map Display is used to display the results obtained via a standard 2D-2C PIV or a stereo 2D-
3C stereo PIV.

17.6.1 Vector Map display options


Options menu for the vector map display is opened by double clicking the vector map, or via the context
menu entry "Display Options..."( opened by right clicking on the vector map).
Display Options for the Vector Map consists of several different tabs, that will be described in the fol-
lowing.

Scaling

Scaling is necessary to display vectors from the selected variables. Scaling of the vectors can be done in
two different ways:

l Auto-Scaling
l Fixed scaling

Subtract
It is possible to subtract values from the two components of the vectors. For example, the convective motion
velocity can be removed from the display in order to highlight existing vortices in the vector map. The spatial
mean value of the vector map can also be used as the value to be removed from the display, by checking the
corresponding box or clicking the "Mean" button.

Colors
Different colors for the vectors can be chosen. It is possible to have Rejected, Substituted, Out-
side, and Disabled vectors drawn in special colors.
It is also possible to have the vectors drawn with a color representing the length of the vector.

Color Vectors
The selection of "Color vectors" will color vectors by its length, thus all other coloring of vectors will be dis-
abled. Different color maps are proposed to color the vectors with.
It is important to notice that the color is set by the length on the displayed vector, for example, if one
desires to plot only one component then the color is set in agreement with the length of the displayed vec-
tor. This feature is very useful especially when the selected variables for the vector plot differs from the
usual U and V components. (See "Examples of realizable displays" on page 659).
To set a range for the coloring of the vectors there are three different possibilities :

l Use fixed min & max found during browsing the ensemble
Here the minimum and maximum for the range are found while browsing the ensemble. Whenever
a larger or a lower value is found compared to min and max, the value will be used as min or max
for the range. Outlier vectors will have an influence on the finding of min and max, and will prob-
ably make the range too wide.

l Use individual Vector map min max


Each individual vector map will be displayed with its own range found in the dataset.
As above, outlier vectors will have an influence on the finding of min and max, and will probably
make the range too wide.
l Manual
The entries Minimum and Maximum will be used for the range. Vectors length outside the
range will either be given the color of the minimum or the maximum value.

Vector map
By default U vs. V is displayed in the vector map, but it is possible to select other variables for the vector
components. This is done in the Vector map tab.
The number of variables available will vary depending on the type of vector map displayed (e.g.
standard 2D-2C PIV or 2D-3C stereo PIV).
From this tab it is also possible to suppress one or more components by checking the corresponding box.
This results in a pseudo profile plot.
The vector's anchor can also be changed to Tail point or Mid point using the dedicated radio buttons.
Finally, it is possible to display only a subset of the vectors by using "Index skipping". If optimized Autoskipping is
enabled, vectors will automatically be hidden in order to keep the frame rate high when showing more
than 50,000 vectors.

Reference Vector
A reference vector can be displayed in the lower left corner of the display. The vector size can either be set
manually or set to the average vector of the vector dataset.


Scalar Map
A background scalar map can be enabled by selecting a Variable from a list of available variables in the
vector map. The list can vary depending on the recipe used to calculate the velocity field. For example the
2D LSM directly evaluates gradients from the grayscale particle images, thus gradients will also be avail-
able for display.

656
When selecting a variable different from None it becomes possible to set a number of different levels for
the coloring.
The range for coloring the background scalar can be fixed to each individual dataset's maximum and minimum
values by checking the check box "Use full range".
Un-checking "Use full range" makes it possible to manually specify a minimum and a maximum value that
will be used for the entire dataset.

Scalar Map Style


Selecting a Variable different from None in the Scalar Map tab will enable the settings in the Scalar Map
Style tab.

In Scalar map Style it is possible to select how the Scalar map in the background is displayed.
The following possibilities are available:

l Discrete
l Follow contours
l Only contour lines

Several different color maps are available for the display of the background scalar map, and can be
changed in "Color use".

Scalar Map Interpolation


Selecting "Follow contours" or "Only contour lines" in the "Scalar Map Style" tab, will enable settings in
the "Scalar Map Interpolation" tab.

17.6.2 Examples of realizable displays

Vectors only displays


A pseudo profile plot of a measurement in a boundary layer is shown in the figure below.
Only one velocity component is displayed, colored by its length (See "Colors" on page 653 and See
"Vector map" on page 655). Note that, for better visualization, not all of the measured vectors are plotted;
this is done using the "Index skipping" feature.

The figure below represents a pseudo profile plot of RMS values of the main velocity component in a
boundary layer.
The first component of the displayed vector has been set to "Std dev (U)" and no second component is
used (See "Vector map" on page 655). The color is displayed according to the length of the displayed vec-
tor (in this case the RMS value of U), thus highlighting the locations of maximum turbulence intensity.

Superimposed Vectors and Scalar Map


The examples below show vector plots of mean velocity values superimposed on scalar maps of RMS
values of velocity, in a boundary layer and in a subsonic jet.

17.7 Scalar Map Display
17.7.1 More visualization methods…
A spatial coordinate system and a scale bar for the scalar values can be added to the mean/RMS
maps. With the mouse, click on the map and select 'Info box' to get the scalar bar.

Scaling properties are accessed via 'Display Options' or by double-clicking with the left mouse
button on the scalar map. Adjust the range and the number of levels according to your needs.
Any variable that is present in the ensemble can be displayed by selecting it in the drop-down list.

Additional drawing methods are also available in the 'Style' section, where drawing style 'Follow
Contours' is the default. The 'Discrete' style will typically show a pattern of rectangular tiles of uni-
form color corresponding to the scalar value in each point. The other styles interpolate between
discrete scalar values to produce a smoother display, where contour lines can be used instead or
added to the contour plot.
A number of different color schemes are available for showing the scalar map on the screen.

Except with the discrete display style, the scalar map display will interpolate between scalar
values to produce a display that varies smoothly. This is accomplished by averaging over a neigh-
borhood around each and every point in the display. The neighborhood size is defined by the so-
called integration step size, where large values will produce smoother displays than small ones.
The software can determine a suitable step size automatically, or let the user select one. The
user defined step size can be entered directly or relative to the average spacing between neigh-
boring values in the scalar map.
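The averaging described above can be sketched as follows. This is only an illustration of the idea (not the actual implementation): each display point is assigned the average of all scalar values lying within a neighborhood whose radius is the integration step size.

import numpy as np

def smoothed_value(x, y, points, values, step_size):
    # points: (N, 2) array of data positions; values: (N,) array of scalar values
    # Average all values within 'step_size' of the display point (x, y)
    dist = np.hypot(points[:, 0] - x, points[:, 1] - y)
    mask = dist <= step_size
    return values[mask].mean() if mask.any() else np.nan

A larger step size includes more neighbors in each average and therefore produces a smoother display.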

17.8 3D Display
The 3D Display is used to display volumetric datasets using vectors, iso-surfaces, and contours. The 3D
data can be probed, animated, exported to other Windows applications, and viewed in different stereo ren-
dering modes, depending on the dataset being displayed [1].

Left: Two contour slices are added at different plane normals, while vectors are displayed. Right: Vectors,
iso-surfaces and a contour slice all displaying different scalar properties using different color maps.

Multiple volumetric datasets can be added as layers to the "master" dataset by dragging datasets from the
database onto the active display window. All datasets will use the same coordinate system, but each data-
set will have separate display options.
To be able to create a 3D Display you must generate a volumetric dataset representing scalar data in the X, Y
and Z dimensions. The 3D Display can display both voxels and 3D vectors.
By using the Display Options for each display (layer) respectively, visual properties can be changed:
In the case of a voxel volume, the palette and the slice sub-volume can be defined.
In the case of vectors, one can add iso-surfaces and place contour slices using the built-in probe. The fol-
lowing explains the different display options for each dataset type:

17.8.1 Voxels
The voxel volume display shows the extent of the voxel volume using a semi-transparent box and a slice
of the voxel volume. The slice widget can be moved freely, scaled, and rotated to fit a desired sub-vol-
ume.

17.8.2 Interacting with the voxel volume display
The volume slicer can be moved freely by pressing the Shift key while dragging the volume with the left
mouse button. The widget can be rotated around the normal of a selected face of the widget by pressing
the Ctrl key while dragging the mouse up or down on the face of the widget. The faces of the slice
volume can be moved by pressing the Alt key and dragging a selected face of the widget. Pressing the
Shift key while doing this will drag the entire volume in the direction of the face normal.

17.8.3 The display options for voxel volumes
The display options for the voxel volume enables the user to adjust the settings for the display for better
visualization of the dataset.

The Show probe option hides the slice widget, but the sliced volume will still be rendered. This can be used to
remove unnecessary visual information when overlaying, for example, vectors on top of the voxel volume
display. The Show probe option can also be changed from the right-click menu of the display.
The Show volume option hides the voxels in the volume, and can also be selected from the display's right-click
menu.
The Use transparency option makes voxels more and more transparent the smaller their value is. Un-checking
Use transparency makes all voxels solid.
The slice widget can be reset to a known state from the drop-down menu; the thickness of the slice can be
set in the text box to the left of the drop-down menu. It is possible to align the slice widget with
any plane spanned by the coordinate axes, or to have the widget fit the whole volume. The same options
can also be selected from the display's right-click menu.
Selecting Color map and histogram from the right-click menu brings up the color map dialog, which is similar
to the color map dialog used for images.

It is also possible to set the color lookup table to a given palette or change the range of values to display.
Any value under the minimum value will be set to transparent, and any value above the maximum value will
be clamped to the maximum palette color.
It is possible to switch between the two frames of a double-frame voxel volume using the "t" button, while
the voxel volume display is selected.

17.8.4 Images
Images that have been dewarped can be shown in the 3D view either by right-clicking on the image and
choosing 3D view or by dragging the image into an existing 3D display window.
The image will be represented to scale (defined by the dewarping). Dewarped source images can therefore
be used as a reference when interpreting, for example, vector or voxel data.

17.8.5 Vectors

The 3D display can show both true 3D vectors and vectors derived from stereo PIV.
The default display for stereo PIV vectors is the 2D view, but right clicking on the stereo PIV dataset and
choosing 3D view will show the data using the 3D view. Alternatively the stereo PIV dataset can be
dragged onto an existing 3D display.
By default volumetric data is displayed as vectors, and the lengths of the vectors are normalized and
adjusted to the size of the display. By using the Vector page in the Display Options dialog the look and rep-
resentation of the vectors can be changed.
Rejected and substituted vectors can be removed from the display by unchecking the corresponding
boxes.

By changing the size of the vectors using the Vector size slider, they become more visible.

Left: A typical default vector display when a new 3D Display is opened. Right: By sizing the vectors they
become more visible.

The Index skipping option can be used to trim down the number of displayed vectors to get a better view of
the data. The display can be manipulated (rotated, moved and zoomed) by dragging the mouse inside the dis-
play window.

Left: The indexing option reduces the number of visible vectors allowing other displays to become more
visible. Right: By using the mouse the display can be interactively rotated, zoomed etc. based on the cam-
era mode.

The length of the vector will always represent the actual magnitude of the velocity, but the color can be
changed to represent any scalar available in the dataset. By default the color represents the length of the
vectors, but by changing the Scalar vector data property it can, for example, be the U, V or W component, or one of a
number of derivatives. By changing the color map from the default Rainbow style, the interpretation of the
vectors can be made clearer.

Left and right: By changing the color map, information and structures in the data can be made more visible.

For comparing vectors from multiple volumetric datasets, both the vector color and the vector size can be set to
fixed values. The fixed vector scale factor maps the size of the vector in mm relative to the axes to a velocity of 1
m/s.
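As a worked example (assuming this interpretation of the setting): with a fixed vector scale factor of 5, a vector representing 2 m/s is drawn 10 mm long in axis units, so vectors from different datasets that share the same factor can be compared directly by length.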

17.8.6 Iso-surfaces

As an alternative (or supplement) to the vectors, the volumetric dataset can be displayed using iso-sur-
faces. As for the vectors the color and the color map of the iso-surfaces can be changed to represent any
scalar value available in the volumetric dataset. The number and range of iso-levels can be specified, and
the iso-surfaces can be combined using the smoothing and transparency options.

Left: Iso-surfaces created at different levels in the data. Right: By using the smoothing feature the iso-sur-
faces are combined into larger structures.

17.8.7 Contours

As a third display option, planar contour slices can be added to the display. The easiest way to add con-
tour slices is to use the probing feature. The probe is an interactive contour plane that can be positioned
(moved and resized) using the mouse within the display window. When the probe is in the correct posi-
tion, it can be added as a contour slice, either by using the display options dialog or directly from the con-
text menu. The probe can optionally be restricted to move in a fixed plane, or it can be moved freely. The
probing feature is also available directly from the context menu of the display window.

Left: The probe is displayed with resizing handles in each corner and a rotation and tilt handle in the center
of the plane. Right: When converting the probe to a contour slice, the handles are removed.

Contour slices can also be added or updated manually by selecting a fixed plane perpendicular to one of
the 3 axes, or arbitrarily by typing in the coordinates of 3 of the corners defining the plane in the display
options dialog. Like the other displays, the color and the color map of the contour slices can be changed to
represent any scalar value available in the volumetric dataset. The color map used by the probe will follow
the color map for contours.

Any number of contour slices can be added to the display, and vectors, iso-surfaces and contour slices
can be displayed separately or combined for advanced displays. During investigation of your data you
may need to reset the camera using the context menu, to bring all data into the display.

17.8.8 Stereo Rendering


The 3D Display supports displaying the data in stereo rendering on the screen. There are 3 different
modes supported:

l Red-Blue is supported for legacy use. This format can be used for displaying stereo on a standard
monitor, but both the effect and the color spectrum are limited.
l Checkerboard requires active glasses synchronized to the monitor. Select this if your hardware
driver runs in checkerboard 3D mode.
l Shutterglasses requires active glasses synchronized to the monitor. Select this if your hardware
driver runs the shutterglasses mode. This mode gives the best effect and is fully color neutral.

17.8.9 Animation
The 3D Display can be animated, automatically rotating the display to view the data from all sides. This
feature is especially informative when used together with the stereo rendering feature.

17.8.10 Camera
Data can be viewed interactively using different camera modes:

l Trackball is a motion sensitive mode where motion occurs when the mouse button is pressed
while the mouse pointer is moved. This is the default mode.
l Joystick is a position sensitive mode where motion occurs continuously as long as the mouse but-
ton is pressed.
l Flight is an animated mode where you interactively fly through your data.

Use Reset view to place your data in the center of the display window.

17.8.11 Export
The default export as image and display from the main menu of DynamicStudio is extended with a local
export option. This allows for more advanced export formats besides the standard bitmap formats. The fol-
lowing formats are supported:

PNG    Portable Network Graphics (PNG) bitmap format. Provides lossless data compression, popular for Web content.

BMP    Bitmap Image File (BMP) bitmap format. Includes all pixels uncompressed. This is probably the most compatible format, but generates large files.

JPEG   JPEG (JPG, JPEG) bitmap format. Provides lossy compression generating very small images, only suitable for e.g. e-mail and Web.

TIFF   Tagged Image File Format (TIF, TIFF) bitmap format. Used mainly for scientific imaging.

EMF    Enhanced Meta File (EMF) vector and bitmap graphics format. Used under Windows as the file containing calls directly to the GDI.

EPS    Encapsulated PostScript (EPS) text and vector graphics format. Widely used and compatible with any DTP program.

VRML   Virtual Reality Modeling Language (VRML) 3D interactive vector graphics format. This format is suitable for the Web and for backward compatibility.

X3D    X3D (X3D) 3D computer graphics XML-based file format. This format is part of international ISO and Open standards.

[1] Special hardware is required for displaying and viewing data in stereo rendering.

17.9 Color map and histogram

A Lookup Table is used to determine the color and intensity with which a particular pixel value will
be displayed. It does not affect the actual greyscale values in the image, only the way they are
shown on the screen. The Color map and histogram dialog is used to manipulate the lookup table
contents.

To bring up the Color map and histogram right-click anywhere within the image display. This will
bring up a context menu as shown below.

From this menu it is possible to select the menu item Color map and histogram. Selecting this
menu item will bring up the Color map and histogram dialog as shown in the example below; alter-
natively, double-click the image display.

The Lookup table contents are shown in a graph:

l The x-axis represents all possible pixel values in the image.


- X is scaled so it goes from 0 to the maximum value supported by the greyscale depth of
the image (i.e. 255/1023/4095 for an image with 8/10/12 bits per pixel).
- In the example above the image has 10 bpp and the x-axis therefore goes from 0 to
1023.
- In the example above a color palette is used. The color of the x-axis shows what color a
particular pixel value will be displayed with when the Lookup table is applied. In this case
the pixel value 866 will be displayed with the color white.
l The y-axis represents the 256 different colors of the image display.
l The histogram shown in the background of the lookup graph is a histogram of the different
pixel values in the image.
The Histogram is normalized so the maximum value is at the top of the y-axis. In some
cases it is preferable to have the Histogram shown with a logarithmic y-axis. In this case
check the "Logarithmic histogram" check box.
l Statistical info shows information about the image:
l Minimum: The minimum pixel value found in the image.
l Maximum: The maximum pixel value found in the image.
l Number of different values: The number of different pixel values found in the display.
l Mean: The mean pixel value.
l RMS: The Root Mean Square of the different pixel values found in the image.
l Peak index: Indicates the pixel value that has the highest bin in the histogram, and the number of pixels found with this pixel value.

Color coding
It is possible to select what color is to be displayed when the pixel value falls above or below the
lookup limits.
This functionality is enabled using the Color coding check box. The colors can be selected in the
color coding section.

Palette selection
It is possible to choose between a number of different color palettes. To select a color palette, click
on the selection box in the bottom right corner and choose a palette.

LUT Types
This control lets you choose between the following types of Lookup tables (LUTs):

l Straight Line
l Gamma (see the Wikipedia explanation of Gamma correction and the sketch below)
l Hyperbolic
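As an illustration of what a Gamma-type LUT does, the sketch below builds a lookup table mapping raw pixel values to the 256 display levels. It is only a sketch under assumed parameters (10 bits per pixel, gamma = 2.2), not the software's internal implementation:

import numpy as np

def gamma_lut(bits_per_pixel=10, gamma=2.2):
    # Map raw pixel values (0..2^bpp - 1) to display levels (0..255) with a gamma curve
    max_in = 2 ** bits_per_pixel - 1            # e.g. 1023 for a 10-bit image
    x = np.arange(max_in + 1) / max_in          # normalized pixel values 0..1
    return np.round(255 * x ** (1.0 / gamma)).astype(np.uint8)

lut = gamma_lut()
# display_level = lut[pixel_value] for each pixel in the image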

To select which type of LUT to use, right-click on the graph area. The following context menu will
appear:

Fit LUT to Histogram


It is possible to have the LUT auto-adjusted based on the pixel histogram information from the
image. The adjustment ensures that 99.8% of all pixel counts are between the upper and lower
limit of the LUT.
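The idea can be sketched as follows (an illustration only, not the exact algorithm used by the software): choose the lower and upper LUT limits so that 99.8% of all pixel counts fall between them, i.e. roughly 0.1% of the counts are clipped at each end.

import numpy as np

def fit_lut_limits(image, coverage=0.998):
    # Return (low, high) pixel values enclosing 'coverage' of all pixel counts
    tail = (1.0 - coverage) / 2.0 * 100.0       # percent clipped at each end
    low, high = np.percentile(image, [tail, 100.0 - tail])
    return int(low), int(high)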

Auto-Hide
It is possible to make the Color map and histogram dialog auto hide when another display is
selected. Right click anywhere in the dialog to bring up the context menu, then check the Auto-
Hide menu item.

Use Global LUT


When checked, this option lets you define a global LUT for the camera. Each time a newly acquired
image ensemble is saved in the database, this LUT will be used. The global LUT is linked to the
camera, so if there are more cameras in the system each camera will have its own global LUT.

18 Dialogs
You can get help on a dialog by pressing the F1 key.

18.1 Field of View


18.1.1 Scaling of measurements to metric units.
The Field of View dialog is used to inspect and, if necessary, modify the origin and scale factor used to convert
pixels into metric units. This can also be done using the "Measure Scale Factor" dialog (see page 681),
but that requires a calibration image, while 'Field of View' does not.
To display the dialog, right-click on a calibration record and select the menu option "Field of View..."
For information on how to create a calibration record please refer to "Calibration Images" on page 62.

This will bring up the Field of View dialog shown below:

The dialog will list all cameras available in the current project folder, so you may inspect and possibly mod-
ify origin and scale factor for each of them.
The Scale Factor can be determined in one of three ways, each of which will automatically update the
other two when you press 'Apply':

l You can specify Scale factor directly and have the resulting image width and height shown.
l You can specify a nominal image height and have the resulting width and scale factor shown.
l You can specify a nominal image width and have the resulting height and scale factor shown.

The origin is only indirectly available, in the sense that the nominal (x,y)-coordinates for the left and bottom
edges are shown.

By default the origin will be in the lower left corner, meaning that the left edge has x=0 and the bottom edge has
y=0, but if the origin is elsewhere the bottom and left coordinates will be nonzero.
Information:
When measuring the field of view it is important to measure in a light sheet plane which must be parallel to
the image sensor plane.
Finally press the OK button to leave the dialog and save any changes. See also "Measure Scale Factor" on
page 681.

18.2 Measure Scale Factor


18.2.1 Scaling of measurements to metric units.
The Measure Scale Factor dialog is used to determine the scale factor converting pixel units into metric
units. For information on how to create a calibration record please refer to "Calibration Images" on
page 62.
To display the dialog, right-click an image ensemble inside a calibration record and select the menu option
"Measure Scale Factor..."

The Measure Scale Factor dialog will now appear:

If the calibration image appears dark it is possible to enhance the image by right-clicking inside the image
and selecting 'Color map and histogram'.
Three reference markers labeled O, A & B must be set to determine both origin and scale factor. The
markers are picked and moved around using the mouse: point at a marker, press and hold the left mouse but-
ton while dragging the marker to the new position, and release the left mouse button. If two markers are
very close to one another you can pick a specific one by pressing O, A or B on the keyboard while you
click the left mouse button.
To position markers accurately you can zoom in on a detail. There are several ways to zoom; Right-click-
ing inside the display area will open a context menu, from where a number of predefined zooms can be
chosen. Especially 'Fit to Window' is useful to return to a zoom where the entire image is scaled to fit
inside the dialog. You can also zoom by clicking and holding the left mouse button while drawing a rec-
tangle around an area of interest, or you can use the scroll-wheel on the mouse to zoom in or out around
the current mouse position. While zoomed you can pan the image around by pressing the Ctrl-key while
left-clicking and dragging the mouse. Finally you can press the Alt-key to convert the mouse cursor to a
magnifier performing a temporary local zoom while you move the mouse around.
The 'O'-marker determines the location of the origin, which by default will be in the lower left corner of the
image. If (0,0) is outside the image or cannot be identified you can pick some other reference point and
specify its nominal coordinates as the origin.
Markers 'A' & 'B' determine the scale factor. Simply position the markers at identifiable positions within
the image and specify the nominal distance between the markers. This will automatically compute the
scale factor and show it as read-only in the upper right hand part of the dialog.
Tip:
The precision of the calculated scale factor is proportional to the distance between reference marks A
and B. A large distance will give better precision.

Note:
The value of the scale factor represents the magnification of the lens system. Therefore the physical size
of a pixel element (pixel pitch) is needed to perform the conversion into metric units. The formula is:
units [mm] = Scale Factor × Pixel Pitch [mm]
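As a worked example with hypothetical numbers: if the camera has a pixel pitch of 0.0065 mm (6.5 µm) and the measured scale factor is 8, then one pixel in the image corresponds to 8 × 0.0065 mm = 0.052 mm in the light sheet plane, so a displacement of 10 pixels corresponds to 0.52 mm.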

To leave the dialog and save the current settings press the OK button.
To discard any changes made select the Cancel button.

18.3 Sort
Sorting enables you to change the sorting criteria for an ensemble. The sort option is available through
the context menu of an ensemble. The sort option will not be available if the selected ensemble contains
sub-ensembles or is analyzed.
By default all datasets in an ensemble are sorted by the acquired timestamp. Alternatively it is possible
to sort by a user-selected dataset or by a user-defined property value. All sorting options allow for
descending sorting as well.

18.3.1 Sorting by Timestamp


By default sorting by timestamp is selected. The sort will look at the timestamps of all the included data-
sets. Using this option is useful to restore the sorting if another sorting option has been applied, or to reor-
der the datasets if the ensemble was constructed by merging two or more ensembles.

18.3.2 Sorting by Data Column Value


As an advanced option it is possible to sort the ensemble using an external dataset as Selection. The only
restrictions on the external dataset are that it must be generic curve data, and that the number of data points
must match the number of datasets in the ensemble to sort. The external dataset must be made a user
selection prior to using the sort option. In the sort option it is possible to specify the generic column name
to sort by.
This sorting option is very useful for sorting an ensemble based on analyzed data. As external datasets
the Extract, Fit, and Profile analyses are useful to define the sort order.
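Conceptually this corresponds to an index sort: the datasets are reordered according to the values in the chosen column of the external curve. A minimal sketch with hypothetical data (not the software's implementation):

import numpy as np

datasets = ["image_001", "image_002", "image_003", "image_004"]   # hypothetical dataset names
column = np.array([0.7, 0.1, 0.9, 0.4])                           # one value per dataset

order = np.argsort(column)                 # ascending; reverse the order for descending sorting
sorted_datasets = [datasets[i] for i in order]
# -> ['image_002', 'image_004', 'image_001', 'image_003']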

Example
This example shows how Analog Statistics and an Analog Transformation can be used as external input
to the Sort option. The Analog Transformation curve is selected as external user input to the Sort. A dupli-
cate is made of the original analog data, and the curve is used for sorting. As a result the data cor-
responding to the smallest values are now listed on top of the sorted list.

18.3.3 Sort by Property Value
Sorting by a property value enables you to sort an ensemble based on property inputs. Custom Properties
can be added dynamically when saving data to the database or when using the acquisition manager. The
property type must be defined as a number value.

After sorting a special sort property is added to the record properties for the sorted ensemble. The sort
property contains the value used for the sorting, either from a timestamp, data column value, or property
value. The Split option uses this value to perform a custom split, and the value can be read and used by
analysis methods.

18.4 Split
Splitting enables you to split an ensemble into two or more ensembles. The split option is available
through the context menu of an ensemble. The split option will not be available if the selected ensemble
contains sub-ensembles or is analyzed.
The splitting will split the ensemble based on the current sort order. The current ensemble will be included
in the split as the first ensemble.

18.4.1 Split at Index


By default the ensemble is indexed starting at #1. Splitting at index defines a number of split widths cor-
responding to the number of datasets.

18.4.2 Split at Sort Property Value


The sort property value is added to the record properties of the ensemble, when an ensemble is sorted
using the sort option. Splitting at sort property value defines a number of split widths corresponding to the
range of the sort property values.

18.4.3 Automatic
By default the split widths are calculated automatically defining a linear distribution of the widths over the
range. Using this option will ensure an equidistant and valid splitting.
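For example, if the sorted ensemble covers a sort property range from 0 to 100 and four splits are requested, the automatic option assigns each split a width of 25, so the resulting ensembles cover 0-25, 25-50, 50-75 and 75-100.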

18.4.4 Custom
Using custom split widths, you must define your individual split widths in the list. In this case you
must ensure that the range is within the total range. Select each of the width values in the list to set new
values; click, type, or press F2 to enter new values.

As a help when defining the split widths, the total range and the current range are automatically calculated and dis-
played. If the current split is invalid, the conflicting position will be marked with a red color in the list. Typ-
ical error situations are when too many splits are defined, or when some of the splits become empty.

18.5 Merge
Merging enables you to combine results from two or more ensembles into one. The merge option is avail-
able through the context menu of multiple selected ensembles. A number of limitations apply to merg-
ing ensembles; the data in the ensembles must be of similar type and size.
