image_analysis.pipeline.feature.Feature(key_name, batch_op=False, frame_op=False, save=False)

Bases: object

| Parameter | Description |
|---|---|
| batch_op | boolean indicating that the feature runs on batches of frames |
| frame_op | boolean indicating that the feature runs on each frame |
| save | boolean indicating whether to save the feature in the output dict |
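A minimal subclassing sketch, assuming a custom feature implements a per-frame `extract` method in the style of `OrientationFilter.extract` below; the `MeanLuminance` class and its `key_name` are hypothetical.

```python
import numpy as np

from image_analysis.pipeline.feature import Feature


class MeanLuminance(Feature):
    """Hypothetical per-frame feature, shown only to illustrate the flags."""

    def __init__(self, save=True):
        # frame_op=True marks this as a per-frame operation; key_name is the
        # key under which results would appear in the output dict.
        super().__init__(key_name='mean_luminance', frame_op=True, save=save)

    def extract(self, frame):
        # Assumed per-frame hook, modeled on OrientationFilter.extract(frame).
        return float(np.mean(frame))
```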
image_analysis.pipeline.fft.FFT(inputshape, usegpu=False, nthreads=1)

Bases: image_analysis.pipeline.feature.Feature

| Parameter | Description |
|---|---|
| inputshape | shape of the input, which must be known in advance for performance reasons |
| usegpu | boolean to run the FFT on the GPU (TODO: not yet implemented) |
| nthreads | number of threads used to run the FFT in multithreaded mode |
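A hedged instantiation sketch; the frame shape and thread count are illustrative values, not library defaults.

```python
from image_analysis.pipeline.fft import FFT

# The input shape is declared up front so the FFT plan can be built once;
# 512x512 frames and 4 threads are illustrative choices.
fft_feature = FFT(inputshape=(512, 512), usegpu=False, nthreads=4)
```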
image_analysis.pipeline.orientation_filter.OrientationFilter(mask='bowtie', center_orientation=90, orientation_width=20, high_cutoff=None, low_cutoff=0.1, target_size=None, falloff='')

Bases: image_analysis.pipeline.feature.Feature

| Parameter | Description |
|---|---|
| inputshape | shape of the input for the pyfftw builder |
| center_orientation | int, center orientation of the filter (0-180) |
| orientation_width | int, orientation width of the filter |
| high_cutoff | int, high spatial frequency cutoff |
| low_cutoff | int, low spatial frequency cutoff |
| target_size | int, total size |
| falloff | string, 'triangle' or 'rectangle'; shape of the filter falloff from the center |
| nthreads | number of threads used in multithreaded mode |
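A hedged construction sketch; the cutoff and size values are illustrative, and `'triangle'` is one of the two documented falloff shapes.

```python
from image_analysis.pipeline.orientation_filter import OrientationFilter

# Bowtie filter centered on 90 degrees with a 20-degree width and a
# triangular falloff; target_size and low_cutoff values are illustrative.
ofilter = OrientationFilter(mask='bowtie', center_orientation=90,
                            orientation_width=20, high_cutoff=None,
                            low_cutoff=0.1, target_size=512,
                            falloff='triangle')
```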
bowtie(center_orientation, orientation_width, high_cutoff, low_cutoff, target_size, falloff='')

| Parameter | Description |
|---|---|
| center_orientation | int, center orientation of the filter (0-180) |
| orientation_width | int, orientation width of the filter |
| high_cutoff | int, high spatial frequency cutoff |
| low_cutoff | int, low spatial frequency cutoff |
| target_size | int, total size |
| falloff | string, 'triangle' or 'rectangle'; shape of the filter falloff from the center |

Returns: the bowtie-shaped filter.
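A hedged sketch of building the mask directly; that `bowtie` is called on an `OrientationFilter` instance and returns a 2-D numpy array of side `target_size` are assumptions, and all values are illustrative.

```python
from image_analysis.pipeline.orientation_filter import OrientationFilter

ofilter = OrientationFilter(target_size=512)
# Construct the bowtie mask itself rather than applying it to a frame.
mask = ofilter.bowtie(center_orientation=90, orientation_width=20,
                      high_cutoff=256, low_cutoff=0.1,
                      target_size=512, falloff='triangle')
```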
extract(frame)

| Parameter | Description |
|---|---|
| frame | (m x n) numpy array |
| mask | int determining the type of filter to apply, where 1 = iso (noise amp) and 2 = horizontal decrement (bowtie) |
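A hedged per-frame usage sketch; the random array stands in for real image data, and the assumption that `extract` returns the filtered frame is not confirmed by the table above.

```python
import numpy as np

from image_analysis.pipeline.orientation_filter import OrientationFilter

ofilter = OrientationFilter(target_size=512, falloff='triangle')
frame = np.random.rand(512, 512)   # stand-in (m x n) input frame
filtered = ofilter.extract(frame)  # assumed to return the filtered frame
```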
image_analysis.pipeline.pipeline.Pipeline(data=None, ops=None, seq=None, save_all=None, models=None)

Bases: object

| Parameter | Description |
|---|---|
| data | image data |
| ops | features to run on the images; these ops have no dependencies on one another |
| seq | features to run sequentially (the output of one is the input to the next) |
| save_all | boolean indicating whether to save all features that were run |
| models | dictionary of statistical models to run on the data |
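A hedged construction sketch; treating `data` as a list of frames, the particular features chosen, and the `'svm'` model key are all assumptions made for illustration.

```python
import numpy as np

from image_analysis.pipeline.fft import FFT
from image_analysis.pipeline.orientation_filter import OrientationFilter
from image_analysis.pipeline.pipeline import Pipeline
from image_analysis.pipeline.svm import SVM

frames = [np.random.rand(512, 512) for _ in range(10)]  # stand-in image data

pipe = Pipeline(
    data=frames,
    # Independent features with no dependencies on one another.
    ops=[FFT(inputshape=(512, 512))],
    # Sequential chain: the orientation filter's output feeds the FFT.
    seq=[OrientationFilter(target_size=512, falloff='triangle'),
         FFT(inputshape=(512, 512))],
    save_all=True,
    models={'svm': SVM(gamma=0.001)},  # assumed dict-of-models layout
)
```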
as_ndarray(frame_key=None, batch_key=None, seq_key=None)

| Parameter | Description |
|---|---|
| frame_key | key of the frame feature to get |
| batch_key | key of the batch feature to get |
| seq_key | key of the seq operations to get |
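Continuing the hypothetical `pipe` object from the construction sketch above; the `'fft'` key is an assumption about the `key_name` the feature was given.

```python
# Pull one feature's per-frame results out as a numpy array.
fft_per_frame = pipe.as_ndarray(frame_key='fft')
```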
extract(keep_input_data=True)

| Parameter | Description |
|---|---|
| keep_input_data | boolean indicating whether to keep the original input data |
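Continuing the same hypothetical `pipe`; that `keep_input_data=False` drops the raw frames after extraction to save memory is an assumption.

```python
# Run all registered ops and seq features over the data.
pipe.extract(keep_input_data=False)
```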
predict(X, model='')

| Parameter | Description |
|---|---|
| X | the data X to predict labels for |
| model | optional name of a specific model to predict with; if none is specified, all models are used |
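Continuing the same hypothetical `pipe`; the `'svm'` model name and the shape of `X_new` are assumptions made for illustration.

```python
import numpy as np

X_new = np.random.rand(5, 64)              # stand-in feature vectors
labels = pipe.predict(X_new, model='svm')  # omit model to use all models
```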
set_batch_ops(batch_ops=None)

| Parameter | Description |
|---|---|
| batch_ops | features to put in batch_ops |
set_empty_frame(batch_ops, frame_ops, seq_ops)

| Parameter | Description |
|---|---|
| batch_ops | features to extract from batches |
| frame_ops | features to extract from frames |
| seq_ops | features to extract sequentially |
set_frame_ops(frame_ops=None)

| Parameter | Description |
|---|---|
| frame_ops | features to put in frame_ops |
set_ops(ops=None, seq=None)

| Parameter | Description |
|---|---|
| ops | features to run on the images; these ops have no dependencies on one another |
| seq | features to run sequentially (the output of one is the input to the next) |
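Continuing the same hypothetical `pipe`, a hedged sketch of swapping in a new set of operations after construction; the features chosen are illustrative.

```python
from image_analysis.pipeline.fft import FFT
from image_analysis.pipeline.orientation_filter import OrientationFilter

pipe.set_ops(ops=[FFT(inputshape=(512, 512))],
             seq=[OrientationFilter(target_size=512, falloff='triangle')])
```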
image_analysis.pipeline.svm.SVM(gamma=0.001)

Bases: image_analysis.pipeline.feature.Feature

| Parameter | Description |
|---|---|
| gamma | kernel coefficient |
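A minimal sketch; passing the model to a `Pipeline` through the `models` dict is an assumption about how models are registered.

```python
from image_analysis.pipeline.pipeline import Pipeline
from image_analysis.pipeline.svm import SVM

svm_model = SVM(gamma=0.001)                # kernel coefficient
pipe = Pipeline(models={'svm': svm_model})  # assumed dict-of-models layout
```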