spike package

Subpackages

Submodules

spike.FTICR module

This file implements all the tools for handling FT-ICR data-sets.

It allows working with 1D and 2D data.

To use it:

    import FTICR
    d = FTICR.FTICRData(…)   # several initialisations are possible: empty, from file
    # play with d

d will allow all NPKData methods, plus a few specific ones.

Alternatively, use an importer:

    from File.(Importer_name) import Import_1D
    d = Import_1D("filename")

Created by Marc-André on 2014-08 Copyright (c) 2014 IGBMC. All rights reserved.

class spike.FTICR.FTICRAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=100000000.0, calibB=0.0, calibC=0.0, lowfreq=10000.0, highfreq=1000000.0)[source]

Bases: spike.FTMS.FTMSAxis

holds information for one FT-ICR axis; used internally

htomz(value)[source]

return m/z (mz) from Hz value (h)

mztoh(value)[source]

return Hz value (h) from m/z (mz)

report()[source]

high level reporting
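A minimal sketch of the axis conversions (parameter values are hypothetical):

    from spike import FTICR

    ax = FTICR.FTICRAxis(size=65536, specwidth=1000000.0, highmass=2000.0)
    mz = ax.htomz(500000.0)   # m/z corresponding to 500 kHz
    hz = ax.mztoh(mz)         # and back to Hz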

class spike.FTICR.FTICRData(dim=1, shape=None, mode='memory', group='resol1', buffer=None, name=None, debug=0)[source]

Bases: spike.FTMS.FTMSData

subclass of FTMS.FTMSData, meant for handling FT-ICR data; allows 1D and 2D data-sets

property Bo

estimate Bo from internal calibration

report()[source]

returns a description string of the dataset

setBo(Bovalue)[source]

set internal calibration from Bo using physical constants
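A minimal sketch of creating a dataset (parameter values are hypothetical):

    from spike import FTICR

    d = FTICR.FTICRData(dim=1)   # empty 1D dataset, default in-memory mode
    d.highmass = 2000.0          # this property is propagated to all axes
    print(d.report())            # description string of the dataset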

class spike.FTICR.FTICR_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_atob()[source]

testing unit conversion functions

test_axis()[source]

testing FTICRAxis object

test_saving_1D()[source]

Testing how save_msh5 works on 1D spectrum

test_trim1D()[source]

Test trimz

test_trim2D()[source]

Test trimz

spike.FTMS module

FTMS.py

This file defines generic classes for FT-MS Spectroscopy (FT-ICR and Orbitrap)

Not meant to be used directly, but rather called from either Orbitrap.py or FTICR.py.

Created by Marc-André on 2014-08 Copyright (c) 2014 IGBMC. All rights reserved.

class spike.FTMS.FTMSAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=1000000.0, calibB=0.0, calibC=0.0)[source]

Bases: spike.NPKData.Axis

holds information for one FT-MS axis; used internally

Hz_axis()

return axis containing Hz values, can be used for display

property borders

the (min, max) available window, typically used for display

deltamz(mz_value)[source]

computes the theoretical maximum resolution in m/z at the given m/z location

display_icalib(xref, mzref, symbol='bo')

generates a plot of the current calibration
xref: list of point coordinates of the reference points
mzref: list of reference m/z
symbol: matplotlib notation for the points (default is blue circles)

extract(zoom)[source]

redefines the axis parameters so that the new axis is extracted for the points [start:end]; zoom is defined in the current axis unit

freq_axis()[source]

return axis containing Hz values, can be used for display

htoi(value)[source]

returns point value (i) from Hz value (h)

htomz(value)[source]

return m/z (mz) from Hz value

imzmeas = array used by the calibration routines (the default value rendered by autodoc is an uninitialized placeholder, not meaningful data)
itoh(value)[source]

returns Hz value (h) from point value (i)

itomz(value)[source]

return m/z (mz) from point value (i)

itos(value)[source]

returns time value (s) from point value - valid for transients

property lowmass

lowest mass of interest - defined by the Nyquist frequency limit

mass_axis()[source]

return axis containing m/z values, can be used for display

mz_axis()

return axis containing m/z values, can be used for display

mzref = array of reference m/z used by the calibration routines (the default value rendered by autodoc is an uninitialized placeholder)
mztoh(value)[source]

return Hz value from m/z (mz)

mztoi(value)[source]

return point value (i) from m/z (mz)

ppm(xref, mzref)

computes the mean error in ppm from an array of positions (xref) and the theoretical m/z (mzref); uses the l1 norm!
xref: array of point coordinates of the reference points
mzref: array of reference m/z

ppm_error(xref, mzref)

computes the error from an array of positions (xref) and the theoretical m/z (mzref); returns an array with errors in ppm
xref: array of point coordinates of the reference points
mzref: array of reference m/z
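For illustration, a hedged sketch of checking a calibration against known peaks, assuming ax is an FTMSAxis (for instance d.axis1); all positions and m/z values below are hypothetical:

    import numpy as np

    xref = np.array([1200.0, 3500.0, 8000.0])       # hypothetical peak positions, in points
    mzref = np.array([922.010, 622.029, 322.048])   # hypothetical reference m/z values
    err = ax.ppm_error(xref, mzref)                 # per-peak calibration error, in ppm
    mean_err = ax.ppm(xref, mzref)                  # mean error (l1 norm), in ppm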

report()[source]

high level reporting

stoi(value)[source]

returns point value (i) from time value (s)

class spike.FTMS.FTMSData(dim=1, shape=None, mode='memory', buffer=None, name=None, debug=0)[source]

Bases: spike.NPKData._NPKData

subclass of NPKData, meant for handling FT-MS data; allows 1D and 2D data-sets

property highmass

copy highmass to all the axes

property ref_freq

copy ref_freq to all the axes

property ref_mass

copy ref_mass to all the axes

save_msh5(name, compressed=False)[source]

save data to a HDF5 file

if compressed is True, the file is internally compressed using HDF5 compressors (currently zlib). Not final version!

property specwidth

copy specwidth to all the axes

trimz(axis=0)[source]

extracts the data so as to keep only the lowmass..highmass range; axis determines which axis to trim, axis=0 (default) indicates all axes
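A short hedged sketch combining trimz and save_msh5 (d is assumed to be an existing FTMSData; the file name is hypothetical):

    d.trimz()                                      # keep only the lowmass..highmass range on all axes
    d.save_msh5("dataset.msh5", compressed=True)   # write to HDF5 with internal zlib compression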

spike.NMR module

NMR.py

This file defines generic classes for NMR Spectroscopy

Used to be inside NPKData

class spike.NMR.NMRAxis(size=64, specwidth=6283.185307179586, offset=0.0, frequency=400.0, itype=0, currentunit='points')[source]

Bases: spike.NPKData.Axis

holds information for one NMR axis; used internally

Hz_axis()

return axis containing Hz values, can be used for display

extract(zoom)[source]

redefines the axis parameters so that the new axis is extracted for the points [start:end]

zoom is given in current unit - does not modify the Data, only the axis definition

freq_axis()[source]

return axis containing Hz values, can be used for display

htoi(value)[source]

returns point value (i) from Hz value (h)

htop(value)[source]

returns ppm value (p) from Hz value (h)

itoh(value)[source]

returns Hz value (h) from point value (i)

itop(value)[source]

returns ppm value (p) from point value (i)

itos(value)[source]

returns time value (s) from point value

ppm_axis()[source]

return axis containing ppm values, can be used for display

ptoh(value)[source]

returns Hz value (h) from ppm value (p)

ptoi(value)[source]

returns point value (i) from ppm value (p)

report()[source]

high level reporting

stoi(value)[source]

returns point value (i) from time value (s)
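A minimal sketch of these conversions, with hypothetical parameters for a 600 MHz axis:

    from spike import NMR

    ax = NMR.NMRAxis(size=4096, specwidth=6000.0, offset=0.0, frequency=600.0)
    i = ax.ptoi(4.7)     # point index of the 4.7 ppm position
    h = ax.ptoh(4.7)     # the same position in Hz
    p = ax.itop(2048)    # ppm value at the middle of the axis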

class spike.NMR.NMRData(dim=1, shape=None, buffer=None, name=None, debug=0)[source]

Bases: spike.NPKData._NPKData

a working data used by the NPK package

The data is a numpy array, found in self.buffer; it can also be accessed directly: d[i], d[i,j], …

1D, 2D and 3D are handled; 3 axes are defined: axis1, axis2, axis3.
Axes are defined as in NMR:
in 1D, everything is in axis1
in 2D, the fastest varying dimension is in axis2, the slowest in axis1
in 3D, the fastest varying dimension is in axis3, the slowest in axis1
(see axis_index)

typical properties and methods are:

utilities:

.display() .check()

properties:

.itype .dim .size1, .size2, .size3 …

moving data :

.row(i) .col(i) .set_row(i) .set_col(i) .copy() .load() .save()

processing :

.fft() .rfft() .modulus() .apod_xxx() sg() transpose() …

arithmetics :

.fill() .mult() .add(); also direct arithmetic: f = 2*d + e

all methods return self, so computations can be piped, etc.
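Since every method returns self, processing steps chain naturally; a minimal sketch, assuming d holds a complex 1D FID and matplotlib is available for display:

    d.fft().modulus().display()   # Fourier transform, take the modulus, then plot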

copy()[source]

return a copy of itself

htoi(axis, value)[source]
htop(axis, value)[source]
itoh(axis, value)[source]
itop(axis, value)[source]
ptoh(axis, value)[source]
ptoi(axis, value)[source]
class spike.NMR.NMRDataTests(methodName='runTest')[source]

Bases: unittest.case.TestCase

test_load()[source]

Testing load methods

test_unitval()[source]

testing unit conversion functions

spike.NPKConfigParser module

A utility that wraps ConfigParser for NPK

Typical use is:

    cp = NPKConfigParser()
    cp.readfp(open(configfilename))   # configfilename is the name of the config file
    var1 = cp.get(section, varname1)
    var2 = cp.get(section, varname2, default_value)
    …

you can use get() getint() getfloat() getboolean(), see details in the methods.

Created by Marc-André on 2011-10-14. Copyright (c) 2011 IGBMC. All rights reserved.

Modified by MAD on 21 May 2012: added getword and removal of trailing comments. MAD, April 2017: adapted (painfully) to python3.

class spike.NPKConfigParser.NPKConfigParser(defaults=None, dict_type=<class 'dict'>, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section='DEFAULT', interpolation=<object object>, converters=<object object>)[source]

Bases: configparser.ConfigParser

this is a subclass of ConfigParser.ConfigParser, providing default values for missing entries; it will never raise an error on missing values

get(section, option, default=None, raw=0, verbose=False, fallback=<object object>)[source]

read a value from the configuration, with a default value

getboolean(section, option, default='OFF', raw=0, verbose=False)[source]

read a boolean value from the configuration, with a default value

getfloat(section, option, default=0.0, raw=0, verbose=False)[source]

read a float value from the configuration, with a default value

getint(section, option, default=0, raw=0, verbose=False)[source]

read an int value from the configuration, with a default value; understands 123 16k 32M 4G 2T … (units in powers of 2, not powers of 10!)
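For instance (section and option names below are hypothetical):

    nprocs = cp.getint("processing", "nprocs", default=4)     # plain integer
    sizeF1 = cp.getint("processing", "sizeF1", default=1024)  # a file value of "16k" reads as 16*1024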

getword(section, option, default=None, raw=0, verbose=False)[source]

read a value from the configuration, with a default value - takes the first word of the string

class spike.NPKConfigParser.Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_def()[source]

testing configparser default values

test_read()[source]

testing configparser - getting values from file

spike.NPKData module

NPKData.py

Implement the basic mechanisms for spectral data-sets

First version created by Marc-André and Marie-Aude on 2010-03-17.

class spike.NPKData.Axis(size=64, itype=0, currentunit='points')[source]

Bases: object

holds information for one spectral axis; used internally - a template for other axis types

property borders

the (min, max) available windows, used typically for display

check_zoom(zoom)[source]

check whether a zoom window (or any slice), given as (low, high), is valid
- check that low < high and within the axis size
- check that it starts on a real index if itype is complex
returns a boolean

copy()[source]
property cpxsize

returns the size in complex entries; this is different from size:
size == cpxsize if the axis is real
size == 2*cpxsize if the axis is complex

ctoi(val)[source]

converts into point value (i) from currentunit (c)

ctoix(val)[source]

converts into complex point value (ix) from currentunit

property currentunit

get the current unit for this axis, to be chosen in axis.units.keys()

extract(zoom)[source]

redefines the axis parameters so that the new axis is extracted for the points [start:end]

zoom is given in current unit - does not modify the Data, only the axis definition

This definition should be overloaded for each new axis type, as the calibration system associated with the unit should be updated.

get_sampling()[source]

returns the sampling scheme contained in current axis

getslice(zoom)[source]

given a zoom window (or any slice), given as (low, high) in CURRENT UNIT,

returns the value pair in index, as (start, end), which ensures that
- low < high and within the axis size
- it starts on a real index if itype is complex
- it fits in the data-set
raises an error if not possible
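A one-line sketch, assuming ax is an Axis whose currentunit is 'ppm' (the window values are hypothetical):

    start, end = ax.getslice((2.0, 8.0))   # zoom given in current unit, result in point indices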

itoc(val)[source]

converts point value (i) to currentunit (c)

itoix(val)[source]

converts point value (i) to complex value (ix) i.e. divide by 2 if axis is complex

ixtoc(val)[source]

converts complex point value (ix) to currentunit (c), i.e. multiply by 2 if the axis is complex (taking the index of the real part), then convert to currentunit

ixtoi(val)[source]

converts complex point value (ix) to point value (i), i.e. multiply by 2 if the axis is complex, and return the index of the real part

load_sampling(filename)[source]

loads the sampling scheme contained in an external file.
The file should contain index values, one per line; comment lines start with a #.
Complex axes should be sampled by complex pairs, and indices go up to self.size1/2.

sampling is loaded into self.sampling, and self.sampling_info is a dictionary with information

points_axis()[source]

return axis in points currentunit, actually 0..size-1

report()[source]
property sampled

True if the axis is sampled

set_sampling(sampling)[source]

sets the sampling scheme contained in current axis

typestr()[source]

returns its type (real or complex) as a string

unit_axis()[source]

returns an axis in the unit defined in self.currentunit

class spike.NPKData.LaplaceAxis(size=64, dmin=1.0, dmax=10.0, dfactor=1.0, currentunit='damping')[source]

Bases: spike.NPKData.Axis

holds information for one Laplace axis (such as DOSY); used internally

D_axis()[source]

return axis containing Diffusion values, can be used for display

dtoi(value)[source]

returns point value (i) from damping value (d)

itod(value)[source]

returns damping value (d) from point value (i)

load_qvalues(fname)[source]

doc

report()[source]

high level report

spike.NPKData.NPKData(*arg, **kw)[source]

a trick to ensure compatibility with modified code

class spike.NPKData.NPKDataTests(methodName='runTest')[source]

Bases: unittest.case.TestCase

Testing NPKData basic behaviour

test_TimeAxis()[source]

test TimeAxis

test_dampingunit()[source]

test itod and dtoi

test_fft()[source]

Testing FFT methods

test_flatten()[source]

test the flatten utility

test_hypercomplex_modulus()[source]

Test of hypercomplex modulus

test_math()[source]

Testing dataset arithmetics

test_plugin()[source]

Test of plugin mechanism

test_zf()[source]
spike.NPKData.NPKData_plugin(name, method, verbose=False)[source]

This function allows registering a new method inside the NPKData class.

for instance - define myfunc() anywhere in your code:

    def myfunc(npkdata, args):
        "myfunc doc"
        # ...do whatever, assuming npkdata is a NPKData
        return npkdata   # THIS is important, that is the standard NPKData mechanism

then elsewhere do:

    NPKData_plugin("mymeth", myfunc)

then all NPKData created will have the method .mymeth()

look at .plugins/__init__.py for details

class spike.NPKData.TimeAxis(size=32, tabval=None, importunit='sec', currentunit='sec', scale='linear')[source]

Bases: spike.NPKData.Axis

Not implemented yet. Holds information for one sampled time axis (such as chromatography or T2 relaxation); time values should be given as a list of values

property Tmax

largest tabulated time value

property Tmin

smallest tabulated time value

comp_interpolate()[source]

computes an interpolator if possible

htoi(value)[source]

hours to index

htos(value)[source]
itoh(value)[source]

index to hours

itom(value)[source]

index to minutes

itoms(value)[source]

index to millisec

itos(value)[source]

returns time from point value (i) - interpolated if possible

load_tabval(fname, importunit='sec')[source]

load tabulated time values from a file - plain text, one entry per line

importunit is the unit of the tabval series; the unit is chosen in ('msec', 'sec', 'min', 'hours')

mstoi(value)[source]

millisec to index

mstos(value)[source]
mtoi(value)[source]

minutes to index

mtos(value)[source]
report()[source]

high level report

set_lindisplay()[source]

set display in linear spacing

set_logdisplay()[source]

set display in log spacing

stoh(value)[source]
stoi(value)[source]

returns point value (i) from time - interpolated if possible

stom(value)[source]
stoms(value)[source]
class spike.NPKData.Unit(name='points', converter=<function ident>, bconverter=<function ident>, reverse=False, scale='linear')[source]

Bases: object

a small class to hold parameters for units
name: the name of the "unit"
converter: a function converting from points to "unit"
bconverter: a function converting from "unit" to points
reverse: direction in which the axis is displayed (True means right to left)
scale: scale along this axis, possible values are 'linear' or 'log'
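A hedged sketch of defining a hypothetical "kHz" unit; the conversion factors are placeholders, not a real calibration:

    from spike.NPKData import Unit

    kHz = Unit(name="kHz",
               converter=lambda p: p / 1000.0,   # points -> "kHz" (placeholder conversion)
               bconverter=lambda v: v * 1000.0,  # "kHz" -> points (placeholder conversion)
               reverse=True,
               scale="linear")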

spike.NPKData.as_cpx(arr)[source]

interpret arr as a complex array; useful to move between complex and real arrays (see as_float)

>>> print(as_cpx(np.arange(4.0)))
[ 0.+1.j  2.+3.j]
spike.NPKData.as_float(arr)[source]

interpret arr as a float array; useful to move between complex and real arrays (see as_cpx)

>>> print(as_float(np.arange(4)*(1+1j)))
[ 0.  0.  1.  1.  2.  2.  3.  3.]
spike.NPKData.conj_ip(a)[source]

computes conjugate() in-place

>>> print(conj_ip(np.arange(4)*(1+1j)))
[ 0.-0.j  1.-1.j  2.-2.j  3.-3.j]
spike.NPKData.copyaxes(inp, out)[source]

copy axes values from NPKData inp to out.

internal use

spike.NPKData.flatten(*arg)[source]

recursively flatten a list of lists

>>> print(flatten(((1, 2), 3, (4, (5,), (6, 7)))))
[1, 2, 3, 4, 5, 6, 7]

spike.NPKData.hypercomplex_modulus(arr, size1, size2)[source]

Calculates the modulus of an array of hypercomplex numbers.

input:
arr: hypercomplex array
size1: size counting horizontally each half quadrant
size2: size counting vertically each half quadrant

e.g.:

arr = np.array([[1, 4], [3, 7], [1, 9], [5, 7]]) is a hypercomplex array with size1 = 2 and size2 = 2
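Continuing that example, a minimal usage sketch:

    import numpy as np
    from spike.NPKData import hypercomplex_modulus

    arr = np.array([[1, 4], [3, 7], [1, 9], [5, 7]])   # the hypercomplex example above
    mod = hypercomplex_modulus(arr, 2, 2)              # modulus of the 2x2 hypercomplex array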

spike.NPKData.ident(v)[source]

an identity function, used as the default converter

spike.NPKData.parsezoom(npkd, zoom)[source]

takes zoom (in currentunit) for the NPKData npkd, and returns
in 1D: zlo, zup
in 2D: z1lo, z1up, z2lo, z2up
if zoom is None, it returns the full zone

spike.NPKData.warning(msg)[source]

issue a warning message to the user

spike.NPKError module

NPKError.py

Created by Marc-André on 2010-07-20. Copyright (c) 2010 IGBMC. All rights reserved.

exception spike.NPKError.NPKError(msg='', data=None)[source]

Bases: Exception

implements the NPK generic exception; adds the named argument data, which can be used to describe the NPKData involved

spike.Orbitrap module

This file implements all the tools for handling Orbitrap data-sets

To use it:

    import Orbitrap
    d = Orbitrap.OrbiData(…)   # several initialisations are possible: empty, from file
    # play with d

d will allow all NPKData methods, plus a few specific ones.

Alternatively, use an importer:

    from File.(Importer_name) import Import_1D
    d = Import_1D("filename")

Created by Marc-André on 2014-09 Copyright (c) 2014 IGBMC. All rights reserved.

class spike.Orbitrap.OrbiAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=0.0, calibB=100000000000000.0, calibC=0.0)[source]

Bases: spike.FTMS.FTMSAxis

holds information for one Orbitrap axis; used internally

htomz(value)[source]

return m/z (mz) from Hertz value (h)

mztoh(value)[source]

return Hz value (h) from m/z (mz)

report()[source]

high level reporting

class spike.Orbitrap.OrbiData(dim=1, shape=None, mode='memory', buffer=None, name=None, debug=0)[source]

Bases: spike.FTMS.FTMSData

subclass of FTMS.FTMSData, meant for handling Orbitrap data. Doc to be written…

class spike.Orbitrap.Orbi_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_atob()[source]

testing unit conversion functions

spike.Tests module

Tests.py

Created by Marc-André on 2010-07-20.

Runs tests on selected modules using the integrated unittests in the different SPIKE modules.

Most default values can be overridden with run-time arguments.

Example on a module :

python -m spike.Tests -D DATA_test -t File.Apex

class spike.Tests.NPKTest(methodName='runTest')[source]

Bases: unittest.case.TestCase

overload unittest.TestCase for default verbosity - Not Used -

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

spike.Tests.cleandir(verbose=True)[source]

checks the files in the DATA_dir directory and removes files created by previous tests

spike.Tests.cleanspike()[source]

Removes the .pyc files in spike

spike.Tests.directory()[source]

returns the location of the directory containing the datasets for tests

spike.Tests.do_Test()[source]

Performs all tests, then indicates whether they were successful. Gives the total time elapsed.

spike.Tests.filename(name)[source]

returns the full name of a test dataset located in the test directory

spike.Tests.main()[source]
spike.Tests.msg(st, sep='=')[source]

Message in Tests.py

spike.dev_setup module

dev_setup.py

To be called any time a new version is rolled out !

Created by Marc-André on 2010-07-20.

spike.dev_setup.copynb()[source]

copies the notebooks

spike.dev_setup.do(arg)[source]

print and execute

spike.dev_setup.generate_file(fname)[source]

writes the version to the file fname (usually "version.py"), used later on when version.py is imported at SPIKE initialization. No revision number is included.

spike.dev_setup.generate_file_rev(fname)[source]

writes the version to the file fname (usually "version.py"), used later on when version.py is imported at SPIKE initialization.

it assumes hg is used on the client side - no svn versioning.

spike.dev_setup.generate_notes(fname)[source]

write the release notes file

spike.dev_setup.generate_version()[source]

generates the version string, revision id and date

spike.dev_setup.plier()[source]

builds the zip

spike.processing module

Processing.py

This program performs the processing of a 2D FT-ICR dataset

First version by Marc-Andre on 2011-09-23.

class spike.processing.Proc_Parameters(configfile=None, verif=True)[source]

Bases: object

this class is a container for processing parameters

from_json(jsontxt)[source]

updates attributes from json text input

load(cp, verif=True)[source]

load from the cp config file - it should have been opened with ConfigParser() first; if verif == True (the default), the configuration is checked for integrity

report()[source]

print a formatted report

to_json()[source]

creates a json output of self

verify()[source]

checks the internal coherence of the parameters

spike.processing.Report_Table_Param()[source]
spike.processing.Set_Table_Param()[source]
class spike.processing.Test(methodName='runTest')[source]

Bases: unittest.case.TestCase

tests

test_intelli()[source]

testing ‘intelligent’ rounding

test_proc()[source]

apply a complete processing test

test_zf()[source]

testing zerofilling computation

spike.processing.apod(d, size, axis=0)[source]

apply apodisation and change size; 4 cases:

  • 2D F1 or F2

  • 1D coming from F1 or F2

1D, or 2D in F2, is the default - apodisation is apod_sin(0.5); in 2D F1 (axis=1) the apodisation is kaiser(5)

3 situations:

size after > size before
size after < size before
size after == size before

spike.processing.comp_sizes(d0, zflist=None, szmlist=None, largest=8589934592, sizemin=1024, vignette=True)[source]
return a list of data-sizes, computed either from
zflist: zerofilling indices, e.g. (1, 0, -1)
szmlist: multiplicant pairs, e.g. (2, 2)

largest determines the largest dataset allowed
sizemin determines the minimum size when down-zerofilling
when vignette == True (default), a minimum size dataset (defined by sizemin) is appended to the list

spike.processing.do_proc_F1(dinp, doutp, parameter)[source]

scan all cols of dinp, apply proc() and store into doutp

spike.processing.do_proc_F1_demodu_modu(dinp, doutp, parameter)[source]

as do_proc_F1, but applies demodu and then complex modulus() at the end

spike.processing.do_proc_F1_modu(dinp, doutp, parameter)[source]

as do_proc_F1, but applies hypercomplex modulus() at the end

spike.processing.do_proc_F2(dinp, doutp, parameter)[source]

do the F2 processing - serial code

spike.processing.do_proc_F2mp(dinp, doutp, parameter)[source]

do the F2 processing in MP

spike.processing.do_process2D(dinp, datatemp, doutp, parameter)[source]

apply the processing to an input 2D data set dinp; the result is found in the output file doutp

dinp and doutp should have been created beforehand; the size of doutp determines the processing. A temporary file will be used if needed.

spike.processing.downsample2D(data, outp, n1, n2, compress=False, compress_level=3.0)[source]

takes data (a 2D) and generates a smaller dataset downsampled by factor (n1, n2) on each axis; the returned data-set is n1*n2 times smaller
- does a filtered decimation along n2
- simply takes the mean along n1
- sets to zero all entries below 3*sigma if compress is True
** Not fully tested on non powers of 2 **

spike.processing.hmclear(d)[source]

given a 1D spectrum d, sets to zero all points between freq 0 and highmass; helps compression

spike.processing.intelliround(x)[source]

returns a number rounded to the nearest ‘round’ (easy to FT) integer

spike.processing.iterarg(dinp, rot, size, parameter)[source]

an iterator used by the processing to allow multiprocessing or MPI set-up

spike.processing.iterargF2(dinp, size, scan)[source]

an iterator used by the F2 processing to allow multiprocessing or MPI set-up

spike.processing.load_input(name)[source]

load input file and returns it, in read-only mode

spike.processing.main(argv=None)[source]

Does the whole on-file processing; syntax is: processing.py [configuration_file.mscf]. If no argument is given, the standard file process.mscf is used.

spike.processing.pred_sizes(d0, szmult=(1, 1), sizemin=1024)[source]

given an input data set, determines the optimum sizes s1, s2 to process it with a size multiplicant of szmult

szmult = (szm1, szm2) where szm1 is the multiplicant for s1 and szm2 for s2
szmx = 1: no change / 2: size doubling / 0.5: size halving
any strictly positive value is possible: 0.2, 0.33, 1.1, 2, 2.2, 5, etc…

however, axes can never get smaller than sizemin
returns (si1, si2, …) as the dataset dimension

spike.processing.pred_sizes_zf(d0, zf=0, sizemin=1024)[source]

given an input data set, determines the optimum sizes s1, s2 to process it with a zerofilling of zf
zf = +n means doubling n times along each axis
zf = -n means halving n times along each axis
zf = 0 means no zerofilling
however, axes can never get smaller than sizemin
returns (si1, si2, …) as the dataset dimension
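A hedged sketch of the zerofilling arithmetic, assuming d0 is an existing 2D dataset:

    si1, si2 = pred_sizes_zf(d0, zf=1)    # each axis doubled once
    si1, si2 = pred_sizes_zf(d0, zf=-2)   # each axis halved twice, but never below sizemin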

spike.processing.print_time(t, st='Processing time')[source]

prints processing time, t is in seconds

spike.processingPH module

spike.version module

spike.version.report()[source]

prints version name when SPIKE starts

spike.version_rev module

spike.version_rev.report()[source]

prints version name when SPIKE starts

Module contents

The Spike Package