spike package¶
Subpackages¶
- spike.Algo package
- Submodules
- spike.Algo.BC module
- spike.Algo.CS_transformations module
- spike.Algo.Cadzow module
- spike.Algo.Cadzow_mpi module
- spike.Algo.Cadzow_mpi2 module
- spike.Algo.Laplace module
- spike.Algo.Linpredic module
- spike.Algo.SL0 module
- spike.Algo.SL0_test module
- spike.Algo.maxent module
- spike.Algo.rQRd module
- spike.Algo.sane module
- spike.Algo.sane_old module
- spike.Algo.savitzky_golay module
- spike.Algo.urQRd module
- Module contents
- spike.Display package
- spike.File package
- Submodules
- spike.File.Apex module
- spike.File.Apex0 module
- spike.File.BrukerMS module
- spike.File.BrukerNMR module
- spike.File.BrukerSMX module
- spike.File.GifaFile module
- spike.File.HDF5File module
- spike.File.Solarix module
- spike.File.Spinit module
- spike.File.Thermo module
- spike.File.csv module
- spike.File.mzXML module
- Module contents
- spike.Interactive package
- Submodules
- spike.Interactive.FTICR_INTER module
- spike.Interactive.FTICR_INTER_v2 module
- spike.Interactive.INTER copy module
- spike.Interactive.INTER module
- spike.Interactive.INTER_2D module
- spike.Interactive.INTER_MS (copy) module
- spike.Interactive.INTER_MS module
- spike.Interactive.INTER_MS_turn module
- spike.Interactive.ipyfilechooser module
- Module contents
- spike.Miscellaneous package
- spike.plugins package
- Submodules
- spike.plugins.Fitter module
- spike.plugins.Linear_prediction module
- spike.plugins.Peaks module
- spike.plugins.bcorr module
- spike.plugins.fastclean module
- spike.plugins.gaussenh module
- spike.plugins.rem_ridge module
- spike.plugins.sane module
- spike.plugins.sg module
- spike.plugins.test module
- spike.plugins.urQRd module
- spike.plugins.zoom3D module
- Module contents
- spike.util package
- Submodules
- spike.util.ConfigForm module
- spike.util.ValErr module
- spike.util.compat module
- spike.util.counter module
- spike.util.debug_tools module
- spike.util.dynsubplot module
- spike.util.hashcheck module
- spike.util.log_all module
- spike.util.log_stderr module
- spike.util.log_stdout module
- spike.util.mail module
- spike.util.mail_error_logger module
- spike.util.mpiutil module
- spike.util.near_prime module
- spike.util.progrbarUi module
- spike.util.progressbar module
- spike.util.readwrite_tools module
- spike.util.sendgmail module
- spike.util.shift_comments module
- spike.util.signal_tools module
- spike.util.simple_logger module
- spike.util.simple_logger2 module
- spike.util.widgets module
- Module contents
- spike.v1 package
- Submodules
- spike.v1.Bruker module
- spike.v1.Generic module
- spike.v1.GenericDosy module
- spike.v1.GenericMaxEnt module
- spike.v1.Kore module
- spike.v1.Launch module
- spike.v1.NPKv1 module
- spike.v1.Nucleus module
- spike.v1.Param module
- spike.v1.Process1D module
- spike.v1.Process2D module
- spike.v1.Process3D module
- spike.v1.ProcessDosy module
- spike.v1.mflops module
- spike.v1.test module
- spike.v1.v1_Tests module
- Module contents
Submodules¶
spike.FTICR module¶
This file implements all the tools for handling FT-ICR data-sets.
It allows working with 1D and 2D data.
To use it:

    import FTICR
    d = FTICR.FTICRData(...)   # several initialisations are possible: empty, from file
    # play with d

d gives access to all NPKData methods, plus a few specific ones.
Alternatively, use an importer:

    from File.(Importer_name) import Import_1D
    d = Import_1D("filename")

Created by Marc-André on 2014-08. Copyright (c) 2014 IGBMC. All rights reserved.
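A hedged, minimal sketch of such a session (the file path is invented; the Apex importer is one of the File importers listed above):

    from spike.File.Apex import Import_1D
    d = Import_1D("my_dataset.d")   # builds an FTICRData from a Bruker Apex directory
    d.display()                     # .display() is inherited from NPKData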
class spike.FTICR.FTICRAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=100000000.0, calibB=0.0, calibC=0.0, lowfreq=10000.0, highfreq=1000000.0)[source]¶
Bases: spike.FTMS.FTMSAxis
hold information for one FT-ICR axis, used internally

class spike.FTICR.FTICRData(dim=1, shape=None, mode='memory', group='resol1', buffer=None, name=None, debug=0)[source]¶
Bases: spike.FTMS.FTMSData
subclass of FTMS.FTMSData, meant for handling FT-ICR data; allows 1D and 2D data-sets

property Bo¶
estimate Bo from the internal calibration
spike.FTMS module¶
FTMS.py
This file defines generic classes for FT-MS Spectroscopy (FT-ICR and Orbitrap)
not meant to be used directly, but rather called from either Orbitrap.py or FTICR.py
Created by Marc-André on 2014-08. Copyright (c) 2014 IGBMC. All rights reserved.
class spike.FTMS.FTMSAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=1000000.0, calibB=0.0, calibC=0.0)[source]¶
Bases: spike.NPKData.Axis
hold information for one FT-MS axis, used internally

Hz_axis()¶
return an axis containing Hz values, can be used for display

property borders¶
the (min, max) available window, used typically for display

display_icalib(xref, mzref, symbol='bo')¶
generates a plot of the current calibration
xref: list of point coordinates of the reference points
mzref: list of reference m/z
symbol: matplotlib notation for the points (default is blue circles)

extract(zoom)[source]¶
redefines the axis parameters so that the new axis is extracted for the points [start:end]
zoom is defined in the current axis unit

imzmeas¶
class attribute holding the point positions of measured calibration peaks; the default value captured by the documentation generator is an uninitialized array and is not meaningful

property lowmass¶
lowest mass of interest - defined by the Nyquist frequency limit

mz_axis()¶
return an axis containing m/z values, can be used for display

mzref¶
class attribute holding the reference m/z values used for calibration; the default value captured by the documentation generator is an uninitialized array and is not meaningful

ppm(xref, mzref)¶
computes the mean error in ppm from an array of positions (xref) and the theoretical m/z (mzref); uses the l1 norm!
xref: array of point coordinates of the reference points
mzref: array of reference m/z

ppm_error(xref, mzref)¶
computes the error from an array of positions (xref) and the theoretical m/z (mzref); returns an array with errors in ppm
xref: array of point coordinates of the reference points
mzref: array of reference m/z
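A hedged usage sketch of the calibration check (the peak positions and m/z values below are invented for illustration):

    import numpy as np
    from spike import FTMS

    ax = FTMS.FTMSAxis(size=1024, specwidth=1000000.0, calibA=1000000.0)
    xref = np.array([100.0, 250.0, 600.0])       # point coordinates of reference peaks
    mzref = np.array([922.01, 368.80, 153.67])   # theoretical m/z of the same peaks
    print(ax.ppm(xref, mzref))                   # mean absolute error, in ppm
    print(ax.ppm_error(xref, mzref))             # per-peak errors, in ppm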
class spike.FTMS.FTMSData(dim=1, shape=None, mode='memory', buffer=None, name=None, debug=0)[source]¶
Bases: spike.NPKData._NPKData
subclass of NPKData, meant for handling FT-MS data; allows 1D and 2D data-sets

property highmass¶
copy highmass to all the axes

property ref_freq¶
copy ref_freq to all the axes

property ref_mass¶
copy ref_mass to all the axes

save_msh5(name, compressed=False)[source]¶
save data to an HDF5 file
if compressed is True, the file is internally compressed using HDF5 compressors (currently zlib) - not the final version!
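A minimal call sketch (the file name is invented; d is assumed to be an FTMSData or FTICRData):

    d.save_msh5("spectrum.msh5", compressed=True)   # writes d to HDF5, zlib-compressed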
property specwidth¶
copy specwidth to all the axes
spike.NMR module¶
NMR.py
This file defines generic classes for NMR Spectroscopy
Used to be inside NPKData
class spike.NMR.NMRAxis(size=64, specwidth=6283.185307179586, offset=0.0, frequency=400.0, itype=0, currentunit='points')[source]¶
Bases: spike.NPKData.Axis
hold information for one NMR axis, used internally

Hz_axis()¶
return an axis containing Hz values, can be used for display

class spike.NMR.NMRData(dim=1, shape=None, buffer=None, name=None, debug=0)[source]¶
Bases: spike.NPKData._NPKData
a working data set used by the NPK package
The data is a numpy array, found in self.buffer; it can also be accessed directly: d[i], d[i,j], …
1D, 2D and 3D are handled; 3 axes are defined: axis1, axis2, axis3.
Axes are defined as in NMR: in 1D, everything is in axis1; in 2D, the fastest varying dimension is in axis2, the slowest in axis1; in 3D, the fastest varying dimension is in axis3, the slowest in axis1 (see axis_index).
Typical properties and methods are:
- utilities:
  .display() .check()
- properties:
  .itype .dim .size1, .size2, .size3 …
- moving data:
  .row(i) .col(i) .set_row(i) .set_col(i) .copy() .load() .save()
- processing:
  .fft() .rfft() .modulus() .apod_xxx() sg() transpose() …
- arithmetics:
  .fill() .mult() .add()
  also direct arithmetics: f = 2*d+e
all methods return self, so computation can be piped etc. (see the sketch below)
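A hedged illustration of this piping mechanism, assuming d is a 1D NMRData holding a raw FID (the apodisation call is one plausible choice among the .apod_xxx() family):

    d.apod_sin(0.5).fft().modulus().display()   # each call returns self, so the steps chain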
spike.NPKConfigParser module¶
A utility that wraps ConfigParser for NPK
Typical use is:

    cp = NPKConfigParser()
    cp.readfp(open(configfilename))   # configfilename is the name of the config file
    var1 = cp.get(section, varname1)
    var2 = cp.get(section, varname2, default_value)
    ...

you can use get() getint() getfloat() getboolean() - see details in the methods.
Created by Marc-André on 2011-10-14. Copyright (c) 2011 IGBMC. All rights reserved.
MAD modified on 21 May 2012 - added getword and removal of trailing comments. MAD, in April 2017: adapted (painfully) to python3.
class spike.NPKConfigParser.NPKConfigParser(defaults=None, dict_type=<class 'dict'>, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section='DEFAULT', interpolation=<object object>, converters=<object object>)[source]¶
Bases: configparser.ConfigParser
a subclass of ConfigParser.ConfigParser providing a default value for missing entries; will never raise an error on missing values

get(section, option, default=None, raw=0, verbose=False, fallback=<object object>)[source]¶
read a value from the configuration, with a default value

getboolean(section, option, default='OFF', raw=0, verbose=False)[source]¶
read a boolean value from the configuration, with a default value

getfloat(section, option, default=0.0, raw=0, verbose=False)[source]¶
read a float value from the configuration, with a default value
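A hedged usage sketch (the file, section and option names are all invented):

    from spike.NPKConfigParser import NPKConfigParser

    cp = NPKConfigParser()
    cp.readfp(open("process.mscf"))                          # invented config file name
    size = cp.getfloat("processing", "final_size", 4096.0)   # 4096.0 if the entry is absent
    dofit = cp.getboolean("processing", "do_fit", "OFF")     # 'OFF' if the entry is absent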
spike.NPKData module¶
NPKData.py
Implement the basic mechanisms for spectral data-sets
First version created by Marc-André and Marie-Aude on 2010-03-17.
class spike.NPKData.Axis(size=64, itype=0, currentunit='points')[source]¶
Bases: object
hold information for one spectral axis, used internally; a template for other axis types

property borders¶
the (min, max) available window, used typically for display

check_zoom(zoom)[source]¶
check whether a zoom window (or any slice), given as (low, high), is valid:
- check low < high and within the axis size
- check that it starts on a real index if itype is complex
return a boolean

property cpxsize¶
returns the size in complex entries
this is different from size: size == cpxsize if the axis is real, and size == 2*cpxsize if the axis is complex
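A small sketch consistent with this definition (itype=1 denoting a complex axis, as used throughout this page):

    from spike.NPKData import Axis

    print(Axis(size=128, itype=0).cpxsize)   # 128 : real axis, cpxsize == size
    print(Axis(size=128, itype=1).cpxsize)   # 64  : complex axis, size == 2*cpxsize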
property currentunit¶
get the current unit for this axis, to be chosen in axis.units.keys()

extract(zoom)[source]¶
redefines the axis parameters so that the new axis is extracted for the points [start:end]
zoom is given in the current unit; does not modify the data, only the axis definition
This definition should be overloaded for each new axis type, as the calibration system associated with the unit should be updated.

getslice(zoom)[source]¶
given a zoom window (or any slice), given as (low, high) in the CURRENT UNIT,
returns the pair of indices (start, end), ensuring that:
- low < high and within the axis size
- it starts on a real index if itype is complex
- it fits in the data-set
raises an error if not possible
itoix(val)[source]¶
converts a point index (i) to a complex index (ix), i.e. divides by 2 if the axis is complex

ixtoc(val)[source]¶
converts a complex index (ix) to a point-based value, i.e. multiplies by 2 if the axis is complex, and returns the index of the real part

ixtoi(val)[source]¶
converts a complex index (ix) back to a point index (i), i.e. multiplies by 2 if the axis is complex, and returns the index of the real part
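A hedged arithmetic sketch of these conversions, following the descriptions above:

    from spike.NPKData import Axis

    ax = Axis(size=128, itype=1)   # complex axis
    print(ax.itoix(10))            # point index 10 -> complex index 5
    print(ax.ixtoi(5))             # complex index 5 -> point index 10 (real part)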
load_sampling(filename)[source]¶
loads the sampling scheme contained in an external file
the file should contain index values, one per line; comment lines start with a #
complex axes should be sampled by complex pairs, and indices go up to self.size1/2
the sampling is loaded into self.sampling, and self.sampling_info is a dictionary with information
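A hedged sketch of such a sampling file (the indices are invented):

    # sampling scheme - acquired increments, one index per line
    0
    1
    2
    5
    9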
property sampled¶
True if the axis is sampled
class spike.NPKData.LaplaceAxis(size=64, dmin=1.0, dmax=10.0, dfactor=1.0, currentunit='damping')[source]¶
Bases: spike.NPKData.Axis
hold information for one Laplace axis (such as DOSY), used internally

class spike.NPKData.NPKDataTests(methodName='runTest')[source]¶
Bases: unittest.case.TestCase
Testing NPKData basic behaviour
spike.NPKData.NPKData_plugin(name, method, verbose=False)[source]¶
This function allows registering a new method inside the NPKData class.
For instance, define myfunc() anywhere in your code:

    def myfunc(npkdata, args):
        "myfunc doc"
        # ...do whatever, assuming npkdata is a NPKData
        return npkdata   # THIS is important - it is the standard NPKData mechanism

then elsewhere do:

    NPKData_plugin("mymeth", myfunc)

and all NPKData created afterwards will have the method .mymeth()
look at .plugins/__init__.py for details
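A self-contained hedged sketch of the same mechanism (the plugin name and its behaviour are invented):

    from spike.NPKData import NPKData_plugin

    def double(npkdata):
        "hypothetical plugin: multiply the data buffer by 2"
        npkdata.buffer *= 2
        return npkdata          # return self so the call can be chained

    NPKData_plugin("double", double)
    # from now on, any NPKData instance d exposes d.double()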
class spike.NPKData.TimeAxis(size=32, tabval=None, importunit='sec', currentunit='sec', scale='linear')[source]¶
Bases: spike.NPKData.Axis
Not implemented yet
hold information for one sampled time axis (such as a chromatogram or T2 relaxation); time values should be given as a list of values

property Tmax¶
largest tabulated time value

property Tmin¶
smallest tabulated time value
class spike.NPKData.Unit(name='points', converter=<function ident>, bconverter=<function ident>, reverse=False, scale='linear')[source]¶
Bases: object
a small class to hold parameters for units
name: the name of the "unit"
converter: a function converting from points to "unit"
bconverter: a function converting from "unit" to points
reverse: direction in which the axis is displayed (True means right to left)
scale: scale along this axis; possible values are 'linear' or 'log'
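A hedged construction sketch (the conversion functions and dwell time are invented):

    from spike.NPKData import Unit

    dt = 0.001   # hypothetical dwell time, in seconds
    msec = Unit(name='msec',
                converter=lambda p: p * dt * 1000.0,     # points -> milliseconds
                bconverter=lambda m: m / (dt * 1000.0))  # milliseconds -> points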
spike.NPKData.as_cpx(arr)[source]¶
interpret arr as a complex array
useful to move between complex and real arrays (see as_float)

>>> print(as_cpx(np.arange(4.0)))
[0.+1.j 2.+3.j]

spike.NPKData.as_float(arr)[source]¶
interpret arr as a float array
useful to move between complex and real arrays (see as_cpx)

>>> print(as_float(np.arange(4)*(1+1j)))
[0. 0. 1. 1. 2. 2. 3. 3.]

spike.NPKData.conj_ip(a)[source]¶
computes conjugate() in-place

>>> conj_ip(np.arange(4)*(1+1j))
[0.-0.j 1.-1.j 2.-2.j 3.-3.j]

spike.NPKData.flatten(*arg)[source]¶
flatten recursively a list of lists

>>> print(flatten(((1, 2), 3, (4, (5,), (6, 7)))))
[1, 2, 3, 4, 5, 6, 7]
spike.NPKData.hypercomplex_modulus(arr, size1, size2)[source]¶
Calculates the modulus of an array of hypercomplex numbers.
input:
arr: hypercomplex array
size1: size counting horizontally each half quadrant
size2: size counting vertically each half quadrant
e.g.: arr = np.array([[1, 4], [3, 7], [1, 9], [5, 7]]) is a hypercomplex array with size1 = 2 and size2 = 2
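A hedged sketch of the underlying arithmetic, assuming the standard hypercomplex modulus sqrt(rr**2 + ri**2 + ir**2 + ii**2) over each 2x2 quadrant (the exact in-memory layout is an assumption, not confirmed by this page):

    import numpy as np

    arr = np.array([[1, 4], [3, 7], [1, 9], [5, 7]], dtype=float)
    block = arr[0:2, 0:2]             # assumed quadrant: rr=1, ri=4, ir=3, ii=7
    print(np.sqrt((block**2).sum()))  # sqrt(1+16+9+49) = 8.66...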
spike.NPKError module¶
untitled.py
Created by Marc-André on 2010-07-20. Copyright (c) 2010 IGBMC. All rights reserved.
spike.Orbitrap module¶
This file implements all the tools for handling Orbitrap data-sets.
To use it:

    import Orbitrap
    d = Orbitrap.OrbiData(...)   # several initialisations are possible: empty, from file
    # play with d

d gives access to all NPKData methods, plus a few specific ones.
Alternatively, use an importer:

    from File.(Importer_name) import Import_1D
    d = Import_1D("filename")

Created by Marc-André on 2014-09. Copyright (c) 2014 IGBMC. All rights reserved.
class spike.Orbitrap.OrbiAxis(itype=0, currentunit='points', size=1024, specwidth=1000000.0, offsetfreq=0.0, left_point=0.0, highmass=10000.0, calibA=0.0, calibB=100000000000000.0, calibC=0.0)[source]¶
Bases: spike.FTMS.FTMSAxis
hold information for one Orbitrap axis, used internally

class spike.Orbitrap.OrbiData(dim=1, shape=None, mode='memory', buffer=None, name=None, debug=0)[source]¶
Bases: spike.FTMS.FTMSData
subclass of FTMS.FTMSData, meant for handling Orbitrap data - doc to be written …
spike.Tests module¶
Tests.py
Created by Marc-André on 2010-07-20.
Runs tests on selected modules using the integrated unittests in the different SPIKE modules.
most default values can be overloaded with run time arguments
Example on a module:

    python -m spike.Tests -D DATA_test -t File.Apex
class spike.Tests.NPKTest(methodName='runTest')[source]¶
Bases: unittest.case.TestCase
overload of unittest.TestCase for default verbosity - Not Used -

spike.Tests.cleandir(verbose=True)[source]¶
checks the files in the DATA_dir directory and removes files created by previous tests

spike.Tests.do_Test()[source]¶
Performs all tests, then indicates whether they were successful. Gives the total time elapsed.
spike.dev_setup module¶
dev_setup.py
To be called any time a new version is rolled out !
Created by Marc-André on 2010-07-20.
spike.dev_setup.generate_file(fname)[source]¶
writes the version to the file fname, usually "version.py"; version.py is then imported at SPIKE initialization. No revision version is included.
spike.processing module¶
Processing.py
This program performs the processing of a 2D FT-ICR dataset.
First version by Marc-André on 2011-09-23.
class spike.processing.Proc_Parameters(configfile=None, verif=True)[source]¶
Bases: object
this class is a container for processing parameters
spike.processing.apod(d, size, axis=0)[source]¶
apply apodisation and change size
4 cases: 2D F1 or F2; 1D coming from F1 or F2
- 1D, or 2D in F2 (the default): apodisation is apod_sin(0.5)
- 2D in F1 (axis=1): apodisation is kaiser(5)
3 situations:
- size after > size before
- size after < size before
- size after == size before

spike.processing.comp_sizes(d0, zflist=None, szmlist=None, largest=8589934592, sizemin=1024, vignette=True)[source]¶
returns a list of data sizes, computed either
- zflist: from zerofilling indices, e.g. (1, 0, -1)
- szmlist: from multiplier pairs, e.g. (2, 2)
largest determines the largest dataset allowed
sizemin determines the minimum size when down-zerofilling
when vignette == True (the default), a minimum-size data-set (defined by sizemin) is appended to the list
spike.processing.do_proc_F1(dinp, doutp, parameter)[source]¶
scan all cols of dinp, apply proc() and store into doutp

spike.processing.do_proc_F1_demodu_modu(dinp, doutp, parameter)[source]¶
as do_proc_F1, but applies demodu and then complex modulus() at the end

spike.processing.do_proc_F1_modu(dinp, doutp, parameter)[source]¶
as do_proc_F1, but applies hypercomplex modulus() at the end

spike.processing.do_process2D(dinp, datatemp, doutp, parameter)[source]¶
apply the processing to an input 2D data set: dinp; the result is found in an output file: doutp
dinp and doutp should have been created beforehand; the size of doutp determines the processing; a temporary file is used if needed

spike.processing.downsample2D(data, outp, n1, n2, compress=False, compress_level=3.0)[source]¶
takes data (a 2D data-set) and generates a smaller dataset, downsampled by factors (n1, n2) on each axis; the returned data-set is n1*n2 times smaller
- does a filtered decimation along n2
- simply takes the mean along n1
- sets to zero all entries below 3*sigma if compress is True
** Not fully tested on non powers of 2 **
spike.processing.hmclear(d)[source]¶
given a 1D spectrum d, sets to zero all points between freq 0 and highmass; helps compression

spike.processing.intelliround(x)[source]¶
returns a number rounded to the nearest 'round' (easy to FT) integer

spike.processing.iterarg(dinp, rot, size, parameter)[source]¶
an iterator used by the processing to allow multiprocessing or MPI set-up

spike.processing.iterargF2(dinp, size, scan)[source]¶
an iterator used by the F2 processing to allow multiprocessing or MPI set-up

spike.processing.main(argv=None)[source]¶
Does the whole on-file processing. Syntax: processing.py [ configuration_file.mscf ]
If no argument is given, the standard file process.mscf is used.
spike.processing.pred_sizes(d0, szmult=(1, 1), sizemin=1024)[source]¶
given an input data set, determines the optimum sizes s1, s2 to process it with a size multiplier of szmult
szmult = (szm1, szm2), where szm1 is the multiplier for s1 and szm2 for s2
szmx = 1 : no change / 2 : size doubling / 0.5 : size halving
any strictly positive value is possible: 0.2, 0.33, 1.1, 2, 2.2, 5, etc.
however, axes can never get smaller than sizemin
returns (si1, si2, …) as the dataset dimensions
spike.processing.pred_sizes_zf(d0, zf=0, sizemin=1024)[source]¶
given an input data set, determines the optimum sizes s1, s2 to process it with a zerofilling of zf
zf = +n doubles the size n times along each axis
zf = -n halves the size n times along each axis
zf = 0 means no zerofilling
however, axes can never get smaller than sizemin
returns (si1, si2, …) as the dataset dimensions
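A hedged arithmetic sketch of the zerofilling rule (the input sizes are invented):

    # assuming d0 is a 2D data-set of size 2048 x 4096
    print(pred_sizes_zf(d0, zf=1))    # expected (4096, 8192) : one doubling per axis
    print(pred_sizes_zf(d0, zf=-1))   # expected (1024, 2048) : one halving, never below sizemin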
spike.processingPH module¶
Module contents¶
The Spike Package