spike.File package

Submodules

spike.File.Apex module

Utility to Handle Apex files

class spike.File.Apex.Apex_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_Import_2D()[source]

Test and time routine that imports a 2D from the MS-FTICR folder to the file given as second argument

test_Import_2D_keep_mem()[source]

Test and time routine that imports a 2D from the MS-FTICR folder, keeping it in memory

spike.File.Apex.Import_1D(inifolder, outfile='')[source]

Entry point to import 1D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.

spike.File.Apex.Import_2D(folder, outfile='', F1specwidth=None)[source]

Entry point to import 2D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.
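As an illustration, a minimal import session might look as follows (folder and file names are hypothetical):

    from spike.File import Apex

    # import a 1D acquisition, kept in memory only
    d1 = Apex.Import_1D("/data/my_experiment_1D.d")

    # import a 2D acquisition and write it to an HDF5 file at the same time
    d2 = Apex.Import_2D("/data/my_experiment_2D.d", outfile="my_experiment_2D.msh5")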

spike.File.Apex.get_param(param, names, values)[source]

From the parallel lists names and values, this function returns the value of the given param

spike.File.Apex.locate_acquisition(folder)[source]

From the given folder, this function returns the absolute path to the apexAcquisition.method file. It should always be in a subfolder.

spike.File.Apex.read_2D(sizeF1, sizeF2, filename='ser')[source]

Reads in an Apex 2D fid

sizeF1 is the number of fids, sizeF2 is the number of data-points in each fid; uses array

spike.File.Apex.read_3D(sizeF1, sizeF2, sizeF3, filename='ser')[source]

Draft function (work in progress)

Reads in an Apex 3D fid

uses array

spike.File.Apex.read_param(filename)[source]

Opens the given file and retrieves all parameters written initially for apexAcquisition.method. NC is written when no value is found.

structure : <param><name>C_MsmsE</name><value>0.0</value></param>

read_param returns values in a dictionary
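For illustration, the structure above can be parsed with a few lines of ElementTree; this is a sketch of the principle, not the module's actual code:

    import xml.etree.ElementTree as ET

    def parse_method(filename):
        "minimal parser for the <param><name>...</name><value>...</value></param> structure"
        params = {}
        for p in ET.parse(filename).getroot().iter("param"):
            name = p.findtext("name")
            value = p.findtext("value")
            params[name] = value if value is not None else "NC"   # NC when no value is found
        return params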

spike.File.Apex.read_scan(filename)[source]

Function that returns the number of scans that have been recorded. It is used to check whether the number of recorded points corresponds to the L_20 parameter.

spike.File.Apex.write_ser(bufferdata, filename='ser')[source]

Write a ser file from FTICRData

spike.File.Apex0 module

Utility to Handle old Apex files - “NMR style”

spike.File.Apex0.Import_1D(filename='fid', verbose=False)[source]

Imports a 1D Bruker fid as a FTICRData

spike.File.Apex0.read_param(filename='acqus')[source]

get the acqus file and return a dictionary

spike.File.BrukerMS module

Utility to import Bruker MS files

A wrapper around Solarix and Apex modules

Created by MAD on 03-2019.

Copyright (c) 2019 IGBMC. All rights reserved.

spike.File.BrukerMS.Import_1D(*arg, **kword)[source]

Entry point to import 1D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.

spike.File.BrukerMS.Import_2D(*arg, **kword)[source]

Entry point to import 2D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.

compression (compress=True) is efficient, but takes more time.
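A minimal usage sketch (paths hypothetical); the wrapper dispatches to the Apex or Solarix importer and passes the keywords through:

    from spike.File import BrukerMS

    d = BrukerMS.Import_2D("/data/FTICR_2D.d", outfile="FTICR_2D.msh5", compress=True)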

spike.File.BrukerNMR module

partly based on NPK v1 code

class spike.File.BrukerNMR.Bruker_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

TODO

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_import()[source]
spike.File.BrukerNMR.Export_proc(d, filename, template=None, verbose=False)[source]

Exports a 1D or a 2D NMRData to a Bruker 1r / 2rr file, using template as a template.

filename and template are procnos (datadir/my_experiment/expno/pdata/procno/); the files are created in the filename directory. A pdata/procno should already exist in template for templating.

If d contains metadata parameters from Bruker, they will be used; however, all files common to the filename and template expnos will not be updated.

If filename and template are exactly the same (or template is None), only the 1r / 2rr, proc, and procs files will be overwritten.

spike.File.BrukerNMR.FnMODE(acqu, proc)[source]

complex type along F1 for a 2D or 3D: searches FnMODE in acqu and, if absent, searches MC2 in proc. Values are:

0 : None / 1 : QF / 2 : QSEQ / 3 : TPPI / 4 : States / 5 : States-TPPI / 6 : Echo-AntiEcho

returns either 0 (real) or 1 (complex)
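A hedged usage sketch (directory layout hypothetical), combining FnMODE with the find_acqu2 / find_proc2 / read_param helpers documented below:

    from spike.File import BrukerNMR

    expdir = "/data/nmr/my_exp/2"
    acqu2 = BrukerNMR.read_param(BrukerNMR.find_acqu2(expdir), get_title=False)
    proc2 = BrukerNMR.read_param(BrukerNMR.find_proc2(expdir), get_title=False)
    itype_f1 = BrukerNMR.FnMODE(acqu2, proc2)   # 0 (real) or 1 (complex)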

spike.File.BrukerNMR.Import_1D(filename='fid', outfile=None, verbose=False)[source]

Imports a 1D Bruker fid as a NMRData

spike.File.BrukerNMR.Import_1D_proc(filename='1r', verbose=False)[source]

Imports a 1D Bruker 1r processed file as a NMRData. If 1i exists, imports the complex spectrum.

spike.File.BrukerNMR.Import_2D(filename='ser', outfile=None, verbose=False)[source]

Imports a 2D Bruker ser
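For illustration (paths hypothetical):

    from spike.File import BrukerNMR

    d1 = BrukerNMR.Import_1D("/data/nmr/my_exp/1/fid")
    d2 = BrukerNMR.Import_2D("/data/nmr/my_exp/2/ser")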

spike.File.BrukerNMR.Import_2D_proc(filename='2rr', outfile=None, verbose=False)[source]

Imports a 2D Bruker 2rr file. If 2ri, 2ir, 2ii files exist, imports the (hyper)complex spectrum.

spike.File.BrukerNMR.calibdosy(file='acqus')[source]

get the parameters from the acqus file and compute the calibration (dfactor)

returns : (BigDelta, littledelta, recovery, sequence, nucleus)

assumes that you are using the standard Bruker set-up, and have started the dosy acquisition with the dosy macro.

from calibdosy.g in Gifa 5.2006

spike.File.BrukerNMR.find_acqu(dir='.')[source]

find a Bruker acqu file associated to the directory dir and return its name

spike.File.BrukerNMR.find_acqu2(dir='.')[source]

find a Bruker acqu2 file associated to the directory dir and return its name

spike.File.BrukerNMR.find_acqu3(dir='.')[source]

find a Bruker acqu3 file associated to the directory dir and return its name

spike.File.BrukerNMR.find_acqu_proc_gene(dir, acqulist)[source]

find a Bruker acqu or proc file associated to the directory dir and return its name

spike.File.BrukerNMR.find_proc(dir='.', down=True)[source]

find a Bruker proc file associated to the directory dir and return its name; if down is True, search in dir/pdata/*, otherwise search in dir itself

spike.File.BrukerNMR.find_proc2(dir='.', down=True)[source]

find a Bruker proc file associated to the directory dir and return its name

spike.File.BrukerNMR.find_proc3(dir='.', down=True)[source]

find a Bruker proc file associated to the directory dir and return its name

spike.File.BrukerNMR.find_proc_down(dire, proclist)[source]

find a Bruker proc file associated to the directory dir and return its name

search in pdata/PROCNO and return the first one

spike.File.BrukerNMR.offset(acqu, proc)[source]

computes the offset from Bruker to spike

spike.File.BrukerNMR.read_1D(size, filename='fid', bytorda=1, dtypa=0, uses='struct')[source]

Reads in a Bruker 1D fid as a numpy float array

size is the number of data-points in the fid
uses either struct or numpy: numpy is non-standard but ~2x faster
dtypa = 0 => int4, dtypa = 2 => float8

does not check endianness
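The numpy fast path amounts to something like the following sketch (a hypothetical helper, not the module's code; bytorda = 1 is taken to mean big-endian):

    import numpy as np

    def read_fid_numpy(filename, size, dtypa=0, bytorda=1):
        "read a Bruker binary fid: int4 (dtypa=0) or float8 (dtypa=2)"
        dt = (">" if bytorda == 1 else "<") + ("i4" if dtypa == 0 else "f8")
        return np.fromfile(filename, dtype=dt, count=size).astype(float)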

spike.File.BrukerNMR.read_2D(sizeF1, sizeF2, filename='ser', bytorda=1, dtypa=0, uses='struct')[source]

Reads in a Bruker 2D fid as a numpy float array

sizeF1 is the number of fids, sizeF2 is the number of data-points in each fid

spike.File.BrukerNMR.read_param(filename='acqus', get_title=True)[source]

load a Bruker acqu or proc file as a dictionary

arrayed values are stored in a python array

comments (lines starting with $$) are stored in the special entry [comments]

get_title == False does not try to access the title file, thus allowing a stand-alone parameter file to be read.

M-A Delsuc jan 2006 oct 2006 : added support for array

spike.File.BrukerNMR.read_title(filename)[source]

infer and load title of imported experiment

spike.File.BrukerNMR.revoffset(loffset, acqu, proc)[source]

computes the Bruker OFFSET (ppm of the leftmost point) from the spike axis offset value (Hz of the rightmost point)

spike.File.BrukerNMR.write_file(bytordp, data, filename)[source]

data written as integers.

spike.File.BrukerNMR.write_param(param, filename)[source]

writes back an acqu/proc param file

spike.File.BrukerNMR.zerotime(acqu)[source]

get digital filter parameters, if any

The zerotime function computes the correction for the Bruker digital filter. The phase correction to apply is computed from the 3 parameters DSPFIRM, DSPFVS, and DECIM, as found in the acqus parameter file in XwinNMR.

The correction to apply is then -360*zerotime as a first-order phase correction.

DSPFVS is not used so far (oct 2006)
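For illustration, a hedged sketch of using zerotime (the final phase call is hypothetical and shown as a comment):

    from spike.File import BrukerNMR

    acqu = BrukerNMR.read_param("acqus", get_title=False)
    zt = BrukerNMR.zerotime(acqu)
    # after Fourier transform, apply the first-order correction:
    # d.phase(0.0, -360.0 * zt)   # hypothetical call on an imported dataset d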

spike.File.BrukerSMX module

Code to handle 2rr submatrix Bruker file format

used internally by BrukerNMR, not for final use

Original code from L. Chiron, adapted by M-A Delsuc

class spike.File.BrukerSMX.BrukerSMXHandler(addrproc)[source]

Bases: object

Reads/writes Bruker 2rr files

Writing not fully implemented

prepare_mat()[source]

sub_per_dim : [nb of submatrices in t1, nb of submatrices in t2]
self.si2 == self.proc['$SI'] : dimension 2 of the 2D data
self.si1 == self.proc2['$SI'] : dimension 1 of the 2D data
self.xd2 == self.acqu['$XDIM'] : size of a submatrix in F2
self.xd1 == self.acqu2['$XDIM'] : size of a submatrix in F1

read_file(filename)[source]

data read as integers.

read_smx()[source]

Reads the 2D "smx" (2rr 2ri 2ir 2ii) files and keeps them in self.data_2d_2xx; data are taken as integers.

reorder_bck_subm(data)[source]

Reorder a flat matrix back to submatrix (smx) Bruker data.
self.sub_per_dim : [nb of submatrices in t1, nb of submatrices in t2]
self.nsubs : total number of submatrices
self.proc['$SI'] : shape of the 2D data
self.acqu['$XDIM'] : size of a submatrix

reorder_subm(data)[source]

Reorder submatrix (smx) binary Bruker data to a flat matrix.
self.sub_per_dim : [nb of submatrices in t1, nb of submatrices in t2]
self.nsubs : total number of submatrices
self.proc['$SI'] : shape of the 2D data
self.acqu['$XDIM'] : size of a submatrix
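As an illustration of the submatrix layout described above, a simplified numpy version of the flattening step (not the class's actual code):

    import numpy as np

    def subm_to_flat(raw, si1, si2, xd1, xd2):
        "reorder a stream of (xd1, xd2) submatrices into a (si1, si2) matrix"
        n1, n2 = si1 // xd1, si2 // xd2          # submatrices per dimension
        tiles = raw.reshape(n1, n2, xd1, xd2)    # tiles are stored row by row
        return tiles.transpose(0, 2, 1, 3).reshape(si1, si2)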

write_file(data, filename)[source]

data written as integers.

write_smx()[source]

writes the prepared self.data_2d_2xx (x in {r,i} ) into 2xx files

the data should have been prepared elsewhere; missing data are not written

spike.File.GifaFile module

GifaFile.py

Created by Marc-André on 2010-03-17. Copyright (c) 2010 IGBMC. All rights reserved.

This module provides a simple access to NMR files in the Gifa format.

class spike.File.GifaFile.GifaFile(fname, access='r', debug=0)[source]

Bases: object

defines the interface for simple (read/write) access to Gifa v4 files; the standard methods are load() and save()

standard sequence to read is

    F = GifaFile(filename, "r")
    B = F.get_data()      # B is a NPKdata
    F.close()

or

    F = GifaFile(filename, "r")
    F.load()
    B = F.data            # B is a NPKdata
    F.close()

and to write

    F = GifaFile(filename, "w")
    F.set_data(B)         # where B is a NPKdata; do not use F.data = B
    F.save()
    F.close()

The file consists of a header (of size headersize) and data. The header is handled as a dictionary, self.header; the data is handled as a NPKdata, self.data

so the numpy ndarray is in self.data.buffer

property byte_order

for intel (little-endian)

close()[source]

closes the associated file

copyaxesfromheader(n_axis)[source]

get values for axis "n_axis" from the header, and creates and returns a new (NMRAxis) axis with these values. itype is not handled (not coded per axis in the header). Used internally.

copydiffaxesfromheader()[source]

get values for axis "n" from the header, and creates and returns a new (LaplaceAxis) axis with these values. Used internally.

property dim

dimensionality of the dataset: 1, 2 or 3

get_data()[source]

returns the NPKdata attached to the (read) file

property itype

Real/complex type of the dataset

in 1D : 0 : real / 1 : complex

in 2D : 0 : real on both / 1 : complex on F2 / 2 : complex on F1 / 3 : complex on both

in 3D : 0 : real on all / 1 : complex on F3 / 2 : complex on F2 / 3 : complex on F3-F2 / 4 : complex on F1 / 5 : complex on F1-F3 / 6 : complex on F1-F2 / 7 : complex on all
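The codes above form a bit field, one bit per axis with the fastest axis in the lowest bit; an illustrative helper (an inference from the table, not part of the class):

    def itype_bits(itype, dim):
        "for axes F1..Fdim, return True where the axis is complex"
        return [bool((itype >> (dim - i)) & 1) for i in range(1, dim + 1)]

    # itype_bits(3, 2) -> [True, True]          (2D, complex on both)
    # itype_bits(5, 3) -> [True, False, True]   (3D, complex on F1-F3)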

load()[source]

creates a NPKdata loaded with the file content

load_header()[source]

load the header from file and set up everything

property nblock1

number of data block on disk along F1 axis

property nblock2

number of data block on disk along F2 axis

property nblock3

number of data block on disk along F3 axis

read_header()[source]

return a dictionary of the file header. Internal use.

readc()[source]

read a file in Gifa format and return the binary buffer as a numpy array. Internal use - use load()

report()[source]

prints a little debugging report

save()[source]

save the NPKdata to the file

set_data(buff)[source]

sets the NPKdata attached to the (to be written) file

setup_header()[source]

setup file header, from self.data

property size1

size along the F1 axis (either 1D, or slowest varying axis in nD)

property size2

size along the F2 axis (fastest varying in 2D)

property size3

size along the F3 axis (fastest varying in 3D)

property szblock1

size of data block on disk along F1 axis

property szblock2

size of data block on disk along F2 axis

property szblock3

size of data block on disk along F3 axis

write_header()[source]

write the file header; setup_header() should have been called first

write_header_line(key)[source]

write the entry key into the header and return the number of bytes written. Internal use.

writec()[source]

write a file in Gifa format. Internal use - use save()

class spike.File.GifaFile.GifaFileTests(methodName='runTest')[source]

Bases: unittest.case.TestCase

  • Testing GifaFile on various 1D and 2D files -

announce()[source]
base()[source]

test basic function

test_read()[source]
  • testing read capacities -

test_write1D()[source]
  • test 1D write capacities -

test_write2D()[source]
  • testing 2D read/write capacities -

verbose = 1

spike.File.HDF5File module

HDF5File.py

Created by Marc-André Delsuc, Marie-Aude Coutouly on 2011-07-13.

API dealing with HDF5 files. For now it does not subclass tables (pytables); you have to use *.hf to access the full tables functionality

class spike.File.HDF5File.HDF5File(fname, access='r', info=None, nparray=None, fticrd=None, compress=False, debug=0, verbose=False)[source]

Bases: object

defines the interface for simple (read/write) access to HDF5 files; the standard methods are load() and save()

standard sequence to read is

    H = HDF5File(filename, "r")
    B = H.get_data()      # B is a FTICRData
    H.close()

or

    H = HDF5File(filename, "r")
    H.load()
    B = H.data            # B is a FTICRData
    H.close()

and to write

    H = HDF5File(filename, "w")
    H.set_data(B)         # where B is a FTICRData; do not use H.data = B
    H.save()
    H.close()

HDF5File has the capacity to store and retrieve complete files and python objects:

with

    lis = [any kind of list or tuple]   # works also with dicts and nested lists/dicts

then

    H.store_internal_object(lis, "name_of_storage")

will store the object, and

    lis_back = H.retrieve_object("name_of_storage")

will retrieve it

data are stored using JSON, so anything JSON-compatible will do

axes_update(group='resol1', axis=2, infos=None)[source]

routine called when you want to modify the information on a given axis. group is the group name (default is resol1); axis is the dimension we want to adjust; infos is a dictionary with all the fields we want to adjust

checkversion()[source]

check file version and exit if incompatible

close()[source]

Closes HDF5File

create_HDF5_info()[source]

Creates a HDF5 file, takes info as parameter

create_HDF5_nparray()[source]

Creates a HDF5 file, takes nparray as parameter

create_carray(where, name, data_type, shape, chunk=None)[source]

Create a CArray in the given hf_file

create_from_template(data, group='resol1')[source]

Takes params from the empty FTICRData and puts all the information in the HDF5File; creates an empty dataset and attaches it to data.buffer. Data is created in group, with default value 'resol1'

create_generic(owner=None)[source]

A table is created with all generic information about the file: owner, method, HDF5 Release, CreationDate, Last modification

create_group(where, name)[source]

Create a group in the given hf_file

create_table(where, name, description)[source]

Create a Table in the given hf_file at the given position with the right description

create_tables()[source]

Creates the different tables needed in a HDF5File/FTICR

determine_chunkshape(sizeF1=None, sizeF2=None)[source]

Determine a good chunkshape according to the size of each axis

fill_table(table, infos)[source]

Fill in the given table. Axis is the dimension we are processing

flush()[source]

flushes all nodes but does not close

get_data(group='resol1', mode='onfile')[source]

loads and returns the FTICRData attached to the file; same parameters as load()

get_file_infos()[source]

Read the generic_table and return the information

get_info()[source]

Retrieve info from self.nparray

load(group='resol1', mode='onfile')[source]

loads the data and sets self.data to a FTICRData

group defines which group is loaded (default is resol1)

mode defines how the data is accessed:

"onfile" (default) : the data is kept on file and loaded only on demand; the capability of modifying the data is determined by the way the file was opened - the data cannot be modified unless the file was opened with access='w' or 'rw'

"memory" : the data is copied to a memory buffer and can be freely modified; warning - this may saturate the computer memory, there is no control

if you want to load data into memory after having opened in "onfile" mode, do the following:

    h.load(mode="onfile")
    b = h.data.buffer[...]   # data are copied into a new memory buffer b, using ellipsis syntax
    h.data.buffer = b        # b is now used as the data buffer

open_internal_file(h5name, access='r', where='/attached')[source]

opens a node called h5name in the file, which can be accessed as a file. Returns a file stream which can be used as a classical file.

access is either

'r' : for reading an existing node
'w' : create a node for writing into it
'a' : for appending in an existing node

the file is stored in an h5 group called h5name

e.g.

    F = h5.open_internal_file('myfile.txt', 'w', where='/files')
    # creates a node called '/files/myfile.txt' (node 'myfile.txt' in the group '/files')
    F.writelines(text)   # write some text into it
    F.close()

    # then, later on
    F = h5.open_internal_file('myfile.txt', 'r', where='/files')
    textback = F.read()
    F.close()

This is used to add parameter files, audit_trail, etc… to spike/hdf5 files

it is based on the filenode module from pytables

position_array(group='resol1')[source]

Fills the HDF5 file with the given buffer; the HDF5 file is created with the given numpy array and the corresponding tables

retrieve_internal_file(h5name, where='/attached')[source]

returns the content of an internal file stored with store_internal_file() or written directly

retrieve_object(h5name, where='/', access='r')[source]

retrieve a python object stored with store_internal_object()

save(ser_file, group='resol1')[source]

save the ser_file to the HDF5 file

save_fticrd()[source]

save the FTICRData to the H5F file

set_compression(On=False)[source]

sets Carray HDF5 file compression to zlib if On is True; to none otherwise

set_data(data, group='resol1')[source]

Takes the ser_file and the params and puts all the information in the HDF5File

set_data_from_fticrd(buff, group='resol1')[source]

sets the FTICRData attached to the (to be written) file

store_internal_file(filename, h5name=None, where='/attached')[source]
Store a (text) file into the hdf5 file

filename: name of the file to be copied
h5name: its internal name (more limitations than in regular filesystems); defaults to os.path.basename(filename)
where: group where the file is copied into the hdf5 file

the file content can then be retrieved using open_internal_file(h5name, 'r')

store_internal_object(obj, h5name, where='/')[source]

store a python object into the hdf5 file; objects are then retrieved with retrieve_object()

uses JSON to serialize obj, so it works only on values, lists, dictionaries, etc… but not on functions or methods

table_update(group='resol1', axis=2, key='highmass', value=4000.0)[source]

Makes a small modification (a single entry) in the wanted table

class spike.File.HDF5File.HDF5_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
classmethod setUpClass()[source]

This one is called before running all the tests

test_axes_update()[source]

Test routine that overrides the parameters of an axis

test_create_from_fticr()[source]

Test routine that creates a HDF5 file according to a given FTICRData

test_filenodes()[source]

Test routines that work with filenodes

test_get_data()[source]

Test routine that opens a HDF5 file for reading, gets the headers and the buffer

test_nparray_to_fticr()[source]

Test routine that creates a HDF5 file according to a given nparray and returns the buffer (FTICRData)

spike.File.HDF5File.determine_chunkshape(size1, size2)[source]

returns the optimum chunk size for a dataset of size (size1, size2) and updates the cache size to accommodate the dataset

spike.File.HDF5File.nparray_to_fticrd(name, nparray)[source]
spike.File.HDF5File.syntax(prgm)[source]
spike.File.HDF5File.up0p6_to_0p7(fname, debug=1)[source]

Function that deals with changing HDF5 files created with file_version 0.6 to be read with 0.7

spike.File.HDF5File.up0p7_to_0p8(fname, debug=1)[source]

Function that deals with changing HDF5 files created with file_version 0.7 to be read with 0.8

spike.File.HDF5File.up0p8_to_0p9(fname, debug=1)[source]

Function that deals with changing HDF5 files created with file_version 0.8 to be read with the 0.9 lib

spike.File.HDF5File.update(fname, debug=1)[source]

updates the file so that it is up to date with the current file_version

spike.File.Solarix module

Solarix.py

Utility to Handle Solarix files

Created by mac on 2013-05-24. Updated May 2017 to python 3, added the compress option.

Copyright (c) 2013 __NMRTEC__. All rights reserved.

spike.File.Solarix.Import_1D(inifolder, outfile='', compress=False)[source]

Entry point to import 1D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.

spike.File.Solarix.Import_2D(folder, outfile='', F1specwidth=None, compress=False)[source]

Entry point to import 2D spectra. It returns a FTICRData and writes a HDF5 file if an outfile is mentioned.

compression (compress=True) is efficient, but takes a lot of time.

class spike.File.Solarix.Solarix_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

spike.File.Solarix.get_param(param, names, values)[source]

From the parallel lists names and values, this function returns the value of the given param

spike.File.Solarix.locate_ExciteSweep(folder)[source]

From the given folder, this function returns the absolute path to the ExciteSweep file. It should always be in a subfolder.

spike.File.Solarix.locate_acquisition(folder)[source]

From the given folder, this function returns the absolute path to the apexAcquisition.method file. It should always be in a subfolder.

spike.File.Solarix.read_2D(sizeF1, sizeF2, filename='ser')[source]

Reads in a Solarix 2D fid

sizeF1 is the number of fids, sizeF2 is the number of data-points in each fid; uses array

spike.File.Solarix.read_3D(sizeF1, sizeF2, sizeF3, filename='ser')[source]

Draft function (work in progress)

Reads in an Apex 3D fid

uses array

spike.File.Solarix.read_ExciteSweep(filename)[source]

Function that returns the lower and upper frequencies of the pulse generator

spike.File.Solarix.read_param(parfilename)[source]

Opens the given file and retrieves all parameters from apexAcquisition.method. NC is written when no value is found.

structure : <param name = "AMS_ActiveExclusion"><value>0</value></param>

read_param returns values in a dictionary

spike.File.Solarix.read_scan(filename)[source]

Function that returns the number of scans that have been recorded. It is used to check whether the number of recorded points corresponds to the L_20 parameter.

spike.File.Solarix.write_ser(bufferdata, filename='ser')[source]

Write a ser file from FTICRData

spike.File.Spinit module

Utility to Handle NMR Spinit files

spike.File.Spinit.Export_1D(d, filename='data.dat', template='header.xml', kind=None)[source]

export a 1D NMRData as a spinit file. kind: 1DFID or 1DSPEC

spike.File.Spinit.Export_2D(d, filename='data.dat', template='header.xml', kind=None, debug=0)[source]

export a 2D NMRData as a spinit file

spike.File.Spinit.Import_1D(filename='data.dat')[source]

Imports a 1D spinit fid as a NMRData

spike.File.Spinit.Import_2D(filename='data.dat')[source]

Imports a 2D spinit fid as a NMRData

spike.File.Spinit.add_data_representation(headertree, value)[source]

add an entry to a headertree loaded with load_header(); value is a list of strings

spike.File.Spinit.add_state(headertree, value)[source]

add an entry to a headertree loaded with load_header(); value is a list of strings

spike.File.Spinit.data_representation(val)[source]

Produces the entry block for DATA_REPRESENTATION from the given val (type is list).

spike.File.Spinit.ftF1_spinit(data, debug=0)[source]

Spinit plugin for performing the FT along the F1 axis according to the kind of acquisition. Cases taken into account: TPPI, COMPLEX, PHASE_MODU, COMPLEX_TPPI, ECHO_ANTIECHO

spike.File.Spinit.get_acquisition_mode(acqu)[source]

Retrieves the acquisition mode list from the header file. known values: REAL, COMPLEX, TPPI, COMPLEX_TPPI, PHASE_MODULATION, ECHO_ANTIECHO

spike.File.Spinit.get_data_representation(acqu)[source]

Retrieves the data representation list from the header file. known values: REAL, COMPLEX

spike.File.Spinit.has_param(headertree, key)[source]

Checks if the parameter (key) exists

spike.File.Spinit.load_header(filename='header.xml')[source]

loads a header.xml as an ET tree, and keeps it in memory

returns headertree; xml parameters are found in headertree.getroot()

spike.File.Spinit.modify_val(headertree, key, values)[source]

modify an entry of a headertree loaded with load_header(); key should be present in headertree. values is either a single value or a list, depending on the key type, and lengths should match. key and values are both strings.

the headertree can then be written back to disk with

    headertree.write('header.xml')
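For illustration, a hedged round-trip (the parameter name is hypothetical):

    from spike.File import Spinit

    headertree = Spinit.load_header("header.xml")
    if Spinit.has_param(headertree, "SAMPLE_NAME"):       # hypothetical key
        Spinit.modify_val(headertree, "SAMPLE_NAME", "my_sample")
    headertree.write("header.xml")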

spike.File.Spinit.offset(acqu, proc)[source]

Computes the offset from spinit to spike

spike.File.Spinit.read_1D(size, filename='data.dat', debug=0)[source]

Reads in a Spinit 1D fid as a numpy float array

size is the number of data-points in the fid
uses struct
does not check endianness

spike.File.Spinit.read_2D(sizeF1, sizeF2, filename='data.dat')[source]

Reads the 2D files and returns a buffer

spike.File.Spinit.read_param(filename='header.xml')[source]

loads a header.xml as a dictionary of key:value pairs

spike.File.Spinit.read_serie(filename)[source]

loads a Serie.xml as a dictionary of key:value pairs

keeps only first-level strings

spike.File.Spinit.state(val)[source]

Produces the entry block for STATE from the given list val.

spike.File.Spinit.zerotime(acqu)[source]

get digital filter parameters, if any

spike.File.Thermo module

Utility to Handle Thermofisher files

Marc-André from first draft by Lionel

spike.File.Thermo.Import_1D(filename)[source]

Entry point to import 1D spectra. It returns an Orbitrap dataset.

class spike.File.Thermo.Thermo_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

TODO

announce()[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

spike.File.Thermo.read_data(F, typ='float')[source]

given F, an opened file, reads the data values

spike.File.Thermo.read_param(F)[source]

given F, an opened file, retrieves all parameters found in the file header

read_param returns values in a plain dictionary

spike.File.Thermo.read_thermo(filename)[source]

reads a thermofisher orbitrap file

spike.File.csv module

Utility to import and export data in text and csv files

all functions compress transparently if the filenames end with .gz

Marc-André, adapted from code by Lionel

spike.File.csv.Import_1D(filename, column=0, delimiter=',')[source]

import a 1D file stored as csv; the header is in comments (#), parameters are in pseudo-comments:

#$key value

then one value per line; column and delimiter as in load()
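For illustration, a file accepted by Import_1D could look like this (keys and values are made up):

    # my 1D dataset
    #$specwidth 10000.0
    #$frequency 600.13
    0.123
    0.456
    ...

and be read with

    from spike.File import csv as spike_csv   # avoid shadowing the stdlib csv module
    d = spike_csv.Import_1D("data.csv.gz")    # .gz files are decompressed transparently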

class spike.File.csv.csvTests(methodName='runTest')[source]

Bases: unittest.case.TestCase

  • Testing NPKData basic behaviour -

test_csv()[source]
spike.File.csv.do_open(filename, flag)[source]

opens regular and gz files

spike.File.csv.load(filename, column=0, delimiter=',')[source]

load 1D data from a txt or csv file; attributes are in pseudo-comments starting with #$. Values are in columns, separated by delimiter - only the column given in arg will be loaded. column = 0 is fine for text files, column = 1 for csv files with currentunit. Returns a numpy buffer and an attribute dictionary.

spike.File.csv.save(data, filename, delimiter=',', fmt='%.18E')[source]

save 1D data in txt, single column, no unit - with attributes as pseudo comments

spike.File.csv.save_unit(data, filename, delimiter=',', fmt='%.18E')[source]

save 1D data in csv, in 2 columns, with attributes as pseudo comments

spike.File.mzXML module

class spike.File.mzXML.mzXML(filename)[source]

Bases: object

Draft for reading mzXML files

get_params()[source]

Extract parameters for limits and size

np2txt(nump)[source]

from numpy to mzXML: float32, big-endian. Returns text format.

save_mzXML(x, y, namefile_final)[source]

Save data in the new mzXML file.

scattr(scan, param, kind=None)[source]

Scan attributes; kind specifies whether we want integer, float, etc.

txt2np(txt)[source]

from mzXML to numpy: float32, big-endian. Returns numpy format.
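The two conversions amount to a base64 round-trip over big-endian float32 values, sketched here under that assumption (a generic sketch, not the class's actual methods):

    import base64
    import numpy as np

    def np2txt(nump):
        "numpy array -> base64 text (float32, big-endian)"
        return base64.b64encode(nump.astype(">f4").tobytes()).decode("ascii")

    def txt2np(txt):
        "base64 text -> numpy array (float32, big-endian)"
        return np.frombuffer(base64.b64decode(txt), dtype=">f4")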

class spike.File.mzXML.mzXML_Tests(methodName='runTest')[source]

Bases: unittest.case.TestCase

test_simple()[source]

Module contents

File Utilities

Created by Marc-André on 2011-03-20. Copyright (c) 2011 IGBMC. All rights reserved.