spike.File package¶
Submodules¶
spike.File.Apex module¶
Utility to Handle Apex files
class spike.File.Apex.Apex_Tests(methodName='runTest')[source]¶
    Bases: unittest.case.TestCase

spike.File.Apex.Import_1D(inifolder, outfile='')[source]¶
    Entry point to import 1D spectra. Returns an FTICRData and writes an HDF5 file if an outfile is given.

spike.File.Apex.Import_2D(folder, outfile='', F1specwidth=None)[source]¶
    Entry point to import 2D spectra. Returns an FTICRData and writes an HDF5 file if an outfile is given.
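
A minimal usage sketch (folder and output names are hypothetical; Import_1D and Import_2D are the entry points documented above):

    from spike.File.Apex import Import_1D, Import_2D

    d1 = Import_1D('my_1D_experiment.d')                        # returns an FTICRData
    d2 = Import_2D('my_2D_experiment.d', outfile='my_2D.msh5')  # also writes an HDF5 file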
spike.File.Apex.get_param(param, names, values)[source]¶
    From params, returns the value of the given param.

spike.File.Apex.locate_acquisition(folder)[source]¶
    From the given folder, returns the absolute path to the apexAcquisition.method file. It should always be in a subfolder.

spike.File.Apex.read_2D(sizeF1, sizeF2, filename='ser')[source]¶
    Reads in an Apex 2D fid.
    sizeF1 is the number of fids, sizeF2 is the number of data points per fid. Uses array.

spike.File.Apex.read_3D(sizeF1, sizeF2, sizeF3, filename='ser')[source]¶
    Draft function.
    Reads in an Apex 3D fid. Uses array.

spike.File.Apex.read_param(filename)[source]¶
    Opens the given file and retrieves all parameters (written initially for apexAcquisition.method). NC is written when no value is found.
    structure: <param><name>C_MsmsE</name><value>0.0</value></param>
    read_param returns values in a dictionary.
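
An illustrative way to parse the documented structure (not the module's own code; the file path is hypothetical):

    import xml.etree.ElementTree as ET

    tree = ET.parse('apexAcquisition.method')
    params = {}
    for p in tree.getroot().iter('param'):
        name = p.findtext('name')
        value = p.findtext('value')
        params[name] = value if value is not None else 'NC'   # NC when no value is found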
spike.File.Apex0 module¶
Utility to Handle old Apex files - “NMR style”
spike.File.BrukerMS module¶
Utility to import Bruker MS files
A wrapper around Solarix and Apex modules
Created by MAD on 03-2019.
Copyright (c) 2019 IGBMC. All rights reserved.
spike.File.BrukerNMR module¶
partly based on NPK v1 code
class spike.File.BrukerNMR.Bruker_Tests(methodName='runTest')[source]¶
    Bases: unittest.case.TestCase
    TO DO
spike.File.BrukerNMR.Export_proc(d, filename, template=None, verbose=False)[source]¶
    Exports a 1D or a 2D NMRData to a Bruker 1r / 2rr file, using template as a template.
    filename and template are procnos (datadir/my_experiment/expno/pdata/procno/); the files are created in the filename directory. A pdata/procno should already exist in template for templating.
    If d contains metadata parameters from Bruker, they will be used; however, files common to the filename and template expnos will not be updated.
    If filename and template are exactly the same (or template is None), only the 1r / 2rr, proc and procs files will be overwritten.
spike.File.BrukerNMR.FnMODE(acqu, proc)[source]¶
    Determines the complex type along F1 for a 2D or 3D. Searches FnMODE in acqu and, if absent, searches MC2 in proc.
    Values: None 0, QF 1, QSEQ 2, TPPI 3, States 4, States-TPPI 5, Echo-AntiEcho 6.
    Returns either 0 (real) or 1 (complex).
spike.File.BrukerNMR.Import_1D(filename='fid', outfile=None, verbose=False)[source]¶
    Imports a 1D Bruker fid as an NMRData.

spike.File.BrukerNMR.Import_1D_proc(filename='1r', verbose=False)[source]¶
    Imports a 1D Bruker 1r processed file as an NMRData. If 1i exists, imports the complex spectrum.

spike.File.BrukerNMR.Import_2D(filename='ser', outfile=None, verbose=False)[source]¶
    Imports a 2D Bruker ser file.
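
A minimal usage sketch (paths are hypothetical; the size1/size2 attribute names are assumed from the NMRData API):

    from spike.File.BrukerNMR import Import_1D, Import_2D

    d1 = Import_1D('my_expno/fid')      # 1D NMRData
    d2 = Import_2D('my_expno/ser')      # 2D NMRData
    print(d1.size1, d2.size1, d2.size2)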
spike.File.BrukerNMR.Import_2D_proc(filename='2rr', outfile=None, verbose=False)[source]¶
    Imports a 2D Bruker 2rr file. If 2ri, 2ir, 2ii files exist, imports the (hyper)complex spectrum.

spike.File.BrukerNMR.calibdosy(file='acqus')[source]¶
    Gets the parameters from the acqus file and computes the calibration (dfactor).
    returns: (BigDelta, littledelta, recovery, sequence, nucleus)
    Assumes that you are using the standard Bruker set-up and have started the DOSY acquisition with the dosy macro.
    From calibdosy.g in Gifa 5.2006.
spike.File.BrukerNMR.find_acqu(dir='.')[source]¶
    Finds a Bruker acqu file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_acqu2(dir='.')[source]¶
    Finds a Bruker acqu2 file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_acqu3(dir='.')[source]¶
    Finds a Bruker acqu3 file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_acqu_proc_gene(dir, acqulist)[source]¶
    Finds a Bruker acqu or proc file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_proc(dir='.', down=True)[source]¶
    Finds a Bruker proc file associated to the directory dir and returns its name. If down is True, searches in dir/pdata/*; otherwise searches in dir itself.

spike.File.BrukerNMR.find_proc2(dir='.', down=True)[source]¶
    Finds a Bruker proc file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_proc3(dir='.', down=True)[source]¶
    Finds a Bruker proc file associated to the directory dir and returns its name.

spike.File.BrukerNMR.find_proc_down(dire, proclist)[source]¶
    Finds a Bruker proc file associated to the directory dire and returns its name. Searches in pdata/PROCNO and returns the first one found.
spike.File.BrukerNMR.read_1D(size, filename='fid', bytorda=1, dtypa=0, uses='struct')[source]¶
    Reads in a Bruker 1D fid as a numpy float array.
    size is the number of data points in the fid. uses is either 'struct' or 'numpy'; numpy is non-standard but ~2x faster.
    dtypa = 0 => int4, dtypa = 2 => float8.
    Does not check endianness.
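
For illustration only (not the module's code): the raw fid is a flat binary block of 4-byte integers (dtypa=0) or 8-byte floats (dtypa=2), in the byte order given by bytorda. A numpy sketch, assuming a big-endian int32 fid at a hypothetical path:

    import numpy as np

    raw = np.fromfile('my_expno/fid', dtype='>i4')   # '>i4' = big-endian 4-byte integers
    fid = raw.astype(float)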
spike.File.BrukerNMR.read_2D(sizeF1, sizeF2, filename='ser', bytorda=1, dtypa=0, uses='struct')[source]¶
    Reads in a Bruker 2D fid as a numpy float array.
    sizeF1 is the number of fids, sizeF2 is the number of data points per fid.
spike.File.BrukerNMR.read_param(filename='acqus', get_title=True)[source]¶
    Loads a Bruker acqu or proc file as a dictionary.
    Arrayed values are stored in a python array.
    Comments (lines starting with $$) are stored in the special entry [comments].
    get_title == False does not try to access the title file, thus allowing one to read a stand-alone parameter file.
    M-A Delsuc, jan 2006; oct 2006: added support for arrays.
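
A minimal usage sketch (the path is hypothetical and the parameter names are illustrative; keys keep their leading '$', as in the BrukerSMX docstrings below):

    from spike.File.BrukerNMR import read_param

    params = read_param('my_expno/acqus')
    print(params['$SW_h'], params['$TD'])   # e.g. spectral width and number of points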
spike.File.BrukerNMR.revoffset(loffset, acqu, proc)[source]¶
    Computes the Bruker OFFSET (ppm of the leftmost point) from the spike axis offset value (Hz of the rightmost point).

spike.File.BrukerNMR.zerotime(acqu)[source]¶
    Gets the digital filter parameters, if any.
    The zerotime function computes the correction for the Bruker digital filter. The phase correction to apply is computed from the 3 parameters DSPFIRM, DSPFVS and DECIM, as found in the acqus parameter file in XwinNMR.
    The correction is then -360*zerotime as a first-order phase correction.
    dspfvs is not used so far (oct 2006).
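
A minimal sketch of computing that first-order term (the path is hypothetical; applying the phase is left to the usual NMRData processing calls):

    from spike.File.BrukerNMR import read_param, zerotime

    acqu = read_param('my_expno/acqus')
    ph1 = -360.0 * zerotime(acqu)    # first-order phase correction, in degrees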
spike.File.BrukerSMX module¶
Code to handle 2rr submatrix Bruker file format
used internally by BrukerNMR, not for final use
Original code from L. Chiron, adapted by M-A Delsuc
class spike.File.BrukerSMX.BrukerSMXHandler(addrproc)[source]¶
    Bases: object
    Reads/writes Bruker 2rr files.
    Writing is not fully implemented.

prepare_mat()[source]¶
    sub_per_dim: [nb of submatrices in t1, nb of submatrices in t2]
    self.si2 == self.proc['$SI']: dimension 2 of the 2D data.
    self.si1 == self.proc2['$SI']: dimension 1 of the 2D data.
    self.xd2 == self.acqu['$XDIM']: size of a submatrix in F2.
    self.xd1 == self.acqu2['$XDIM']: size of a submatrix in F1.

read_smx()[source]¶
    Reads the 2D "smx" (2rr 2ri 2ir 2ii) files and keeps them in self.data_2d_2xx; data are taken as integers.

reorder_bck_subm(data)[source]¶
    Reorders a flat matrix back to submatrix (smx) Bruker data.
    self.sub_per_dim: [nb of submatrices in t1, nb of submatrices in t2]
    self.nsubs: total number of submatrices
    self.proc['$SI']: shape of the 2D data.
    self.acqu['$XDIM']: size of a submatrix.
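
For illustration, a submatrix-ordered buffer can be handled with a numpy reshape/transpose. This is a sketch of the inverse operation (flat submatrix-ordered buffer to full matrix), not the module's own code, assuming si1 and si2 are multiples of xd1 and xd2:

    import numpy as np

    def subm_to_full(flat, si1, si2, xd1, xd2):
        """reorder a flat, submatrix-ordered buffer into a (si1, si2) matrix"""
        return flat.reshape(si1 // xd1, si2 // xd2, xd1, xd2) \
                   .transpose(0, 2, 1, 3) \
                   .reshape(si1, si2)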
spike.File.GifaFile module¶
GifaFile.py
Created by Marc-André on 2010-03-17. Copyright (c) 2010 IGBMC. All rights reserved.
This module provides simple access to NMR files in the Gifa format.
class spike.File.GifaFile.GifaFile(fname, access='r', debug=0)[source]¶
    Bases: object
    Defines the interface to simply access (read/write) Gifa v4 files. Standard methods are load() and save().
    The standard sequence to read is:
        F = GifaFile(filename, "r")
        B = F.get_data()       # B is a NPKdata
        F.close()
    or:
        F = GifaFile(filename, "r")
        F.load()
        B = F.data             # B is a NPKdata
        F.close()
    and to write:
        F = GifaFile(filename, "w")
        F.set_data(B)          # where B is a NPKdata; do not use F.data = B
        F.save()
        F.close()
    The file consists of a header (of size headersize) and data. The header is handled as a dictionary in self.header; data is handled as a NPKdata in self.data, so numpy ndarrays are in self.data.buffer.
property byte_order¶
    for Intel (little-endian)

copyaxesfromheader(n_axis)[source]¶
    Gets the values for axis "n_axis" from the header, and creates and returns a new (NMRAxis) axis with these values. itype is not handled (not coded per axis in the header). Used internally.

copydiffaxesfromheader()[source]¶
    Gets the values for axis "n" from the header, and creates and returns a new (LaplaceAxis) axis with these values. Used internally.
property dim¶
    dimensionality of the dataset: 1, 2 or 3

property itype¶
    Real/complex type of the dataset.
    in 1D: 0: real  1: complex
    in 2D: 0: real on both  1: complex on F2  2: complex on F1  3: complex on both
    in 3D: 0: real on all  1: complex on F3  2: complex on F2  3: complex on F3-F2  4: complex on F1  5: complex on F1-F3  6: complex on F1-F2  7: complex on all
property nblock1¶
    number of data blocks on disk along the F1 axis

property nblock2¶
    number of data blocks on disk along the F2 axis

property nblock3¶
    number of data blocks on disk along the F3 axis

readc()[source]¶
    Reads a file in Gifa format and returns the binary buffer as a numpy array. Internal use - use load().
property size1¶
    size along the F1 axis (either 1D, or the slowest varying axis in nD)

property size2¶
    size along the F2 axis (fastest varying in 2D)

property size3¶
    size along the F3 axis (fastest varying in 3D)

property szblock1¶
    size of data blocks on disk along the F1 axis

property szblock2¶
    size of data blocks on disk along the F2 axis

property szblock3¶
    size of data blocks on disk along the F3 axis
spike.File.HDF5File module¶
HDF5File.py
Created by Marc-André Delsuc, Marie-Aude Coutouly on 2011-07-13.
API dealing with HDF5 files. For now it does not subclass tables; you have to use *.hf. to access the full tables functionality.
class spike.File.HDF5File.HDF5File(fname, access='r', info=None, nparray=None, fticrd=None, compress=False, debug=0, verbose=False)[source]¶
    Bases: object
    Defines the interface to simply access (read/write) HDF5 files. Standard methods are load() and save().
    The standard sequence to read is:
        H = HDF5File(filename, "r")
        B = H.get_data()       # B is a FTICRData
        H.close()
    or:
        H = HDF5File(filename, "r")
        H.load()
        B = H.data             # B is a FTICRData
        H.close()
    and to write:
        H = HDF5File(filename, "w")
        H.set_data(B)          # where B is a FTICRData; do not use H.data = B
        H.save()
        H.close()
    HDF5File also has the capacity to store and retrieve complete files and python objects: with
        lis = [any kind of list or tuple]   # works also with dicts and nested lists and dicts
    then
        H.store_internal_object(lis, "name_of_storage")
    will store the object, and
        lis_back = H.retrieve_object("name_of_storage")
    will retrieve it.
    Data are stored using JSON, so anything JSON-compatible will do.
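
A minimal, self-contained sketch of that mechanism (the file name and access mode are hypothetical):

    from spike.File.HDF5File import HDF5File

    H = HDF5File('my_data.msh5', 'rw')
    lis = {'operator': 'me', 'runs': [1, 2, 3]}        # any JSON-compatible object
    H.store_internal_object(lis, 'name_of_storage')
    lis_back = H.retrieve_object('name_of_storage')
    H.close()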
axes_update(group='resol1', axis=2, infos=None)[source]¶
    Routine called when you want to modify the information on a given axis. group is the group name (default is resol1), axis is the dimension to adjust, infos is a dictionary with all the fields to adjust.

create_carray(where, name, data_type, shape, chunk=None)[source]¶
    Creates a CArray in the given hf_file.

create_from_template(data, group='resol1')[source]¶
    Takes the params from the empty FTICR data and puts all the information in the HDF5File. Creates an empty data set and attaches it to data.buffer. data is created in group, with default value 'resol1'.

create_generic(owner=None)[source]¶
    A table is created with all generic information about the file: owner, method, HDF5 release, creation date, last modification.

create_table(where, name, description)[source]¶
    Creates a Table in the given hf_file at the given position, with the right description.
determine_chunkshape(sizeF1=None, sizeF2=None)[source]¶
    Determines a good chunkshape according to the size of each axis.

get_data(group='resol1', mode='onfile')[source]¶
    Loads and returns the FTICRData attached to the self file. Same parameters as load().
load(group='resol1', mode='onfile')[source]¶
    Loads the data into memory and sets self.data as a FTICRData.
    group defines which group is loaded (default is resol1); mode defines how it is loaded in memory:
    - "onfile" (default): the data is kept on file and loaded only on demand.
      The capability of modifying the data is determined by the way the file was opened; the data cannot be modified unless the file was opened with access='w' or 'rw'.
    - "memory": the data is copied to a memory buffer and can be freely modified.
      Warning - this may saturate the computer memory, there is no control.
    If you want to load data into memory after having opened the file in "onfile" mode, do the following:
        h.load(mode="onfile")
        b = h.data.buffer[...]    # data are now copied into a new memory buffer b, using ellipsis syntax
        h.data.buffer = b         # and b is used as the data buffer
open_internal_file(h5name, access='r', where='/attached')[source]¶
    Opens a node called h5name in the file, which can be accessed as a file. Returns a file stream which can be used as a classical file.
    access is either:
        'r': for reading an existing node
        'w': create a node for writing into it
        'a': for appending to an existing node
    The file is stored in a h5 group called h5name.
    e.g.:
        F = h5.open_internal_file('myfile.txt', 'w', where='/files')   # create a node called '/files/myfile.txt' (node 'myfile.txt' in the group '/files')
        F.writelines(text)                                             # write some text into it
        F.close()
        # then, later on
        F = h5.open_internal_file('myfile.txt', 'r', where='/files')
        textback = F.read()
        F.close()
    This is used to add parameter files, audit_trail, etc. to spike/hdf5 files.
    It is based on the filenode module from pytables.
position_array(group='resol1')[source]¶
    Fills in the HDF5 file with the given buffer; the HDF5 file is created with the given numpy array and the corresponding tables.

retrieve_internal_file(h5name, where='/attached')[source]¶
    Returns the content of an internal file stored with store_internal_file() or written directly.

retrieve_object(h5name, where='/', access='r')[source]¶
    Retrieves a python object stored with store_internal_object().
set_compression(On=False)[source]¶
    Sets CArray HDF5 file compression to zlib if On is True; to none otherwise.

set_data(data, group='resol1')[source]¶
    Takes the ser_file and the params and puts all the information in the HDF5File.

set_data_from_fticrd(buff, group='resol1')[source]¶
    Sets the FTICRData attached to the (to be written) file.

store_internal_file(filename, h5name=None, where='/attached')[source]¶
    Stores a (text) file into the hdf5 file.
    filename: name of the file to be copied.
    h5name: its internal name (more limitations than in regular filesystems); copied from os.path.basename(filename) by default.
    where: group where the file is copied into the hdf5 file.
    The file content will be retrieved using open_internal_file(h5name, 'r').
class spike.File.HDF5File.HDF5_Tests(methodName='runTest')[source]¶
    Bases: unittest.case.TestCase

test_create_from_fticr()[source]¶
    Test routine that creates a HDF5 file according to a given FTICRData.
spike.File.HDF5File.determine_chunkshape(size1, size2)[source]¶
    Returns the optimum chunk size for a dataset of size (size1, size2), and updates the cache size to accommodate the dataset.
spike.File.HDF5File.up0p6_to_0p7(fname, debug=1)[source]¶
    Function that deals with changing HDF5 files created with file_version 0.6 so they can be read with 0.7. It modifies

spike.File.HDF5File.up0p7_to_0p8(fname, debug=1)[source]¶
    Function that deals with changing HDF5 files created with file_version 0.7 so they can be read with 0.8.
spike.File.Solarix module¶
Solarix.py
Utility to Handle Solarix files
Created by mac on 2013-05-24; updated May 2017 to python 3, added compress option.
Copyright (c) 2013 __NMRTEC__. All rights reserved.
spike.File.Solarix.Import_1D(inifolder, outfile='', compress=False)[source]¶
    Entry point to import 1D spectra. Returns an FTICRData and writes an HDF5 file if an outfile is given.

spike.File.Solarix.Import_2D(folder, outfile='', F1specwidth=None, compress=False)[source]¶
    Entry point to import 2D spectra. Returns an FTICRData and writes an HDF5 file if an outfile is given.
    Compression (compress=True) is efficient, but takes a lot of time.
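
A minimal usage sketch (the folder and output names are hypothetical):

    from spike.File.Solarix import Import_2D

    d = Import_2D('my_2D_experiment.d', outfile='my_2D.msh5', compress=True)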
spike.File.Solarix.get_param(param, names, values)[source]¶
    From params, returns the value of the given param.

spike.File.Solarix.locate_ExciteSweep(folder)[source]¶
    From the given folder, returns the absolute path to the ExciteSweep file. It should always be in a subfolder.

spike.File.Solarix.locate_acquisition(folder)[source]¶
    From the given folder, returns the absolute path to the apexAcquisition.method file. It should always be in a subfolder.
spike.File.Solarix.read_2D(sizeF1, sizeF2, filename='ser')[source]¶
    Reads in a Solarix 2D fid.
    sizeF1 is the number of fids, sizeF2 is the number of data points per fid. Uses array.

spike.File.Solarix.read_3D(sizeF1, sizeF2, sizeF3, filename='ser')[source]¶
    Draft function.
    Reads in an Apex 3D fid. Uses array.
spike.File.Solarix.read_ExciteSweep(filename)[source]¶
    Returns the lower and higher frequencies of the pulse generator.

spike.File.Solarix.read_param(parfilename)[source]¶
    Opens the given file and retrieves all parameters from apexAcquisition.method. NC is written when no value is found.
    structure: <param name="AMS_ActiveExclusion"><value>0</value></param>
    read_param returns values in a dictionary.
spike.File.Spinit module¶
Utility to Handle NMR Spinit files
spike.File.Spinit.Export_1D(d, filename='data.dat', template='header.xml', kind=None)[source]¶
    Exports a 1D NMRData as a Spinit dataset. kind: 1DFID, 1DSPEC.

spike.File.Spinit.Export_2D(d, filename='data.dat', template='header.xml', kind=None, debug=0)[source]¶
    Exports a 2D NMRData as a Spinit dataset.
spike.File.Spinit.add_data_representation(headertree, value)[source]¶
    Adds an entry to a headertree loaded with load_header(). value is a list of strings.

spike.File.Spinit.add_state(headertree, value)[source]¶
    Adds an entry to a headertree loaded with load_header(). value is a list of strings.

spike.File.Spinit.data_representation(val)[source]¶
    Produces the entry block for DATA_REPRESENTATION from the given val (a list).
spike.File.Spinit.ftF1_spinit(data, debug=0)[source]¶
    Spinit plugin for performing the FT along the F1 axis according to the kind of acquisition. Cases taken into account: TPPI, COMPLEX, PHASE_MODU, COMPLEX_TPPI, ECHO_ANTIECHO.

spike.File.Spinit.get_acquisition_mode(acqu)[source]¶
    Retrieves the acquisition mode list from the header file. Known values: REAL, COMPLEX, TPPI, COMPLEX_TPPI, PHASE_MODULATION, ECHO_ANTIECHO.

spike.File.Spinit.get_data_representation(acqu)[source]¶
    Retrieves the data representation list from the header file. Known values: REAL, COMPLEX.
spike.File.Spinit.load_header(filename='header.xml')[source]¶
    Loads a header.xml as an ET.tree and keeps it in memory.
    Returns headertree; the xml parameters are found in headertree.getroot().

spike.File.Spinit.modify_val(headertree, key, values)[source]¶
    Modifies an entry in a headertree loaded with load_header(). key should be present in headertree. values is either a single value or a list, depending on the key type, and lengths should match. key and values are strings.
    The headertree can then be written back to disk with:
        headertree.write('header.xml')
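
A minimal round-trip sketch (the key and value are illustrative and not guaranteed to exist in a given header.xml):

    from spike.File.Spinit import load_header, modify_val

    headertree = load_header('header.xml')
    modify_val(headertree, 'ACQUISITION_MODE', ['COMPLEX'])
    headertree.write('header.xml')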
spike.File.Spinit.read_1D(size, filename='data.dat', debug=0)[source]¶
    Reads in a Spinit 1D fid as a numpy float array.
    size is the number of data points in the fid. Uses struct; does not check endianness.

spike.File.Spinit.read_2D(sizeF1, sizeF2, filename='data.dat')[source]¶
    Reads the 2D files and returns a buffer.

spike.File.Spinit.read_param(filename='header.xml')[source]¶
    Loads a header.xml as a key:value dictionary.
spike.File.Thermo module¶
Utility to Handle Thermofisher files
Marc-André, from a first draft by Lionel
spike.File.Thermo.Import_1D(filename)[source]¶
    Entry point to import 1D spectra. Returns an Orbitrap data object.

class spike.File.Thermo.Thermo_Tests(methodName='runTest')[source]¶
    Bases: unittest.case.TestCase
    TO DO
spike.File.Thermo.read_data(F, typ='float')[source]¶
    Given F, an opened file, reads the values; read_param returns values in a dictionary.
spike.File.csv module¶
Utility to import and export data in text and csv files.
All functions compress transparently if the filename ends with .gz. Marc-André, adapted from some code by Lionel.
spike.File.csv.Import_1D(filename, column=0, delimiter=',')[source]¶
    Imports a 1D file stored as csv.
    Header lines are comments (#); parameters are in pseudo-comments:
        #$key value
    then one value per line. column and delimiter are as in load().
class spike.File.csv.csvTests(methodName='runTest')[source]¶
    Bases: unittest.case.TestCase
    Testing NPKData basic behaviour
spike.File.csv.load(filename, column=0, delimiter=',')[source]¶
    Loads 1D data from a txt or csv file. Attributes are in pseudo-comments starting with #$. Values are in columns, separated by delimiter - only the column given in the argument will be loaded: column=0 is fine for text files, column=1 is fine for csv files with currentunit. Returns a numpy buffer and an attribute dictionary.
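
A minimal usage sketch (the file name is hypothetical, and so is the attribute key, which simply mirrors whatever #$key lines the file contains):

    from spike.File.csv import load

    # for a file written as, e.g.:
    #   #$frequency 600.13
    #   0.0,1.234
    #   1.0,1.456
    buf, attributes = load('my_data.csv', column=1, delimiter=',')
    print(buf.shape, attributes['frequency'])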
spike.File.mzXML module¶
Module contents¶
File Utilities
Created by Marc-André on 2011-03-20. Copyright (c) 2011 IGBMC. All rights reserved.