...Cannot close file!

Hello,

Trying to scale up my refinement, I get the error message "cannot close file". I don't know where it comes from. I have plenty of space on my hard drive, and it worked fine in a test run with only 200 particles.
Any help welcome!

Amedee

Opening MRC/CCP4 file for READ...
File : /master.raid/home_master/frealign_v8.08_2/spiderinput/hr8_0.spi
NX, NY, NZ: 350 350 350
MODE : real
Min, max : -0.2297778E-02 0.2791442E-02
Mean, RMS : 0.2085688E-04 0.3075726E-03
TITLE 1: Created by IMAGIC: SPIDER image = bpr06.dat 02-09-20 11:47:19

***WARNING*** Circumference STD
significant compared with volume STD, could indicate
noisy reference or density gradient !!!

Mean density of 3D volume: 0.00002086
STD of 3D volume: 0.00030757
Mean density of circumference: -0.00000234
STD of circumference: 0.00017811

3D WEIGHTS FILE FOR OUTPUT?
3D RECONSTRUCTION HALFSET 1 FOR OUTPUT ?
3D RECONSTRUCTION HALFSET 2 FOR OUTPUT ?
3D PHASE RESIDUAL FILE FOR OUTPUT ?
3D POINT SPREAD FUNCTION FOR OUTPUT ?
Cannot close file ...

It looks like you are trying to use the MRC/CCP4 image format (option M) but your data is in Spider format (.spi extension). Try changing the format option to S, or convert your data to MRC/CCP4 format.

In reply to by niko

The image was originally created in Spider, but I converted it with em2em.
I'm fairly certain that everything is MRC, and it was working when using just the first 200 images on one node.
The problem appeared when I scaled up to 20 nodes and all my particles.

In reply to by adesgeorges

In that case the error might be due to the use of multiple nodes. If several instances of Frealign are running, you need to make sure they do not all try to write to the same output file. The way to do this is to assign different output file names for the different jobs. Your error might mean that you use the same file names for the WEIGHTS file and the other files that are opened at the end of the Frealign run.
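The advice above can be sketched as a small driver loop (file names here are hypothetical examples, not Frealign's actual defaults; adapt them to your own submission script):

```shell
#!/bin/sh
# Sketch: give each parallel Frealign job its own output file names,
# so no two jobs ever try to write to the same weights/map/phase files.
for job in 1 2 3; do
  weights="weights_${job}.mrc"      # 3D weights file
  map1="map1_${job}.mrc"            # reconstruction halfset 1
  map2="map2_${job}.mrc"            # reconstruction halfset 2
  phase="phasediffs_${job}.mrc"     # 3D phase residual file
  pointspread="pspec_${job}.mrc"    # 3D point spread function
  echo "job $job -> $weights $map1 $map2 $phase $pointspread"
done
```

Each job then receives its own set of names, so the files opened at the end of the run never collide.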

Hello,

It is somewhat comforting to find someone with the same problem.

I wrote on this forum some months ago about my "cannot close file" problem.
It seems that you have the same problem as me. So far, I have not found the solution.
Perhaps we could compare our configurations to try to resolve it.
Niko suggested a solution, but it did not work for me.

I use a Torque cluster with the pbs/maui scheduler.
I have no problem with the "pdh" example, which uses 200 particles.
With my own files of more than 5000 particles, I tried different values for increment and last_particle, with first_particle=1 (see the three trials below).
The problem seems linked to the size of the particle file, but 50 processes with 10 particles each is not a big system, and it still does not work.

A: 10 processes of 50 particles
last_particle 500
increment 50
10 processes start and only 4 continue; the other 6 stop with "cannot close file" in the log file.

B: 25 processes of 20 particles
last_particle 500
increment 20
25 processes start and only 10 continue; the other 15 stop with "cannot close file" in the log file.

C: 50 processes of 10 particles
last_particle 500
increment 10
50 processes start and only 20 continue; the others stop with "cannot close file" in the log file.

We have no explanation for the "cannot close file" error: multiple access, access rights, an already existing file?
If you have found a solution, I would be very grateful for an explanation.
Have you tried other parameters or file formats?

Thanks

E. Richard

In reply to by erichard

We have not had this problem, so I can only speculate what the reason for the error message might be: When we run multiple Frealign jobs, we have never had a problem with files being read by multiple jobs. However, if the parameters are not set correctly, multiple Frealign jobs might also try to WRITE to files, namely the 3D reconstruction file and the other output files (weights, map1, map2, point spread function). Therefore, you must make sure you run the Frealign jobs without generating output files (except the alignment parameter files). To avoid generating output files, the last CARD 6 input must be

-100., 0., 0., 0., 0., 0., 0., 0.

(see the mrefine_n.com script in the examples/multiprocessor_refinement directory). The multiple Frealign jobs will calculate new alignment parameters which will then be used by another run of Frealign to calculate a reconstruction. This final single job will generate the 3D reconstruction and other output files. CARD 6 for the reconstruction job is

0., 0., 0., 0., 0., 0., 0., 0.

(see the mreconstruct.com script in the examples/multiprocessor_refinement directory).
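The distinction between the two runs can be sketched as script variables (a sketch only; the real scripts embed these lines in the full Frealign input deck, so see mrefine_n.com and mreconstruct.com for the actual layout):

```shell
#!/bin/sh
# Sketch: the only CARD 6 difference between the parallel refinement
# jobs and the final single reconstruction job, per the post above.
# The surrounding input cards are omitted.

# Parallel refinement jobs: -100. in the first field suppresses the 3D
# output files, so each job writes only its alignment parameter file.
card6_refine="-100., 0., 0., 0., 0., 0., 0., 0."

# Single final reconstruction job: 0. in the first field, so the 3D
# reconstruction, weights, maps and point spread function are written.
card6_reconstruct="0., 0., 0., 0., 0., 0., 0., 0."

echo "refine:      $card6_refine"
echo "reconstruct: $card6_reconstruct"
```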

Hope this helps.

In reply to by niko

I did not change the CARD 6 parameters; they are already the correct ones.

I would like to contact "adesgeorges" directly to try to resolve our similar problem.
It is possible that he found a solution without consulting this forum. Could you forward him my email address, or give me his, since it is not possible to send mail directly through this forum?

Thanks for your help

In reply to by erichard

If you have had success in sorting out the problem with adesgeorges please post the solution here when you have a moment.

If you have not been able to sort out this problem, please try to use the MRC file format (use em2em to convert file formats). It is possible that the I/O routines for the Spider and IMAGIC formats have problems, since these formats are not used regularly in our lab.

In reply to by niko

The multi-processor Frealign needed some adjustments to run on our cluster.
The original scripts used only the SGE scheduler and were written for a single workstation with several processors.
On our cluster, composed of several workstations and managed by the Rocks system, jobs launched on the nodes could not find the "scratch" folder. The qsub command options are also different.
I changed the scripts to account for these differences.
To adapt the program to our data and launch the first run, I changed only the parameter file: I replaced the names, the first and last particles, and the increment, then launched the scripts. I thought that would be enough because the "pdh" example worked fine.

The "cannot close file" error that occurred with our data never suggested to me a problem with the script files; it suggested access rights or a wrong file format.

The problem finally came from the NIN parameter in msearch_n.sh, which was not equal to the total number of particles set in the parameter file.
This NIN parameter should be set directly from the parameter file, or a comment should indicate that NIN must be set, to spare new users this error.
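The suggested fix can be sketched in the shell (a sketch only: the parameter file layout here is a hypothetical example, assuming one data line per particle and comment lines starting with "C", as in typical Frealign .par files):

```shell
#!/bin/sh
# Sketch: derive NIN from the parameter file instead of hard-coding it
# in msearch_n.sh, so it always matches the total particle count.

# Build a tiny demo parameter file: one comment line, three particles.
parfile="particles.par"
printf 'C header\n1 ...\n2 ...\n3 ...\n' > "$parfile"

# NIN = number of non-comment lines = total number of particles.
NIN=$(grep -cv '^C' "$parfile")
echo "NIN=$NIN"
```

With this, adding or removing particles from the parameter file can no longer leave NIN out of sync.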

It works now perfectly.

Thanks for your help

Hi everyone,

I just had the same problem suddenly appearing. I wanted to switch from IMAGIC to FREALIGN and did an IFLAG 1 run. Everything worked fine so far, except that the reconstruction was not what should be expected. So I reconstructed part of my dataset using IFLAG 0 to test whether the IMAGIC-to-FREALIGN conversion had worked. When I saw that it did, I went back to IFLAG 1 to try different refinement settings, and now I got the file closing error. Elsewhere in the forum, Niko writes that it can be a problem with the stack, so I recreated mine and everything works fine now. So, if you are sure that your parallelization scripts are fine, look at your stack.

Best regards,

Michael

In reply to by msaur

Hi everyone....again,

I have just been careless enough to set ILAST to a higher number of particles than were actually in my last parameter file (ILAST was set equal to the number of particles in my stack, though). For some of my log files I got the "Cannot close file" error again. Just to enrich the forum with another situation where you might encounter this error.

Cheers,

Michael