
Sets metadata I/O mode for read operations to collective or independent (default)


H5P_SET_ALL_COLL_METADATA_OPS ( accpl_id, is_collective )


herr_t H5Pset_all_coll_metadata_ops(
              hid_t accpl_id,
              hbool_t is_collective
          )

Fortran Interface: h5pset_all_coll_metadata_ops_f

  SUBROUTINE h5pset_all_coll_metadata_ops_f(plist_id, is_collective, hdferr)
    INTEGER(HID_T), INTENT(IN)  :: plist_id
    LOGICAL       , INTENT(IN)  :: is_collective
    INTEGER       , INTENT(OUT) :: hdferr
  END SUBROUTINE h5pset_all_coll_metadata_ops_f

  plist_id       - File access property list identifier.
  is_collective  - Indicates whether metadata reads are collective or independent.

  hdferr         - Returns 0 if successful and -1 if it fails.

hid_t accpl_id        IN: File, group, dataset, datatype, link, or attribute access property list identifier
hbool_t is_collective IN: Boolean value indicating whether metadata reads are collective (TRUE) or independent (FALSE)
                          Default mode: independent (FALSE)


H5Pset_all_coll_metadata_ops sets the metadata I/O mode for read operations in the access property list accpl_id.

When engaging in parallel I/O, all metadata write operations must be collective. If is_collective is TRUE, this property specifies that the HDF5 Library will perform all metadata read operations collectively; if is_collective is FALSE, such operations may be performed independently.

Users must be aware that several HDF5 operations can potentially issue metadata reads. These include opening a dataset, datatype, or group; reading an attribute; or issuing a get info call such as getting information for a group with H5Gget_info. Collective I/O requirements must be kept in mind when issuing such calls in the context of parallel I/O.

If this property is set to TRUE on a file access property list that is used in creating or opening a file, then the HDF5 Library will assume that all metadata read operations issued on that file identifier will be issued collectively from all ranks, irrespective of the individual setting of a particular operation. If this assumption is not adhered to, corruption will be introduced in the metadata cache and HDF5's behavior will be undefined.

Alternatively, a user may wish to avoid setting this property globally on the file access property list, and individually set it on particular object access property lists (dataset, group, link, datatype, attribute access property lists) for certain operations. This will indicate that only the operations issued with such an access property list will be called collectively and other operations may potentially be called independently. There are, however, several HDF5 operations that can issue metadata reads but have no property list in their function signatures to allow passing the collective requirement property. For those operations, the only option is to set the global collective requirement property on the file access property list; otherwise the metadata reads that can be triggered from those operations will be done independently by each process.

Functions that do not accommodate an access property list but that might issue metadata reads are listed in “Functions with No Access Property List Parameter that May Generate Metadata Reads.”


As noted above, corruption will be introduced into the metadata cache and HDF5 Library behavior will be undefined when both of the following conditions exist:

  • A file is created or opened with a file access property list in which the collective metadata I/O property is set to TRUE.
  • Any function is called that triggers an independent metadata read while the file remains open with that file access property list.

An approach that avoids this corruption risk is described above.


Returns a non-negative value if successful; otherwise returns a negative value.


Release    Change
1.10.0     C function and Fortran wrapper introduced with this release.

--- Last Modified: November 20, 2017 | 02:51 PM