

4.29 Nonblockingly Read a List of Subarrays: ncmpi_iget_varn_<type>

The ncmpi_iget_varn_<type> family of functions is the nonblocking version of ncmpi_get_varn_<type> family. The functions read, in a nonblocking fashion, a list of subarrays of a netCDF variable in an opened netCDF file.

A nonblocking get call indicates that PnetCDF may start using (and possibly altering the contents of) the get buffer. The caller must not access any part of the get buffer after a nonblocking get is posted, until the get completes.

The part of the netCDF variable to read is specified by giving a list of subarrays. Each subarray is specified by a corner and a vector of edge lengths that refer to an array section of the netCDF variable. For each subarray, the values to be read are associated with the netCDF variable by assuming that the last dimension of the netCDF variable varies fastest in the C interface.

This API essentially has the same effect as making multiple calls to ncmpi_get_vara_<type> with the same variable ID.

Data types

<type> for API names    <C type> for API arguments
text                    char
schar                   signed char
short                   short
int                     int
float                   float
double                  double
uchar                   unsigned char
ushort                  unsigned short
uint                    unsigned int
longlong                long long
ulonglong               unsigned long long

Operational Mode

Prior to version 1.7.0, nonblocking APIs could only be called while the netCDF file is in data mode. Starting from version 1.7.0, nonblocking APIs can be called in either define or data mode. This API family can be called in either collective or independent data mode.

Usage

int ncmpi_iget_varn_<type>(int                ncid,
                           int                varid,
                           int                num,
                           MPI_Offset* const  starts[],
                           MPI_Offset* const  counts[],
                           <C type>          *buf,
                           int               *request);

int ncmpi_iget_varn       (int                ncid,
                           int                varid,
                           int                num,
                           MPI_Offset* const  starts[],
                           MPI_Offset* const  counts[],
                           void              *buf,
                           MPI_Offset         bufcount,
                           MPI_Datatype       buftype,
                           int               *request);
ncid

NetCDF ID, from a previous call to ncmpi_open or ncmpi_create.

varid

Variable ID. Different MPI processes may use different variable IDs.

num

Number of subarray requests.

starts

A double pointer that mimics a 2D array of size [num][ndims]. See below for an example of how to allocate space for and construct such a 2D array. Each starts[i] is a vector specifying the index in the variable where the first of the data values will be read. The indices are relative to 0, so, for example, the first data value of a variable would have index (0, 0, ... , 0). The size of starts must be [num][ndims], where ndims is the number of dimensions of the specified variable. The elements of each starts[i] must correspond to the variable's dimensions in order. Hence, if the variable is a record variable, the first index of each starts[i] would correspond to the starting record number for reading the data values.

counts

A double pointer that mimics a 2D array of size [num][ndims]. See below for an example of how to allocate space for and construct such a 2D array. Each counts[i] is a vector specifying the edge lengths along each dimension of the block of data values to be read. To read a single value, for example, specify counts[i] as (1, 1, ... , 1). The size of counts must be [num][ndims], where ndims is the number of dimensions of the specified variable. The elements of each counts[i] correspond to the variable's dimensions in order. Hence, if the variable is a record variable, the first element of each counts[i] corresponds to a count of the number of records to read. This argument can be NULL, in which case it is equivalent to providing counts[*][*] = (1, 1, ... , 1).

buf

A pointer to the buffer in memory that will hold the data values read from the opened file. If the type of the data values differs from the netCDF variable type, type conversion will occur.

bufcount

An integer that indicates the number of MPI derived data type elements to be read from the file and stored in the buffer pointed to by buf.

buftype

An MPI derived data type that describes the memory layout of buf. Starting from PnetCDF version 1.6.0, buftype can be MPI_DATATYPE_NULL. In this case, bufcount is ignored and the data type of buf must match the type of the variable defined in the file; no data conversion will be performed.

Return Error Codes

ncmpi_iget_varn_<type> returns the value NC_NOERR if no errors occurred. Otherwise, the returned status indicates an error. Possible causes of errors include an invalid ncid or variable ID, start or count values that exceed the variable's dimension sizes, and an invalid data type conversion.

Example

This example shows how to use a single call to ncmpi_iget_varn_float() to post a sequence of read requests with arbitrary array indices.

#include <pnetcdf.h>
    ...
#define MAX_NUM_REQS 6
#define NDIMS        2

int  i, j;
int  rank;                         /* MPI process rank ID */
int  err;                          /* error status */
int  ncid;                         /* netCDF ID */
int  varid;                        /* variable ID */
int  num_reqs;                     /* number of subarray requests */
int  dimid[2];                     /* dimension IDs */
int  request[1];                   /* nonblocking request ID */
int  st[1];                        /* nonblocking request status */
int  buf_len;                      /* number of elements in buffer */
float *buffer;                     /* read buffer to hold values */
MPI_Offset **starts, **counts;
    ...
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

err = ncmpi_open(MPI_COMM_WORLD, "foo.nc", NC_NOWRITE, MPI_INFO_NULL, &ncid);
if (err != NC_NOERR) handle_error(err);

/* allocate starts and counts */
starts    = (MPI_Offset**) malloc(MAX_NUM_REQS*       sizeof(MPI_Offset*));
starts[0] = (MPI_Offset*)  calloc(MAX_NUM_REQS*NDIMS, sizeof(MPI_Offset));
for (i=1; i<MAX_NUM_REQS; i++)
    starts[i] = starts[i-1] + NDIMS;

counts    = (MPI_Offset**) malloc(MAX_NUM_REQS*       sizeof(MPI_Offset*));
counts[0] = (MPI_Offset*)  calloc(MAX_NUM_REQS*NDIMS, sizeof(MPI_Offset));
for (i=1; i<MAX_NUM_REQS; i++)
    counts[i] = counts[i-1] + NDIMS;

if (rank == 0) {
    num_reqs = 4;
    starts[0][0] = 0; starts[0][1] = 5; counts[0][0] = 1; counts[0][1] = 2;
    starts[1][0] = 1; starts[1][1] = 0; counts[1][0] = 1; counts[1][1] = 1;
    starts[2][0] = 2; starts[2][1] = 6; counts[2][0] = 1; counts[2][1] = 2;
    starts[3][0] = 3; starts[3][1] = 0; counts[3][0] = 1; counts[3][1] = 3;
    /* rank 0 is reading the following locations: ("-" means skip)
              -  -  -  -  -  0  0  -  -  - 
              0  -  -  -  -  -  -  -  -  - 
              -  -  -  -  -  -  0  0  -  - 
              0  0  0  -  -  -  -  -  -  - 
     */
} else if (rank == 1) {
    num_reqs = 6;
    starts[0][0] = 0; starts[0][1] = 3; counts[0][0] = 1; counts[0][1] = 2;
    starts[1][0] = 0; starts[1][1] = 8; counts[1][0] = 1; counts[1][1] = 2;
    starts[2][0] = 1; starts[2][1] = 5; counts[2][0] = 1; counts[2][1] = 2;
    starts[3][0] = 2; starts[3][1] = 0; counts[3][0] = 1; counts[3][1] = 2;
    starts[4][0] = 2; starts[4][1] = 8; counts[4][0] = 1; counts[4][1] = 2;
    starts[5][0] = 3; starts[5][1] = 4; counts[5][0] = 1; counts[5][1] = 3;
    /* rank 1 is reading the following locations: ("-" means skip)
              -  -  -  1  1  -  -  -  1  1 
              -  -  -  -  -  1  1  -  -  - 
              1  1  -  -  -  -  -  -  1  1 
              -  -  -  -  1  1  1  -  -  - 
    */
} else if (rank == 2) {
    num_reqs = 5;
    starts[0][0] = 0; starts[0][1] = 7; counts[0][0] = 1; counts[0][1] = 1;
    starts[1][0] = 1; starts[1][1] = 1; counts[1][0] = 1; counts[1][1] = 3;
    starts[2][0] = 1; starts[2][1] = 7; counts[2][0] = 1; counts[2][1] = 3;
    starts[3][0] = 2; starts[3][1] = 2; counts[3][0] = 1; counts[3][1] = 1;
    starts[4][0] = 3; starts[4][1] = 3; counts[4][0] = 1; counts[4][1] = 1;
    /* rank 2 is reading the following locations: ("-" means skip)
              -  -  -  -  -  -  -  2  -  - 
              -  2  2  2  -  -  -  2  2  2 
              -  -  2  -  -  -  -  -  -  - 
              -  -  -  2  -  -  -  -  -  - 
     */
} else if (rank == 3) {
    num_reqs = 4;
    starts[0][0] = 0; starts[0][1] = 0; counts[0][0] = 1; counts[0][1] = 3;
    starts[1][0] = 1; starts[1][1] = 4; counts[1][0] = 1; counts[1][1] = 1;
    starts[2][0] = 2; starts[2][1] = 3; counts[2][0] = 1; counts[2][1] = 3;
    starts[3][0] = 3; starts[3][1] = 7; counts[3][0] = 1; counts[3][1] = 3;
    /* rank 3 is reading the following locations: ("-" means skip)
              3  3  3  -  -  -  -  -  -  - 
              -  -  -  -  3  -  -  -  -  - 
              -  -  -  3  3  3  -  -  -  - 
              -  -  -  -  -  -  -  3  3  3 
     */
} else {
    num_reqs = 0;
}

/* allocate read buffer */
buf_len = 0;
for (i=0; i<num_reqs; i++) {
    MPI_Offset r_req_len=1;
    for (j=0; j<NDIMS; j++)
        r_req_len *= counts[i][j];
    buf_len += r_req_len;
}
buffer = (float*) malloc(buf_len * sizeof(float));

/* post a nonblocking read of the netCDF variable */
err = ncmpi_iget_varn_float(ncid, varid, num_reqs, starts, counts, buffer, &request[0]);
if (err != NC_NOERR) handle_error(err);
    ...
/* wait for the nonblocking operation to complete */
err = ncmpi_wait_all(ncid, 1, request, st);
if (err != NC_NOERR) handle_error(err);

err = ncmpi_close(ncid);
if (err != NC_NOERR) handle_error(err);

/*    % ncmpidump foo.nc
 *    netcdf foo {
 *    dimensions:
 *             Y = 4 ;
 *             X = 10 ;
 *    variables:
 *             float var(Y, X) ;
 *    data:
 *
 *     var =
 *       3, 3, 3, 1, 1, 0, 0, 2, 1, 1,
 *       0, 2, 2, 2, 3, 1, 1, 2, 2, 2,
 *       1, 1, 2, 3, 3, 3, 0, 0, 1, 1,
 *       0, 0, 0, 2, 1, 1, 1, 3, 3, 3 ;
 *    }
 */
