datavizhub.processing package¶
- class datavizhub.processing.DataProcessor[source]¶
Bases:
ABC
Abstract base for data processors in the DataVizHub pipeline.
This base class defines the minimal contract that processing components must fulfill when transforming raw inputs (typically acquired by datavizhub.acquisition) into outputs consumable by downstream visualization and publishing steps.
Processors standardize three phases:
- load(input_source): prepare or ingest the input source
- process(**kwargs): perform the core processing/transformation
- save(output_path=None): persist results if applicable
An optional validate() hook is provided for input checking.
- Parameters:
... – Concrete processors define their own initialization parameters according to their domain (e.g., input directories, output paths, catalog URLs). There is no common constructor on the base class.
Examples
Use a processor in a pipeline with an acquirer:
from datavizhub.acquisition.ftp_manager import FTPManager
from datavizhub.processing.video_processor import VideoProcessor

# Acquire frames
with FTPManager(host="ftp.example.com") as ftp:
    ftp.fetch("/pub/frames/img_0001.png", "./frames/img_0001.png")
    # ...fetch remaining frames...

# Process frames into a video
vp = VideoProcessor(input_directory="./frames", output_file="./out/movie.mp4")
vp.load("./frames")
vp.process()
vp.save("./out/movie.mp4")
- property features: set[str]¶
Set of feature strings supported by this processor.
- Returns:
The class-level FEATURES set if defined, else an empty set.
- Return type:
set of str
- abstract load(input_source: Any) None [source]¶
Load or prepare the input source for processing.
- Parameters:
input_source (Any) – Location or handle of input data (e.g., directory path, file path, URL, opened handle). Interpretation is processor-specific.
- abstract process(**kwargs: Any) Any [source]¶
Execute core processing and return results if applicable.
- Parameters:
**kwargs (Any) – Processor-specific options that influence the run (e.g., flags, thresholds, file paths).
- Returns:
A result object if the processor yields one (e.g., arrays), otherwise None.
- Return type:
Any
- abstract save(output_path: str | None = None) str | None [source]¶
Persist results and return the output path.
- Parameters:
output_path (str, optional) – Destination path to write results. If omitted, implementations may use their configured default.
- Returns:
The final output path on success; None if nothing was written.
- Return type:
str or None
- class datavizhub.processing.DecodedGRIB(backend: str, dataset: Any | None = None, messages: List[Any] | None = None, path: str | None = None, meta: Dict[str, Any] | None = None)[source]¶
Bases:
object
Container for decoded GRIB2 content.
- backend¶
The backend used to decode the data: “cfgrib”, “pygrib” or “wgrib2”.
- Type:
str
- dataset¶
An xarray.Dataset when using the cfgrib backend, if available.
- Type:
Any, optional
- messages¶
A list of pygrib messages when using the pygrib backend.
- Type:
list, optional
- path¶
Path to a temporary GRIB2 file on disk; some backends retain access to the data through this file.
- Type:
str, optional
- meta¶
Optional metadata extracted by a CLI tool such as wgrib2.
- Type:
dict, optional
- backend: str¶
- dataset: Any | None = None¶
- messages: List[Any] | None = None¶
- meta: Dict[str, Any] | None = None¶
- path: str | None = None¶
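As a rough sketch (assuming a local GRIB2 file decoded with grib_decode from this package; the path is a placeholder), the fields of a DecodedGRIB can be inspected like this:
from datavizhub.processing import grib_decode

# Decode raw GRIB2 bytes from a hypothetical local file
with open("/path/to/file.grib2", "rb") as fh:
    decoded = grib_decode(fh.read(), backend="cfgrib")

print(decoded.backend)                      # "cfgrib"
if decoded.dataset is not None:
    print(list(decoded.dataset.data_vars))  # variable names via the xarray dataset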
- class datavizhub.processing.GRIBDataProcessor(catalog_url: str | None = None)[source]¶
Bases:
DataProcessor
Process GRIB data and THREDDS catalogs.
Reads GRIB files into NumPy arrays and provides utilities for working with THREDDS data catalogs. Supports simple longitude shifts for global grids and helper functions for stacking 2D slices over time.
- Parameters:
catalog_url (str, optional) – Optional THREDDS catalog URL for dataset listing.
Examples
Read a GRIB file into arrays:
from datavizhub.processing.grib_data_processor import GRIBDataProcessor

gp = GRIBDataProcessor()
data, dates = gp.process(grib_file_path="/path/to/file.grib2", shift_180=True)
- FEATURES = {'load', 'process', 'save', 'validate'}¶
- static combine_into_3d_array(directory: str, file_pattern: str)[source]¶
Combine 2D grids from multiple files into a 3D array.
- Parameters:
directory (str) – Root directory to search.
file_pattern (str) – Glob pattern to match files (used with
Path.rglob
).
- Returns:
A 3D array stacking the first returned field across files.
- Return type:
numpy.ndarray
- list_datasets() None [source]¶
List datasets from a THREDDS data server catalog configured by catalog_url.
- load(input_source: Any) None [source]¶
Load or set a THREDDS catalog URL.
- Parameters:
input_source (Any) – Catalog URL string (converted with str()).
- static load_data_from_file(file_path: str, short_name: str, shift_180: bool = False)[source]¶
Load one field by short_name, returning data and coordinates.
- Parameters:
file_path (str) – Path to the GRIB file.
short_name (str) – GRIB message shortName to select (e.g., "tc_mdens").
shift_180 (bool, default=False) – If True, shift longitudes and roll the data accordingly.
- Returns:
(data, lats, lons) arrays when found; otherwise (None, None, None).
- Return type:
tuple
- process(**kwargs: Any)[source]¶
Process a GRIB file into arrays and timestamps.
- Parameters:
grib_file_path (str, optional) – Path to a GRIB file to parse.
shift_180 (bool, default=False) – If True, roll the longitudes by 180 degrees for global alignment.
- Returns:
(data_list, dates) when grib_file_path is provided, where data_list is a list of 2D arrays; otherwise None.
- Return type:
tuple or None
- static process_grib_files_wgrib2(grib_dir: str, command: list[str], output_file: str) None [source]¶
Invoke an external wgrib2 command for each file in a directory.
- Parameters:
grib_dir (str) – Directory containing input GRIB files.
command (list of str) – Command and arguments to execute (wgrib2 invocation).
output_file (str) – Path to write outputs to, according to the command.
- static read_grib_file(file_path: str) None [source]¶
Print basic metadata for each GRIB message.
- Parameters:
file_path (str) – Path to a GRIB file to inspect.
- static read_grib_to_numpy(grib_file_path: str, shift_180: bool = False) Tuple[list | None, list | None] [source]¶
Convert a GRIB file into a list of 2D arrays and dates.
- Parameters:
grib_file_path (str) – Path to a GRIB file to read.
shift_180 (bool, default=False) – If True, roll the longitudes by half the width for -180/180 alignment.
- Returns:
(data_list, dates)
on success;(None, None)
on error.- Return type:
tuple of list or (None, None)
- save(output_path: str | None = None) str | None [source]¶
Save processed data to a NumPy .npy file if available.
- Parameters:
output_path (str, optional) – Destination filename to write array data to.
- Returns:
The path written to, or None if there is no data.
- Return type:
str or None
- static shift_data_180(data: numpy.ndarray, lons: numpy.ndarray) numpy.ndarray [source]¶
Shift global longitude grid by 180 degrees.
- Parameters:
data (numpy.ndarray) – 2D array of gridded values with longitudes varying along axis=1.
lons (numpy.ndarray) – 2D array of longitudes corresponding to data (unused for the basic roll operation but included for interface symmetry).
- Returns:
Rolled data such that a 0–360 grid is re-aligned to the -180..180 convention.
- Return type:
numpy.ndarray
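A minimal sketch of the kind of half-width roll this helper performs (an illustrative assumption; the documented behavior is the same re-centering):
import numpy as np

# Hypothetical 0-360 global grid: 181 latitudes x 360 longitudes
data = np.random.rand(181, 360)

# Roll by half the longitude width so the grid reads as -180..180
shifted = np.roll(data, shift=data.shape[1] // 2, axis=1)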
- exception datavizhub.processing.VariableNotFoundError[source]¶
Bases:
KeyError
Raised when a requested GRIB variable cannot be found.
- class datavizhub.processing.VideoProcessor(input_directory: str, output_file: str, basemap: str | None = None, fps: int = 30)[source]¶
Bases:
DataProcessor
Create videos from image sequences via FFmpeg.
Processes image frames into a cohesive video file. Optionally overlays a static basemap beneath the frames. Requires system FFmpeg and FFprobe to be installed and accessible on PATH.
- Parameters:
input_directory (str) – Directory where input images are stored.
output_file (str) – Destination path for the rendered video file.
basemap (str, optional) – Optional background image path to overlay beneath frames.
Examples
Render a video from PNG frames:
from datavizhub.processing.video_processor import VideoProcessor

vp = VideoProcessor(input_directory="./frames", output_file="./out.mp4")
vp.load("./frames")
vp.process()
vp.save("./out.mp4")
- FEATURES = {'load', 'process', 'save', 'validate'}¶
- check_ffmpeg_installed() bool [source]¶
Check that FFmpeg and FFprobe are available on PATH.
- Returns:
True when both tools return version info without error.
- Return type:
bool
- load(input_source: Any) None [source]¶
Set or update the input directory.
- Parameters:
input_source (Any) – Path to the directory containing input frames. Converted to str.
- process(**kwargs: Any) str | None [source]¶
Compile image frames into a video.
- Returns:
The output file path on success; None if processing failed.
- Return type:
str or None
- process_video(*, fps: int | None = None) bool [source]¶
Build the video using FFmpeg from frames in input_directory.
Notes
Uses glob pattern matching on the first file’s extension to include all frames with that extension in the directory.
When basemap is provided, overlays frames on top of the basemap.
Logs FFmpeg output lines at debug level.
- save(output_path: str | None = None) str | None [source]¶
Finalize the configured output path.
- Parameters:
output_path (str, optional) – If provided, updates the configured output path before returning it.
- Returns:
The output path the processor will write or has written to.
- Return type:
str or None
- validate() bool [source]¶
Check FFmpeg/FFprobe availability.
- Returns:
True if FFmpeg and FFprobe executables are available.
- Return type:
bool
- validate_frame_count(video_file: str, expected_frame_count: int) bool [source]¶
Validate the total number of frames in a video.
- Parameters:
video_file (str) – Path to the video file to inspect.
expected_frame_count (int) – Expected total number of frames.
- Returns:
True when expected frame count matches the probed value.
- Return type:
bool
- datavizhub.processing.convert_to_format(decoded: DecodedGRIB, format_type: str, var: str | None = None) Any [source]¶
Convert decoded GRIB to a requested format.
Supported formats:
- “dataframe”: Pandas DataFrame (requires xarray backend)
- “xarray”: xarray.Dataset or DataArray
- “netcdf”: NetCDF bytes (xarray+netcdf4 or wgrib2 fallback)
- “geotiff”: Bytes of a GeoTIFF (requires rioxarray or rasterio). Only a single variable is supported; specify var if the dataset has multiple variables.
- Parameters:
decoded (DecodedGRIB) – Object returned by grib_decode.
format_type (str) – Target format: one of {“dataframe”, “xarray”, “netcdf”, “geotiff”}.
var (str, optional) – When multiple variables are present, choose one by exact or regex pattern. If omitted for xarray backend, returns the dataset as-is.
- Returns:
Converted object or bytes depending on format_type.
- Return type:
Any
- Raises:
ValueError – If an unsupported format is requested or required dependencies are missing.
RuntimeError – If conversion fails.
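For instance, a decoded result can be converted to the documented targets (a sketch; the path and the variable name "t2m" are placeholders):
from datavizhub.processing import grib_decode, convert_to_format

with open("/path/to/file.grib2", "rb") as fh:
    decoded = grib_decode(fh.read(), backend="cfgrib")

df = convert_to_format(decoded, "dataframe")                  # pandas DataFrame
nc_bytes = convert_to_format(decoded, "netcdf")               # NetCDF content as bytes
tif_bytes = convert_to_format(decoded, "geotiff", var="t2m")  # single-variable GeoTIFF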
- datavizhub.processing.convert_to_grib2(dataset: Any) bytes [source]¶
Convert a NetCDF dataset to GRIB2 via external tooling.
Note: wgrib2 does not support generic NetCDF→GRIB2 conversion. Common practice is to use CDO (Climate Data Operators) with something like cdo -f grb2 copy in.nc out.grb2. This function will attempt to use CDO if available; otherwise it raises a clear error and asks the caller to specify the desired tool.
- Parameters:
dataset (xarray.Dataset) – Dataset to convert.
- Returns:
Raw GRIB2 file content.
- Return type:
bytes
- Raises:
RuntimeError – If no supported CLI is available or the conversion fails.
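A brief usage sketch (assuming CDO is installed; the file paths are placeholders):
import xarray as xr
from datavizhub.processing import convert_to_grib2

ds = xr.open_dataset("/path/to/file.nc")
try:
    grib2_bytes = convert_to_grib2(ds)
    with open("out.grb2", "wb") as fh:
        fh.write(grib2_bytes)
except RuntimeError as exc:
    print(f"Conversion unavailable: {exc}")  # e.g. CDO not found on PATH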
- datavizhub.processing.extract_metadata(decoded: DecodedGRIB) Dict[str, Any] [source]¶
Extract common metadata from a decoded GRIB subset.
Returned keys include:
- model_run: string or datetime-like when available
- forecast_hour: integer forecast step when available
- variables: list of variable names
- grid: projection/grid information when available
- bbox: (min_lon, min_lat, max_lon, max_lat) if coordinates present
- Parameters:
decoded (DecodedGRIB) – Decoded GRIB container.
- Returns:
Metadata dictionary. Missing fields may be absent or set to None.
- Return type:
dict
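A short sketch of reading the documented keys (which keys are populated depends on the backend and dataset; the path is a placeholder):
from datavizhub.processing import grib_decode, extract_metadata

with open("/path/to/file.grib2", "rb") as fh:
    decoded = grib_decode(fh.read())

meta = extract_metadata(decoded)
print(meta.get("model_run"), meta.get("forecast_hour"))
print(meta.get("variables"))
print(meta.get("bbox"))  # (min_lon, min_lat, max_lon, max_lat) when coordinates are present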
- datavizhub.processing.extract_variable(decoded: DecodedGRIB, var_name: str) Any [source]¶
Extract a single variable by exact name or regex.
- Parameters:
decoded (DecodedGRIB) – The decoded GRIB content returned by grib_decode.
var_name (str) – Either an exact name or a Python regex pattern. For the cfgrib backend, matches dataset.data_vars. For pygrib, matches shortName or full name.
- Returns:
Backend-specific object: an xarray.DataArray (cfgrib) or a list of pygrib messages matching the pattern.
- Return type:
Any
- Raises:
VariableNotFoundError – If no variable matches the selection.
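For example, selecting a field by exact name or by regex (variable names and the path are placeholders):
from datavizhub.processing import VariableNotFoundError, extract_variable, grib_decode

with open("/path/to/file.grib2", "rb") as fh:
    decoded = grib_decode(fh.read(), backend="cfgrib")

try:
    t2m = extract_variable(decoded, "t2m")      # exact name
    temps = extract_variable(decoded, r"^t.*")  # regex pattern
except VariableNotFoundError:
    print("No matching variable in this file")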
- datavizhub.processing.grib_decode(data: bytes, backend: str = 'cfgrib') DecodedGRIB [source]¶
Decode GRIB2 bytes into Python structures.
Prefers xarray+cfgrib when available, with fallbacks to pygrib and the wgrib2 CLI for difficult edge-cases.
- Parameters:
data (bytes) – Raw GRIB2 file content (possibly subsetted by byte ranges).
backend (str, default "cfgrib") – One of: “cfgrib”, “pygrib”, or “wgrib2”.
- Returns:
A container describing what was decoded and how to access it.
- Return type:
DecodedGRIB
- Raises:
ValueError – If an unsupported backend is requested.
RuntimeError – If decoding fails for the chosen backend.
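A minimal decode sketch (byte-range subsetting, when used, would normally come from an acquisition step; here an entire local file is read and the path is a placeholder):
from datavizhub.processing import grib_decode

with open("/path/to/file.grib2", "rb") as fh:
    raw = fh.read()

decoded = grib_decode(raw)                     # defaults to the cfgrib backend
fallback = grib_decode(raw, backend="pygrib")  # alternative backend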
- datavizhub.processing.interpolate_time_steps(data, current_interval_hours: int = 6, new_interval_hours: int = 1)[source]¶
Interpolate a 3D time-sequenced array to a new temporal resolution.
- Parameters:
data (numpy.ndarray) – 3D array with time as the first dimension.
current_interval_hours (int, default=6) – Current spacing in hours between frames in data.
new_interval_hours (int, default=1) – Desired spacing in hours for interpolated frames.
- Returns:
Interpolated array with adjusted time dimension length.
- Return type:
numpy.ndarray
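For example, densifying 6-hourly frames to hourly frames (the array contents are placeholders):
import numpy as np
from datavizhub.processing import interpolate_time_steps

# Five 6-hourly frames on a small global grid
frames = np.random.rand(5, 181, 360)

hourly = interpolate_time_steps(frames, current_interval_hours=6, new_interval_hours=1)
print(hourly.shape)  # more frames along the first (time) dimension, now hourly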
- datavizhub.processing.load_netcdf(path_or_bytes: str | bytes) Iterator[Any] [source]¶
Context manager that opens a NetCDF dataset from a path or bytes.
Uses xarray under the hood. For byte inputs, a temporary file is created. Always closes the dataset and removes any temporary file when the context exits.
- Parameters:
path_or_bytes (str or bytes) – Filesystem path to a NetCDF file or the raw bytes of one.
- Yields:
xarray.Dataset – The opened dataset, valid within the context.
- Raises:
RuntimeError – If the dataset cannot be opened or xarray is missing.
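Both call styles in a brief sketch (the path is a placeholder):
from datavizhub.processing import load_netcdf

# From a filesystem path
with load_netcdf("/path/to/file.nc") as ds:
    print(list(ds.data_vars))

# From raw bytes (e.g. downloaded content); a temporary file is used internally
with open("/path/to/file.nc", "rb") as fh:
    raw = fh.read()
with load_netcdf(raw) as ds:
    print(ds.dims)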
- datavizhub.processing.subset_netcdf(dataset: Any, variables: Iterable[str] | None = None, bbox: Tuple[float, float, float, float] | None = None, time_range: Tuple[Any, Any] | None = None) Any [source]¶
Subset an xarray.Dataset by variables, spatial extent, and time.
Applies up to three filters in order: variable selection, spatial bounding box, and time window. Any filter can be omitted by passing None.
- Parameters:
dataset (xarray.Dataset) – Dataset returned by load_netcdf or other xarray operations.
variables (iterable of str, optional) – Variable names to keep. If None, keep all variables.
bbox (tuple of float, optional) – Spatial bounding box as (min_lon, min_lat, max_lon, max_lat). Requires the dataset to have lat/latitude and lon/longitude coordinates for selection.
time_range (tuple[Any, Any], optional) – Start and end values compatible with xarray time selection, e.g. strings, datetimes, or numpy datetime64.
- Returns:
A new dataset view with the requested subset applied.
- Return type:
xarray.Dataset
- Raises:
ValueError – If bbox is provided but the dataset does not expose recognizable latitude/longitude coordinates for spatial selection.
Examples
Select temperature over a region and time range:
>>> ds = subset_netcdf(
...     ds,
...     variables=["t2m"],
...     bbox=(-110, 30, -90, 40),
...     time_range=("2024-01-01", "2024-01-02"),
... )
- datavizhub.processing.validate_subset(decoded: DecodedGRIB, expected_fields: List[str]) None [source]¶
Validate that a decoded subset contains expected variables and shapes.
This function currently validates variable presence. Shape and timestep validation is backend- and dataset-specific, and can be extended when stricter contracts are needed.
- Parameters:
decoded (DecodedGRIB) – Decoded GRIB container.
expected_fields (list of str) – Variable names that must be present. Regex patterns are allowed.
- Raises:
AssertionError – If one or more variables are missing.
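A quick check that required fields survived a decode or subset step (field names and the path are placeholders):
from datavizhub.processing import grib_decode, validate_subset

with open("/path/to/subset.grib2", "rb") as fh:
    decoded = grib_decode(fh.read())

# Raises AssertionError if any expected variable (exact name or regex) is missing
validate_subset(decoded, expected_fields=["t2m", r"^(u|v)10$"])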
Modules¶
- class datavizhub.processing.base.DataProcessor[source]¶
Bases:
ABC
Abstract base for data processors in the DataVizHub pipeline.
This base class defines the minimal contract that processing components must fulfill when transforming raw inputs (typically acquired by datavizhub.acquisition) into outputs consumable by downstream visualization and publishing steps.
Processors standardize three phases:
- load(input_source): prepare or ingest the input source
- process(**kwargs): perform the core processing/transformation
- save(output_path=None): persist results if applicable
An optional validate() hook is provided for input checking.
- Parameters:
... – Concrete processors define their own initialization parameters according to their domain (e.g., input directories, output paths, catalog URLs). There is no common constructor on the base class.
Examples
Use a processor in a pipeline with an acquirer:
from datavizhub.acquisition.ftp_manager import FTPManager
from datavizhub.processing.video_processor import VideoProcessor

# Acquire frames
with FTPManager(host="ftp.example.com") as ftp:
    ftp.fetch("/pub/frames/img_0001.png", "./frames/img_0001.png")
    # ...fetch remaining frames...

# Process frames into a video
vp = VideoProcessor(input_directory="./frames", output_file="./out/movie.mp4")
vp.load("./frames")
vp.process()
vp.save("./out/movie.mp4")
- property features: set[str]¶
Set of feature strings supported by this processor.
- Returns:
The class-level FEATURES set if defined, else an empty set.
- Return type:
set of str
- abstract load(input_source: Any) None [source]¶
Load or prepare the input source for processing.
- Parameters:
input_source (Any) – Location or handle of input data (e.g., directory path, file path, URL, opened handle). Interpretation is processor-specific.
- abstract process(**kwargs: Any) Any [source]¶
Execute core processing and return results if applicable.
- Parameters:
**kwargs (Any) – Processor-specific options that influence the run (e.g., flags, thresholds, file paths).
- Returns:
A result object if the processor yields one (e.g., arrays), otherwise None.
- Return type:
Any
- abstract save(output_path: str | None = None) str | None [source]¶
Persist results and return the output path.
- Parameters:
output_path (str, optional) – Destination path to write results. If omitted, implementations may use their configured default.
- Returns:
The final output path on success; None if nothing was written.
- Return type:
str or None
- class datavizhub.processing.video_processor.VideoProcessor(input_directory: str, output_file: str, basemap: str | None = None, fps: int = 30)[source]¶
Bases:
DataProcessor
Create videos from image sequences via FFmpeg.
Processes image frames into a cohesive video file. Optionally overlays a static basemap beneath the frames. Requires system FFmpeg and FFprobe to be installed and accessible on PATH.
- Parameters:
input_directory (str) – Directory where input images are stored.
output_file (str) – Destination path for the rendered video file.
basemap (str, optional) – Optional background image path to overlay beneath frames.
Examples
Render a video from PNG frames:
from datavizhub.processing.video_processor import VideoProcessor

vp = VideoProcessor(input_directory="./frames", output_file="./out.mp4")
vp.load("./frames")
vp.process()
vp.save("./out.mp4")
- FEATURES = {'load', 'process', 'save', 'validate'}¶
- check_ffmpeg_installed() bool [source]¶
Check that FFmpeg and FFprobe are available on PATH.
- Returns:
True when both tools return version info without error.
- Return type:
bool
- load(input_source: Any) None [source]¶
Set or update the input directory.
- Parameters:
input_source (Any) – Path to the directory containing input frames. Converted to str.
- process(**kwargs: Any) str | None [source]¶
Compile image frames into a video.
- Returns:
The output file path on success; None if processing failed.
- Return type:
str or None
- process_video(*, fps: int | None = None) bool [source]¶
Build the video using FFmpeg from frames in input_directory.
Notes
Uses glob pattern matching on the first file’s extension to include all frames with that extension in the directory.
When basemap is provided, overlays frames on top of the basemap.
Logs FFmpeg output lines at debug level.
- save(output_path: str | None = None) str | None [source]¶
Finalize the configured output path.
- Parameters:
output_path (str, optional) – If provided, updates the configured output path before returning it.
- Returns:
The output path the processor will write or has written to.
- Return type:
str or None
- validate() bool [source]¶
Check FFmpeg/FFprobe availability.
- Returns:
True if FFmpeg and FFprobe executables are available.
- Return type:
bool
- validate_frame_count(video_file: str, expected_frame_count: int) bool [source]¶
Validate the total number of frames in a video.
- Parameters:
video_file (str) – Path to the video file to inspect.
expected_frame_count (int) – Expected total number of frames.
- Returns:
True when expected frame count matches the probed value.
- Return type:
bool
- class datavizhub.processing.grib_data_processor.GRIBDataProcessor(catalog_url: str | None = None)[source]¶
Bases:
DataProcessor
Process GRIB data and THREDDS catalogs.
Reads GRIB files into NumPy arrays and provides utilities for working with THREDDS data catalogs. Supports simple longitude shifts for global grids and helper functions for stacking 2D slices over time.
- Parameters:
catalog_url (str, optional) – Optional THREDDS catalog URL for dataset listing.
Examples
Read a GRIB file into arrays:
from datavizhub.processing.grib_data_processor import GRIBDataProcessor

gp = GRIBDataProcessor()
data, dates = gp.process(grib_file_path="/path/to/file.grib2", shift_180=True)
- FEATURES = {'load', 'process', 'save', 'validate'}¶
- static combine_into_3d_array(directory: str, file_pattern: str)[source]¶
Combine 2D grids from multiple files into a 3D array.
- Parameters:
directory (str) – Root directory to search.
file_pattern (str) – Glob pattern to match files (used with
Path.rglob
).
- Returns:
A 3D array stacking the first returned field across files.
- Return type:
numpy.ndarray
- list_datasets() None [source]¶
List datasets from a THREDDS data server catalog configured by catalog_url.
- load(input_source: Any) None [source]¶
Load or set a THREDDS catalog URL.
- Parameters:
input_source (Any) – Catalog URL string (converted with str()).
- static load_data_from_file(file_path: str, short_name: str, shift_180: bool = False)[source]¶
Load one field by short_name, returning data and coordinates.
- Parameters:
file_path (str) – Path to the GRIB file.
short_name (str) – GRIB message shortName to select (e.g., "tc_mdens").
shift_180 (bool, default=False) – If True, shift longitudes and roll the data accordingly.
- Returns:
(data, lats, lons) arrays when found; otherwise (None, None, None).
- Return type:
tuple
- process(**kwargs: Any)[source]¶
Process a GRIB file into arrays and timestamps.
- Parameters:
grib_file_path (str, optional) – Path to a GRIB file to parse.
shift_180 (bool, default=False) – If True, roll the longitudes by 180 degrees for global alignment.
- Returns:
(data_list, dates) when grib_file_path is provided, where data_list is a list of 2D arrays; otherwise None.
- Return type:
tuple or None
- static process_grib_files_wgrib2(grib_dir: str, command: list[str], output_file: str) None [source]¶
Invoke an external wgrib2 command for each file in a directory.
- Parameters:
grib_dir (str) – Directory containing input GRIB files.
command (list of str) – Command and arguments to execute (wgrib2 invocation).
output_file (str) – Path to write outputs to, according to the command.
- static read_grib_file(file_path: str) None [source]¶
Print basic metadata for each GRIB message.
- Parameters:
file_path (str) – Path to a GRIB file to inspect.
- static read_grib_to_numpy(grib_file_path: str, shift_180: bool = False) Tuple[list | None, list | None] [source]¶
Convert a GRIB file into a list of 2D arrays and dates.
- Parameters:
grib_file_path (str) – Path to a GRIB file to read.
shift_180 (bool, default=False) – If True, roll the longitudes by half the width for -180/180 alignment.
- Returns:
(data_list, dates)
on success;(None, None)
on error.- Return type:
tuple of list or (None, None)
- save(output_path: str | None = None) str | None [source]¶
Save processed data to a NumPy .npy file if available.
- Parameters:
output_path (str, optional) – Destination filename to write array data to.
- Returns:
The path written to, or None if there is no data.
- Return type:
str or None
- static shift_data_180(data: numpy.ndarray, lons: numpy.ndarray) numpy.ndarray [source]¶
Shift global longitude grid by 180 degrees.
- Parameters:
data (numpy.ndarray) – 2D array of gridded values with longitudes varying along axis=1.
lons (numpy.ndarray) – 2D array of longitudes corresponding to data (unused for the basic roll operation but included for interface symmetry).
- Returns:
Rolled data such that a 0–360 grid is re-aligned to the -180..180 convention.
- Return type:
numpy.ndarray
- datavizhub.processing.grib_data_processor.interpolate_time_steps(data, current_interval_hours: int = 6, new_interval_hours: int = 1)[source]¶
Interpolate a 3D time-sequenced array to a new temporal resolution.
- Parameters:
data (numpy.ndarray) – 3D array with time as the first dimension.
current_interval_hours (int, default=6) – Current spacing in hours between frames in data.
new_interval_hours (int, default=1) – Desired spacing in hours for interpolated frames.
- Returns:
Interpolated array with adjusted time dimension length.
- Return type:
numpy.ndarray