API#

The entire Polaris library is predicated on the existence of a folder that contains all the data for a particular model, including input data and model results.

class polaris.project.polaris.Polaris(project_folder=None, config_file=None)#

Bases: object

Python interface for all things Polaris

Running Polaris models with this interface is trivial:

model = Polaris("D:/src/argonne/MODELS/bloomington")
model.run()

__init__(project_folder=None, config_file=None)#
classmethod add_license(license_file: PathLike)#

Copies the license file to a place where the Polaris binary can find it

Parameters:

license_file – Path to the license file

from polaris import Polaris
Polaris.add_license("path/to/license.txt")
classmethod from_dir(project_folder, config_file=None)#
classmethod from_config_file(config_file)#
classmethod build_from_git(model_dir, city, db_name=None, overwrite=False, inplace=False, branch='main', git_dir=None, scenario_name=None)#

Clones a Polaris project from git and builds it into a runnable model

from polaris import Polaris
Polaris.from_dir(Polaris.build_from_git("e:/models/from_git", "atlanta"))
classmethod restore(data_dir, city=None, dbtype=None, upgrade=False, overwrite=False, scenario_name=None)#

Builds a Polaris project from a directory into a runnable model

from polaris import Polaris
Polaris.restore("e:/models/from_git/Atlanta", "Atlanta")
property is_open: bool#
open(model_path: PathLike, config_file: str | None = None) None#

Opens a Polaris model in memory. When a config file is provided, the model tries to load it

Parameters:
  • model_path – Complete path for the folder containing the Polaris model.

  • config_file – Optional, name of the convergence control YAML we want to work with. Defaults to polaris.yaml

from polaris import Polaris

model = Polaris()
model.open('path/to/model', 'polaris_modified.yaml')
property model_path: Path#

Path to the loaded project

property supply_file: Path#

Path to the supply file in project

property demand_file: Path#

Path to the demand file in project

property freight_file: Path#

Path to the freight file in project

property result_file: Path#

Path to the result sqlite file in project

property result_h5_file: Path#

Path to the result h5 file in project

property network#
property demand#
property freight#
property latest_output_dir: Path#
property router#
set_router_lib(router_lib)#
property skims#
upgrade(max_migration: str | None = None, force_migrations: List[str] | None = None)#

Upgrade the underlying databases to the latest / greatest schema version.

Parameters:

max_migration – a string (date) defining the latest migration id that should be applied. Useful if working with an older version of the POLARIS executable which isn’t compatible with the latest schema.

model.upgrade("202402")  # only apply upgrades (migrations) from before February 2024

run(**kwargs) None#
close()#

Eliminates all data from memory

model.close()

reset(include_outputs=False)#
polaris.project.polaris.migrate_yaml(pth: Path)#

Since all three submodules are fairly independent, we list them below.

Network module#

The network module is predicated on the existence of a single network project that will be under analysis/modification, and therefore all submodules responsible for different capabilities will be generated from this network object, below.

class polaris.network.network.Network#

Bases: object

Polaris Network Class

# We can open the network just to figure out its projection
from polaris.network.network import Network
n = Network()
n.open(source)

# We get the projection used in this project
srid = n.srid

# Or we can also get the checker for this network
checker = n.checker

# In case we ran some query while with the database open, we can commit it
n.commit()

# We can create a new network file from scratch as well
new_net = Network()

new_net.srid = srid # We use the SRID from the known network, for example

# if we don't have Spatialite on our system path environment variables, we can
# add it manually before attempting to create a new database
new_net.add_spatialite('path/to/spatialite/folder')

# If the second parameter is True, an SQLite file pre-populated with Spatialite extensions is used
new_net.new('path/to/new/file', True)


# Closing the connection is as simple as
n.close()
__init__()#

Instantiates the network

static from_file(network_file: PathLike, run_consistency=False)#
static create(network_file: PathLike, srid: int, jumpstart: bool = False) None#

Creates a new empty network file

Args:

network_file (str): Full path to the network file to be created.

srid (int): Spatial reference ID (projection) for the new network.

jumpstart (bool): Copies base sql already loaded with spatialite extension. It saves a few seconds of runtime.
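A minimal sketch of creating an empty network file; the path and SRID below are illustrative, not model-specific values:

```python
from polaris.network.network import Network

# Illustrative path and SRID (EPSG:26916, UTM zone 16N)
Network.create('path/to/new-Supply.sqlite', srid=26916, jumpstart=True)
```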

new(network_file: str, jumpstart=False) None#
open(network_file: PathLike, run_consistency=False)#

Opens project for editing/querying

Args:

network_file (str): Full path to the network file to be opened.

upgrade(redo_triggers=True) Network#

Updates the network to the latest version available

close(clear_issues=False)#

Closes database connection

full_rebuild()#

Rebuilds all network components that can be rebuilt automatically. Designed to be used when building the network from scratch or making changes to the network in bulk. This method runs the following methods in order:

  • Rebuilds the location_links table

  • Rebuilds the location_parking table

  • Rebuilds intersections, where signalized intersections are sourced from OSM and all stop signs are added

  • Rebuilds the active networks

  • Runs full geo-consistency

  • Deletes all records from the editing table
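A usage sketch of the steps above (the supply file path is illustrative):

```python
from polaris.network.network import Network

net = Network.from_file('path/to/city-Supply.sqlite')
net.full_rebuild()  # runs the rebuild steps listed above, in order
net.close()
```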

property tools: Tools#

Tools for general manipulation of the network

property transit: Transit#

Transit manipulation class

property active#

Active transport network creation class

property checker#

Network checker class

property populate#

Network population class

property diagnostics#

Network diagnostics class

property geo_consistency#

Geo-consistency analysis class

property open_data: OpenData#
property geotools#
property tables: DataTableAccess#
property ie: ImportExport#

Network Import-Export class

get_location(location_id: int) Location#

Location object

get_intersection(node: int) Intersection#

Network intersection class

clear_log()#

Clears log file

static has_edit_table(path_to_file)#


Due to the complexity of some of the supply model components, this page only lists the package submodules; for further detail on manipulating traffic and public transport components, go to the dedicated pages listed in the left sidebar.

Main submodules#

The PolarisNetwork has 6 submodules that bring critical features to the network, which are organized as follows.

Accessing these submodules is as simple as opening the network and using method calls dedicated to returning class instances of each submodule already properly configured for the network at hand.

import os
from polaris.network.network import Network

net_file = 'D:/src/argonne/chicago2018-Supply.sqlite'

net = Network()
net.open(net_file)

checker = net.checker
walk_nt = net.active
transit = net.transit
geo_con = net.geo_consistency
consistency = net.consistency

Analyze submodule#

Key Performance Indicators#

The KPI submodules are designed to compile summary data for a range of metrics useful when evaluating results and comparing across iterations and across model runs.

The ResultKPIs class allows all metrics to be computed and cached for each simulation run, so it is convenient to compute them once the simulation is over for easy future access.

class polaris.analyze.result_kpis.ResultKPIs(inputs: PolarisInputs, cache_dir: Path, population_scale_factor: float, include_kpis: Tuple[KPITag, ...], exclude_kpis: Tuple[KPITag, ...] = (KPITag.HIGH_MEMORY, KPITag.BROKEN))

Bases: object

This class provides an easy way to extract relevant metrics for a single simulation run of POLARIS. The easiest way to generate an instance is via the factory method from_iteration which takes the path to the outputs of a simulation run (or a ConvergenceIteration())

from polaris.analyze.result_kpis import ResultKPIs

kpis = ResultKPIs.from_iteration(ref_project_dir / f"{city}_iteration_2")

Metric comparison plots can then be generated in a notebook using:

results = KpiComparator()
results.add_run(kpis, 'an-arbitrary-label')
results.plot_mode_share()
results.plot_vmt()
results.plot_vmt_by_link_type()

Any number of runs can be added using add_run up to the limit of readability on the generated plots.

result_time_step = 3600
__init__(inputs: PolarisInputs, cache_dir: Path, population_scale_factor: float, include_kpis: Tuple[KPITag, ...], exclude_kpis: Tuple[KPITag, ...] = (KPITag.HIGH_MEMORY, KPITag.BROKEN))
classmethod from_iteration(iteration: ConvergenceIteration, **kwargs)

Create a KPI object from a ConvergenceIteration object.

Parameters:

iteration – Iteration object for which a KPI object is created

classmethod from_dir(iteration_dir: Path, **kwargs)

Create a KPI object from a given directory.

Parameters:

iteration_dir – Iteration directory for which a KPI object is created

classmethod from_args(files: PolarisInputs, iteration_dir: Path, cache_name: str = 'kpi.cache', clear_cache=False, exit_if_no_cache=False, population_scale_factor=None, include_kpis: Tuple[KPITag, ...] = (KPITag.SYSTEM, KPITag.POPULATION, KPITag.ACTIVITIES, KPITag.TRIPS, KPITag.TNC, KPITag.TRAFFIC, KPITag.TRANSIT, KPITag.VEHICLES, KPITag.CALIBRATION, KPITag.VALIDATION, KPITag.CONVERGENCE, KPITag.GLOBAL, KPITag.PARKING, KPITag.FREIGHT), exclude_kpis: Tuple[KPITag, ...] = (KPITag.HIGH_MEMORY, KPITag.BROKEN))

Create a KPI object using a detailed set of arguments.

cache_all_available_metrics(verbose=False, metrics_to_cache=None, skip_cache=False)

Cache available metrics into a KPI cache directory

Parameters:
  • verbose – Optional, enable verbose logging

  • metrics_to_cache – Optional, a list of metrics to cache if not all available metrics

  • skip_cache – Optional, skip using the cache directory and generate metrics
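A caching workflow sketch; the iteration directory path is illustrative:

```python
from pathlib import Path
from polaris.analyze.result_kpis import ResultKPIs

kpis = ResultKPIs.from_dir(Path('path/to/city_iteration_2'))
kpis.cache_all_available_metrics(verbose=True)  # compute and store every available metric
kpis.close()
```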

classmethod available_metrics()

Return a dictionary of available metrics that can be computed

close()

Close access to the KPI cache database

get_kpi_value(kpi_name)

Compute a metric that is requested, if defined

Parameters:

kpi_name – Name of the metric to be retrieved

get_cached_kpi_value(kpi_name, skip_cache=False, force_cache=False)

Get the cached value of a metric

Parameters:
  • kpi_name – Name of the metric to be retrieved

  • skip_cache – Optional, skip cache and compute the metric before retrieval

  • force_cache – Optional, force only the use of a stored value and skip if it does not exist
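A retrieval sketch; the directory path and the metric name 'mode_shares' are illustrative assumptions, not guaranteed names:

```python
from pathlib import Path
from polaris.analyze.result_kpis import ResultKPIs

kpis = ResultKPIs.from_dir(Path('path/to/city_iteration_2'))
# 'mode_shares' is an assumed metric name, shown for illustration only
if kpis.has_cached_kpi('mode_shares'):
    shares = kpis.get_cached_kpi_value('mode_shares')
else:
    shares = kpis.get_kpi_value('mode_shares')
```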

has_cached_kpi(kpi_name)

Check if KPI is cached

Parameters:

kpi_name – String name of metric to check if cached

cached_metrics()

Obtain a set of cached metrics for the KPI object

metric_summary()

Metric that captures the summary of the iteration such as number of vehicles in network, run time, memory usage.

metric_file_sizes()

Metric provides the size of generated SQLite and H5 files

metric_polaris_exe()

Metric provides the GIT branch, SHA, build date, and URL of the POLARIS executable used to generate the outputs

metric_gaps()

Metric provides the routing gaps from the run

static one_value(conn, query, default=0)

Provide one value from a SQL query evaluated on a connection or return a default

Parameters:
  • conn – SQLite connection to a database

  • query – String SQL query to be evaluated

  • default – Optional, default value returned if query evaluates to None
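The behaviour described can be sketched with the standard-library sqlite3 module; this is an illustrative reimplementation, not the POLARIS source:

```python
import sqlite3

def one_value(conn, query, default=0):
    # Evaluate a single-value query; fall back to the default when the
    # result set is empty or the value is NULL
    row = conn.execute(query).fetchone()
    return default if row is None or row[0] is None else row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trip (distance REAL)")

# SUM over an empty table yields NULL, so the default is returned
empty_total = one_value(conn, "SELECT SUM(distance) FROM trip")

conn.execute("INSERT INTO trip VALUES (1.5), (2.5)")
total = one_value(conn, "SELECT SUM(distance) FROM trip")
```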

metric_population()

Metric provides a summary of the simulated population

metric_num_persons_by_age_band_5()

Metric provides the number of persons by 5-year age bands

metric_num_persons_by_age_band_10()

Metric provides the number of persons by 10-year age bands

metric_num_hh()

Metric provides number of households simulated in iteration

metric_num_hh_by_hh_size()

Metric provides number of households by household size

metric_tts()

Metric provides total travel time for all trips in simulation

metric_distance_by_act_type()

Metric provides average trip distance by activity type

metric_planned_modes()

Metric provides summary of planned modes BEFORE dynamic traffic assignment

metric_executed_modes()

Metric provides summary of executed modes AFTER dynamic traffic assignment

metric_mode_shares()

Metric provides the aggregate mode shares by trip purposes of HBW, HBO, NHB, and total

metric_executed_activity_mode_share_by_income()

Metric provides the executed mode share by income quintiles

metric_activity_start_distribution()

Metric provides the activity start distribution of all activity purposes in the simulation

metric_activity_rate_distribution()

Metric provides the activity rate distribution of all activity purposes in the simulation

metric_activity_duration_distribution()

Metric provides the activity duration distribution for all activity purposes in the simulation

metric_vmt_vht()

Metric provides the aggregate vehicle miles traveled (VMT), vehicle hours traveled (VHT), average speed, and count by all modes in simulation

metric_ev_charging()

Metric provides an aggregate summary of electric vehicle (EV) charging such as energy, waiting time, charging time, cost of charging, and number of charging events by EV type, charging station type, and whether residential charging was available

metric_ev_consumption()

Metric provides electric vehicle (EV) consumption while in operation such as distance, energy used, and average Wh per mile by trip mode

metric_vmt_vht_by_link()

Metric provides the vehicle miles traveled (VMT), vehicle hours traveled (VHT), and average speed in AM, PM, and Off-Peak periods by link type

metric_flow_density_fd()

Metric provides the fundamental diagram by link types in the simulation

metric_activity_distances()

Metric provides activity distances and travel times by activity type

metric_vehicle_technology(**kwargs)

Metric provides the summary of vehicle technology owned by various POLARIS agents (such as households, freight operators, TNC, and transit)

metric_tnc_request_stats()

Metric provides a summary of all TNC requests made during simulation such as time to assign, pickup time, dropoff time, travel time, and travel distance, grouped by TNC operator, service mode such as TAXI and First-Mile-Last-Mile (FMLM), and assignment status

metric_tnc_trip_stats()

Metric provides a summary of all TNC vehicle trips made during simulation such as vehicle miles traveled (VMT), vehicle hours traveled (VHT) and grouped by TNC operator, various legs of operation (pickup, dropoff, charging), and whether vehicle was occupied or unoccupied

metric_tnc_stats()

Metric provides the aggregate TNC operator level stats such as trips served, revenue, charging trips by operator

metric_avo_by_tnc_operator()

Metric provides the average vehicle occupancy (AVO) achieved by each TNC operator when serving the demand in simulation

metric_road_pricing()

Metric provides the tolls collected from various vehicle sources (such as personally-owned, TNC, freight)

metric_transit_boardings()

Metric provides the boardings and alightings by each transit agency and GTFS mode (such as bus, rail, metro, etc.)

metric_transit_experience()

Metric provides the aggregate transit experience as processed from multimodal paths in the simulation to provide travel time, transfers, walk time, bike time, cost, etc.

network_gaps_by_x(x)

Utility function used by metrics to aggregate network gaps by x

Parameters:

x – aggregation criteria such as ‘link_type’ or ‘hour’

metric_network_gaps_by_link_type()

Metric provides the network gaps from routing aggregated by link type

metric_network_gaps_by_hour()

Metric provides the network gaps from routing aggregated by time of day (in hours)

metric_skim_stats()

Metric provides the skim statistics for highway and transit skims generated in the iteration such as min, max, and average travel time, cost, etc.

metric_count_validation()

Metric provides the data comparing simulation link counts to those observed in data

metric_rmse_vs_observed()

Metric provides the root mean squared errors for calibration procedures such as activity, mode, boardings, trip distance, and trip start time

metric_planned_rmse_vs_observed()

Metric provides the root mean squared error for calibration procedures between planned and executed values of activity counts, mode shares, and trip start time choice

metric_calibration_act_gen()

Metric provides calibration comparison between simulated activities and target activities by person type and activity type

metric_calibration_act_gen_planned()

Metric provides calibration comparison between planned number of activities and target activities by person type and activity type - primarily used for debugging

metric_calibration_mode_share()

Metric provides calibration comparison of executed mode shares by trip purpose in simulation and target for the region

metric_calibration_mode_share_planned()

Metric provides calibration comparison of planned mode shares by trip purpose in simulation and target for the region

metric_calibration_timing()

Metric provides calibration comparison of trip start times by activity purpose in simulation and target for the region

metric_calibration_timing_planned()

Metric provides calibration comparison of planned trip start times by activity purpose in simulation and target for the region - primarily used for debugging

metric_calibration_destination()

Metric provides the calibration comparison of trip distances by activity type in simulation and that found in data, and also shows the validation travel times achieved in simulation compared to that found in data

metric_calibration_boardings()

Metric provides the calibration comparison of boarding counts by transit agency and transit mode

metric_validation_speed()

Metric provides the validation of speeds simulated by link type and time period while comparing to data provided

metric_trip_length_distribution()

Metric provides an aggregate distribution of trip distances by mode and trip type

metric_trip_costs()

Metric provides the cost experienced when making trips in the simulation such as monetary (operating), toll, and time costs by mode and trip type

metric_activity_start_time_distributions()

Metric provides the activity start time distribution of vehicle trips in simulation

metric_planned_activity_start_time_distributions()

Metric provides the planned activity start time distribution for all modes

metric_county_to_county_demand()

Metric provides a matrix of county to county demand aggregated using the County table in Supply

metric_average_vehicle_ownership()

Metric retrieves the average household vehicle ownership from the synthetic population in the output

metric_traffic_cumulative_gap()

BROKEN: Metric provides the cumulative gap

metric_spatial_trips()

Metric provides trips by origin and destination across various aggregation for spatial mapping

metric_sov_parking_access_time()

Metric provides the parking access time simulated for personally-owned auto-based trips

metric_parking_share()

Metric provides the share of parking simulated between garage and on-street parking locations

metric_escooter_utilization_at_garage()

Metric provides the e-scooter utilization when they are available at garages to use after parking a vehicle

metric_parking_utilization()

Metric provides the temporal utilization of parking for various parking types separated by area types

metric_garage_access_w_escooters()

Metric provides e-scooter utilization, demand, and access time observed at garages

metric_parking_stats()

Metric provides aggregate parking statistics such as revenue, demand, and number of e-scooter trips made by parking type and time of day

metric_parking_delay_stats()

Metric provides the aggregate delay statistics observed around parking infrastructure

metric_freight_mode_share()

Metric provides the aggregate freight mode share by shipment mode

metric_freight_shipping_cost()

Metric provides the total shipment costs by shipment mode

metric_freight_mode_trade_type(**kwargs)

Metric provides the freight movements by trade type

metric_freight_trip()

Metric provides freight trip summary

metric_freight_distance_distribution()

Metric provides freight distance distribution by freight mode, trip purpose, and trip type

The KPI comparator leverages KPIs produced by the ResultKPIs class and produces a series of useful plots, which are also accessible through the QPolaris interface.

class polaris.analyze.kpi_comparator.KpiComparator

Bases: object

This class provides an easy way to group together multiple runs of POLARIS and compare their outputs. Run KPIs are added along with a string-based name which is used as the label for that run in any subsequent plots which are generated.

from polaris.analyze.kpi_comparator import KpiComparator

results = KpiComparator()
results.add_run(ResultKPIs.from_iteration(ref_project_dir / f"{city}_iteration_2"), 'REF_iteration_2')
results.add_run(ResultKPIs.from_iteration(eval_project_dir / f"{city}_iteration_2"), 'EVAL_iteration_2')

Metric comparison plots can then be generated in a notebook using:

results.plot_mode_share()
results.plot_vmt()
results.plot_vmt_by_link_type()

Any number of runs can be added using add_run up to the limit of readability on the generated plots.

The object can also be used to generate a set of csv files for input into Excel (if you really have to use Excel):

results.dump_to_csvs(output_dir = "my_csv_dump_dir")
__init__()
add_run(kpi: ResultKPIs, run_id: str)
has_run(run_id)
dump_to_csvs(output_dir, metrics_to_dump=None, **kwargs)
plot_everything(**kwargs)
classmethod available_plots()
plot_mode_share(**kwargs)
plot_population(**kwargs)
plot_externals(**kwargs)
plot_congestion_pricing(**kwargs)
plot_transit(**kwargs)
add_iter(x)
across_iterations(cols, **kwargs)
plot_act_dist(act_type: str | None = None, **kwargs)
plot_vmt(**kwargs)
plot_vehicle_connectivity(**kwargs)
plot_vmt_by_link_type(**kwargs)
plot_fundamental_diagram(**kwargs)
plot_gaps(type='defult', **kwargs)
static plot_multiple_gaps(kpi_results)
plot_congestion_removal(**kwargs)
plot_trips_with_path(**kwargs)
plot_pax_in_network(**kwargs)
plot_veh_in_network(**kwargs)
plot_freight_in_network(**kwargs)
plot_cpu_mem(**kwargs)
plot_polaris_exe(**kwargs)
plot_network_gaps(**kwargs)
plot_skim_stats(show_min_max=False, **kwargs)
plot_trip_length_distributions(max_dist=None, modes=None, types=None, use_imperial=True, **kwargs)
plot_activity_start_time_distributions(**kwargs)
plot_planned_activity_start_time_distributions(**kwargs)
plot_tnc_vmt_vht(**kwargs)
plot_tnc_demand(**kwargs)
plot_tnc_stats(**kwargs)
plot_rmse_vs_observed(**kwargs)
plot_planned_rmse_vs_observed(**kwargs)
plot_calibration_for_activity_generation(**kwargs)
plot_planned_calibration_for_activity_generation_planned(**kwargs)
plot_calibration_for_mode_share(**kwargs)
plot_calibration_for_boardings(**kwargs)
plot_validation_for_speeds(**kwargs)
plot_calibration_timing(**kwargs)
plot_calibration_destination(**kwargs)
plot_count_validation(**kwargs)
plot_parking_demand(**kwargs)
plot_parking_access(**kwargs)
plot_parking_utilization(type_filter=None, area_filter=None, **kwargs)
plot_parking_revenue(type_filter=None, **kwargs)
plot_escooter_utilization_at_garage(**kwargs)
plot_parking_delay_stats(type_filter=None, areatype_filter=None, **kwargs)
plot_garage_parking_share(**kwargs)
plot_freight_distance_distribution_by_mode(mode_filter=None, **kwargs)
plot_freight_shipment_count_share_by_mode(mode_filter=None, **kwargs)
plot_trip_count_by_attributes(**kwargs)
plot_truck_vht_by_mode_trade_type(mode_filter=None, **kwargs)
plot_truck_vmt_by_mode_trade_type(mode_filter=None, **kwargs)

Spatial metrics#

This module is the basis for all the transit metrics mapping in QPolaris. Note, however, that a couple of summary tables are created in the demand database the first time demand results are computed, which may be somewhat time-consuming.

class polaris.analyze.spatial_metrics.SpatialMetrics(inputs: PolarisInputs)

Bases: object

Spatial metrics class that provides access to metrics associated with spatially-enabled elements of the model.

from polaris.runs.polaris_inputs import PolarisInputs
from polaris.analyze.spatial_metrics import SpatialMetrics

inputs = PolarisInputs.from_dir('path/to/model_dir')

metrics_object = SpatialMetrics(inputs)
__init__(inputs: PolarisInputs)
classmethod from_iteration(iteration: ConvergenceIteration)

Create a SpatialMetrics object from a ConvergenceIteration object.

classmethod from_dir(iteration_dir: Path)

Create a SpatialMetrics object from a given directory.

transit_supply_metrics() TransitSupplyMetrics

Returns an instance of TransitSupplyMetrics

transit_demand_metrics() TransitDemandMetrics

Returns an instance of TransitDemandMetrics

tnc_metrics() TNCMetrics

Returns an instance of TNCMetrics
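Retrieving the metric helpers can be sketched as follows (the iteration directory path is illustrative):

```python
from pathlib import Path
from polaris.analyze.spatial_metrics import SpatialMetrics

sm = SpatialMetrics.from_dir(Path('path/to/city_iteration_2'))
transit_supply = sm.transit_supply_metrics()  # supply-side transit metrics
transit_demand = sm.transit_demand_metrics()  # demand-side transit metrics
tnc = sm.tnc_metrics()                        # TNC metrics
```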

Among these, the transit analysis submodules are designed to compile summary data for stops, patterns and routes, both for the supply used in the model run and for the results recorded in the demand database.

Other metrics#

These methods allow for retrieval of path and activity data directly from the demand database for more detailed analysis.

class polaris.analyze.path_metrics.PathMetrics(demand_file: Path, h5_file: Path)#

Bases: object

Loads all data required for the computation of metrics on Paths.

__init__(demand_file: Path, h5_file: Path)#
Parameters:
  • demand_file – Path to the demand file we want to compute metrics for

  • h5_file – Path to the result HDF5 file

property data: DataFrame#
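A usage sketch with illustrative file paths:

```python
from pathlib import Path
from polaris.analyze.path_metrics import PathMetrics

pm = PathMetrics(demand_file=Path('path/to/city-Demand.sqlite'),
                 h5_file=Path('path/to/city-Result.h5'))
paths = pm.data  # DataFrame with the loaded path data
```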
class polaris.analyze.activity_metrics.ActivityMetrics(supply_file: Path, demand_file: Path)#

Bases: DemandTableMetrics

Loads all data required for the computation of metrics on activities.

Time filtering defaults from_time to instant zero when it is not provided and to_time to the end of the simulation when it is not provided

__init__(supply_file: Path, demand_file: Path)#
Parameters:
  • supply_file – Path to the supply file corresponding to the demand file we will compute metrics for

  • demand_file – Path to the demand file we want to compute metrics for

get_trips(aggregation='zone') DataFrame#

Queries all trips for the current set of filters and for the aggregation of choice

Parameters:

aggregation – Filter to see either location or zone. Default is "zone"

Returns:

Statistics DataFrame

set_mode(mode: str, mode_share=False)#
set_start_hour(start_hour: int)#
set_end_hour(end_hour: int)#

The hour for the end (INCLUDED) of the period.

set_time_period(time_period: str)#

Sets the time period we want to retrieve statistics about

property modes#
property types#
property data: DataFrame#
property locations: DataFrame#
vehicle_trip_matrix(from_start_time: float, to_start_time: float)#

Returns the expected trip matrix for the UNIVERSE of trips starting between from_start_time and to_start_time, according to the results of the ABM seen in the Activities table
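A filtering workflow sketch; the file paths and the mode label 'SOV' are illustrative assumptions:

```python
from pathlib import Path
from polaris.analyze.activity_metrics import ActivityMetrics

am = ActivityMetrics(supply_file=Path('path/to/city-Supply.sqlite'),
                     demand_file=Path('path/to/city-Demand.sqlite'))
am.set_mode('SOV')    # 'SOV' is an example mode label
am.set_start_hour(7)
am.set_end_hour(8)    # the end hour is included in the period
trips = am.get_trips(aggregation='zone')
```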

Skim module#

The skim module is designed to provide convenient access to the Skim matrices produced by Polaris, particularly for visualization in QGIS and exporting to OMX.

class polaris.skims.highway.highway_skim.HighwaySkim(filename: PathLike | None = None)#

Bases: SkimBase

Polaris Skims class

from polaris.skims.highway.highway_skim import HighwaySkim

skims = HighwaySkim()

# Load skims for highway
skims.open('path/to/hwy/skims')

# accessing skims is easy
m1 = skims.time[1440] # time for the interval 1440
m2 = skims.distance[720] # distance for the interval 720
m3 = skims.cost[240] # cost for the interval 240

# We can also access skims like we do for PT
time_morning = skims.get_skims(interval=240, metric="time")
prefix = 'highway'#
__init__(filename: PathLike | None = None)#
open(path_to_file: PathLike)#

Loads the highway skim data

Args:

path_to_file (str): Full file path to the highway skim

create_empty(intervals: list, zones: int)#
Creates a new skim data cube for a given set of intervals and number of zones.

All matrices are filled with zeros

Args:

intervals (list): List of all intervals this skim file should have

zones (int): Number of zones for this skim

get_skims(interval=None, metric=None, **kwargs)#

Gets skim data for specified mode/interval/metric. These filters are not, however, required. If one or more parameters is not provided, a dictionary (or nested dictionaries) will be returned

Args:

interval Optional (int): The time interval of interest

metric Optional (str): Metric

remove_interval(interval: int)#

Removes one interval from this skim. Operation happens in memory only. It does NOT alter skim on disk

Args:

interval (int): Interval to remove from the skim

add_interval(interval: int, copy_interval=None)#

Adds a new interval to the skim matrix

Args:

interval (int): Interval to be added to the skim data cube

copy_interval Optional (int): Interval to be copied into the new interval. Arrays of zeros are added if not provided
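Editing intervals in memory and persisting the result can be sketched as follows (file paths are illustrative):

```python
from polaris.skims.highway.highway_skim import HighwaySkim

skims = HighwaySkim.from_file('path/to/highway_skims')
skims.add_interval(360, copy_interval=240)  # seed the new interval from interval 240
skims.remove_interval(240)                  # in memory only; the file on disk is untouched
skims.write_to_file('path/to/edited_skims')
```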

write_to_file(path_to_file: PathLike)#
convert_zoning_systems(source_zoning: GeoDataFrame, target_zoning: GeoDataFrame, output_path: Path)#

Converts the zoning system of the skims

Args:

source_zoning (gpd.GeoDataFrame): GeoDataFrame with the source zoning system

target_zoning (gpd.GeoDataFrame): GeoDataFrame with the target zoning system

output_path (Path): Path to the output file
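A conversion sketch, assuming zoning layers readable by GeoPandas (all file paths are illustrative):

```python
from pathlib import Path
import geopandas as gpd
from polaris.skims.highway.highway_skim import HighwaySkim

skims = HighwaySkim.from_file('path/to/highway_skims')
source = gpd.read_file('path/to/current_zones.gpkg')  # current zoning system
target = gpd.read_file('path/to/new_zones.gpkg')      # desired zoning system
skims.convert_zoning_systems(source, target, Path('path/to/converted_skims'))
```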

classmethod from_file(path_to_file: PathLike)#
property num_zones: int#
class polaris.skims.transit.transit_skim.TransitSkim(filename: str | None = None)#

Bases: SkimBase

Polaris Transit Skim class

from polaris.skims import TransitSkim

skims = TransitSkim()

# Load skims for transit
skims.open('path/to/pt/skims')

# We can retrieve skims
# The keys are not case-sensitive
bus_time_morning = skims.get_skims(interval=240, metric="time", mode="bus")

# to get the metrics and modes available you can do
skims.modes
# or
skims.metrics

# We can also export it
skims.export('path/to/omx/file.omx')
prefix = 'transit'#
__init__(filename: str | None = None)#
open(path_to_file: PathLike)#

Loads the transit skim data

Args:

path_to_file (str): Full file path to the transit skim

get_skims(interval=None, metric=None, mode=None)#

Gets skim data for specified mode/interval/metric. These filters are not, however, required. If one or more parameters is not provided, a dictionary (or nested dictionaries) will be returned

Args:

interval Optional (int): The time interval of interest

metric Optional (str): Metric

mode Optional (str): Name of the transport mode

create_empty(intervals: list, zones: int)#
Creates a new skim data cube for a given set of intervals and number of zones.

All matrices are filled with zeros

Args:

intervals (list): List of all intervals this skim file should have

zones (int): Number of zones for this skim

remove_interval(interval: int)#

Removes one interval from this skim. Operation happens in memory only. It does NOT alter skim on disk

Args:

interval (int): Interval to remove from the skim

add_interval(interval: int, copy_interval=None)#

Adds a new interval to the skim matrix

Args:

interval (int): Interval to be added to the skim data cube

copy_interval Optional (int): Interval to be copied into the new interval. Arrays of zeros are added if not provided

add_mode(mode_name: str, copy_mode: str)#

Adds a new mode to the skim matrix

Args:

mode_name (str): Mode name to be added to the skim data cube

copy_mode Optional (str): Mode to be copied into the new mode. Arrays of zeros are added if not provided

add_metric(metric: str, copy_metric: str)#

Adds a new metric to the skim matrix

Args:

metric (str): Metric to be added to the skim data cube

copy_metric Optional (str): Metric to be copied into the new metric. Arrays of zeros are added if not provided

write_to_file(path_to_file: PathLike)#
convert_zoning_systems(source_zoning: GeoDataFrame, target_zoning: GeoDataFrame, output_path: Path)#

Converts the zoning system of the skims

Args:

source_zoning (gpd.GeoDataFrame): GeoDataFrame with the source zoning system

target_zoning (gpd.GeoDataFrame): GeoDataFrame with the target zoning system

output_path (Path): Path to the output file

classmethod from_file(path_to_file: PathLike)#
property num_zones: int#