Simplifying a model network#
This example shows how to leverage Polaris-Studio tools to suggest possible simplifications on Polaris networks that are deemed too dense. The main idea is to execute a static traffic assignment using an arbitrary full demand matrix and look into which links were actually used during the assignment.
The procedure allows the user to execute either an All-or-Nothing assignment or an equilibrium assignment; the latter spreads flows out and therefore retains a larger number of links, yielding a better-connected network.
The algorithm is a basic heuristic, however, so we recommend that users exercise caution and thoroughly review its results when simplifying networks.
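The core idea above can be sketched without any Polaris code: run shortest paths between every pair of zones on a toy graph and keep only the links that appear on at least one path. This is a minimal illustration, not the Polaris implementation; all node ids, link ids, and costs below are made up.

```python
# Toy "strainer" heuristic: keep links used by at least one shortest path.
import heapq

# link_id -> (from_node, to_node, cost); ids and costs are hypothetical
links = {
    1: ("A", "B", 1.0),
    2: ("B", "C", 1.0),
    3: ("A", "C", 5.0),  # expensive detour, never on a shortest path
    4: ("C", "D", 1.0),
    5: ("B", "D", 3.0),
}
zones = ["A", "D"]

# Build an adjacency list that remembers which link each arc came from
adj = {}
for lid, (u, v, c) in links.items():
    adj.setdefault(u, []).append((v, c, lid))

def shortest_path_links(src, dst):
    # Plain Dijkstra, recording the link that led into each node
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c, lid in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = (u, lid)
                heapq.heappush(heap, (d + c, v))
    # Walk back from the destination, collecting the links on the path
    used, node = set(), dst
    while node in prev:
        node, lid = prev[node]
        used.add(lid)
    return used

used = set()
for o in zones:
    for d in zones:
        if o != d:
            used |= shortest_path_links(o, d)

print(sorted(used))               # links on at least one shortest path: [1, 2, 4]
print(sorted(set(links) - used))  # candidates for removal: [3, 5]
```

An equilibrium assignment replaces the single shortest path per pair with flows spread over several competing paths, which is why it keeps more links than All-or-Nothing.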
Icon source: https://foto.wuestenigel.com/strainer-in-close-up
Imports#
import shutil
from pathlib import Path
from uuid import uuid4
import numpy as np
# Figure out where our polaris directory is (so we can get some test data)
from polaris.network.data.data_table_cache import DataTableCache
from polaris.prepare.supply_tables.network.traffic_links import used_links_traffic
from polaris.runs.static_skimmer.static_graph import build_graph
from polaris.utils.database.db_utils import commit_and_close
Creates a new supply file to work on
source = Path("/tmp/Grid/Grid-Supply.sqlite")
supply_file = Path("/tmp/Grid") / f"{uuid4().hex}-Supply.sqlite"
shutil.copy(source, supply_file)
PosixPath('/tmp/Grid/2310a19cc11644dbb0a1920944c2210f-Supply.sqlite')
Opens the newly created supply file and builds the assignment graph from it
graph = build_graph(supply_file)
# The most basic use is to figure out which links are part of the shortest path between any two zones
links_usage = used_links_traffic(graph, 0.1, algorithm="all-or-nothing")
# Alternatively, we can use biconjugate Frank-Wolfe with this heavily congested network to *spread the flow* a little,
# which would result in a larger number of links being used
# When algorithm and max_iterations are not provided, the default is 50 equilibrium iterations, which may take a
# while to compute when the network is very large and the number of zones is high
# links_usage = used_links_traffic(graph, 0.1, max_iterations=10)
# Let's get the links that ended up having flow on them (were part of at least one shortest path)
links_to_keep = links_usage.link_id.to_numpy()
# That's what we get back from our little algorithm
links_usage.head()
YOU PROBABLY ALSO WANT TO KEEP ALL LINKS OF CERTAIN HIERARCHIES
keep_types = ["FREEWAY", "MINOR", "MAJOR", "RAMP", "EXPRESSWAY", "PRINCIPAL", "FRONTAGE", "BRIDGE", "TUNNEL"]
links = DataTableCache(supply_file).get_table("link").reset_index()
links = links[links["type"].isin(keep_types)]
links_to_keep = np.unique(np.hstack([links_to_keep, links.link]))
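The `np.unique(np.hstack([...]))` call above is simply a set union over the two arrays of link ids, as this small standalone example (with made-up ids) shows:

```python
import numpy as np

# Hypothetical link ids from the assignment and from the hierarchy filter
from_assignment = np.array([10, 11, 12])
from_hierarchy = np.array([12, 13])

# Concatenate, then deduplicate and sort: the union of both id sets
merged = np.unique(np.hstack([from_assignment, from_hierarchy]))
print(merged)  # [10 11 12 13]
```

`np.union1d(from_assignment, from_hierarchy)` would give the same result in one call.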
WARNING
WE HIGHLY RECOMMEND REVIEWING THE RESULTS OF THE PROCEDURE ABOVE. YOU WILL LIKELY HAVE TO TUNE ITS PARAMETERS, AND IT IS ALSO LIKELY THAT YOU WILL HAVE TO FIX SOME ISSUES IN THE NETWORK MANUALLY.
We can now delete the links that we are not going to use
with commit_and_close(supply_file, spatial=True) as conn:
sql = "DELETE FROM Link WHERE link NOT IN (" + ",".join([str(x) for x in links_to_keep]) + ")"
conn.execute(sql)
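Concatenating integer ids into the SQL string works here, but SQLite placeholders achieve the same deletion without string building. A minimal sketch against an in-memory database, reusing the `Link` table and `link` column names from the example above:

```python
import sqlite3

# Stand-in database with the same table/column names as the example
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Link (link INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO Link VALUES (?)", [(i,) for i in range(1, 6)])

links_to_keep = [1, 3, 5]  # hypothetical ids
placeholders = ",".join("?" for _ in links_to_keep)
conn.execute(f"DELETE FROM Link WHERE link NOT IN ({placeholders})", links_to_keep)
conn.commit()

remaining = [r[0] for r in conn.execute("SELECT link FROM Link ORDER BY link")]
print(remaining)  # [1, 3, 5]
```

Note that SQLite caps the number of `?` placeholders per statement (999 in older builds), so very large keep-lists may need batching or a temporary table.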
Total running time of the script: (0 minutes 0.302 seconds)