libcity.model.traffic_speed_prediction.GTS
class libcity.model.traffic_speed_prediction.GTS.DCGRUCell(input_dim, num_units, adj_mx, max_diffusion_step, num_nodes, device, nonlinearity='tanh', filter_type='laplacian', use_gc_for_ru=True)

    Bases: torch.nn.modules.module.Module

    forward(inputs, hx)
        Gated recurrent unit (GRU) with graph convolution.
        - Parameters
            inputs – shape (B, num_nodes * input_dim)
            hx – shape (B, num_nodes * rnn_units)
        - Returns
            shape (B, num_nodes * rnn_units)
        - Return type
            torch.tensor

    training: bool
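For orientation, a minimal sketch of a single cell step using the signature above; the dense random adjacency matrix, the sizes, and the batch size are placeholder assumptions, not LibCity defaults:

    import numpy as np
    import torch
    from libcity.model.traffic_speed_prediction.GTS import DCGRUCell

    num_nodes, input_dim, rnn_units, batch_size = 207, 2, 64, 8         # placeholder sizes
    adj_mx = np.random.rand(num_nodes, num_nodes).astype(np.float32)    # assumed dense adjacency

    cell = DCGRUCell(input_dim, rnn_units, adj_mx, max_diffusion_step=2,
                     num_nodes=num_nodes, device=torch.device('cpu'))

    inputs = torch.zeros(batch_size, num_nodes * input_dim)    # (B, num_nodes * input_dim)
    hx = torch.zeros(batch_size, num_nodes * rnn_units)        # (B, num_nodes * rnn_units)
    new_hx = cell(inputs, hx)                                   # (B, num_nodes * rnn_units)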
class libcity.model.traffic_speed_prediction.GTS.DecoderModel(config, data_feature, adj_mx, device)

    Bases: torch.nn.modules.module.Module, libcity.model.traffic_speed_prediction.GTS.Seq2SeqAttrs

    forward(inputs, hidden_state=None)
        Decoder forward pass.
        - Parameters
            inputs – shape (batch_size, self.num_nodes * self.output_dim)
            hidden_state – shape (num_layers, batch_size, self.hidden_state_size), optional, zeros if not provided; hidden_state_size = num_nodes * rnn_units
        - Returns
            tuple contains:
                output: shape (batch_size, self.num_nodes * self.output_dim)
                hidden_state: shape (num_layers, batch_size, self.hidden_state_size)
                (lower indices mean lower layers)
        - Return type
            tuple

    training: bool
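A rough sketch of one decoding step under this shape contract; decoder_model stands for an already constructed DecoderModel (building one requires a full LibCity config and data_feature), and all sizes are placeholders:

    import torch

    batch_size, num_nodes, output_dim = 8, 207, 1            # placeholder sizes
    num_layers, hidden_state_size = 2, 207 * 64              # hidden_state_size = num_nodes * rnn_units

    go_symbol = torch.zeros(batch_size, num_nodes * output_dim)               # first decoder input
    encoder_hidden = torch.zeros(num_layers, batch_size, hidden_state_size)   # produced by the encoder
    output, hidden_state = decoder_model(go_symbol, encoder_hidden)
    # output:       (batch_size, num_nodes * output_dim)
    # hidden_state: (num_layers, batch_size, hidden_state_size)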
class libcity.model.traffic_speed_prediction.GTS.EncoderModel(config, data_feature, adj_mx, device)

    Bases: torch.nn.modules.module.Module, libcity.model.traffic_speed_prediction.GTS.Seq2SeqAttrs

    forward(inputs, hidden_state=None)
        Encoder forward pass.
        - Parameters
            inputs – shape (batch_size, self.num_nodes * self.input_dim)
            hidden_state – shape (num_layers, batch_size, self.hidden_state_size), optional, zeros if not provided; hidden_state_size = num_nodes * rnn_units
        - Returns
            tuple contains:
                output: shape (batch_size, self.hidden_state_size)
                hidden_state: shape (num_layers, batch_size, self.hidden_state_size)
                (lower indices mean lower layers)
        - Return type
            tuple

    training: bool
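Correspondingly, a sketch of stepping an encoder over the input window; encoder_model stands for an already constructed EncoderModel and all sizes are placeholders:

    import torch

    seq_len, batch_size, num_nodes, input_dim = 12, 8, 207, 2   # placeholder sizes
    inputs = torch.zeros(seq_len, batch_size, num_nodes * input_dim)

    hidden_state = None                      # zeros are used when no hidden state is provided
    for t in range(seq_len):
        output, hidden_state = encoder_model(inputs[t], hidden_state)
    # hidden_state: (num_layers, batch_size, hidden_state_size), handed to the decoder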
class libcity.model.traffic_speed_prediction.GTS.FC(num_nodes, device, input_dim, hid_dim, output_dim, bias_start=0.0)

    Bases: torch.nn.modules.module.Module

    forward(inputs, state)
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
class libcity.model.traffic_speed_prediction.GTS.GCONV(num_nodes, max_diffusion_step, device, input_dim, hid_dim, output_dim, adj_mx, bias_start=0.0)

    Bases: torch.nn.modules.module.Module

    forward(inputs, state)
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
class libcity.model.traffic_speed_prediction.GTS.GTS(config, data_feature)

    Bases: libcity.model.abstract_traffic_state_model.AbstractTrafficStateModel, libcity.model.traffic_speed_prediction.GTS.Seq2SeqAttrs
    _get_x_y(x, y)
        - Parameters
            x – shape (batch_size, seq_len, num_sensor, input_dim)
            y – shape (batch_size, horizon, num_sensor, input_dim)
        - Returns
            x: shape (seq_len, batch_size, num_sensor, input_dim)
            y: shape (horizon, batch_size, num_sensor, input_dim)
    _get_x_y_in_correct_dims(x, y)
        - Parameters
            x – shape (seq_len, batch_size, num_sensor, input_dim)
            y – shape (horizon, batch_size, num_sensor, input_dim)
        - Returns
            x: shape (seq_len, batch_size, num_sensor * input_dim)
            y: shape (horizon, batch_size, num_sensor * output_dim)
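Taken together, the two helpers above amount to moving the time axis first and then flattening the node and feature dimensions; a shape-only sketch with placeholder sizes:

    import torch

    batch_size, seq_len, num_sensor, input_dim = 8, 12, 207, 2   # placeholder sizes
    x = torch.zeros(batch_size, seq_len, num_sensor, input_dim)

    x = x.permute(1, 0, 2, 3)                                    # (seq_len, batch_size, num_sensor, input_dim)
    x = x.reshape(seq_len, batch_size, num_sensor * input_dim)   # (seq_len, batch_size, num_sensor * input_dim)
    # y is handled analogously, keeping output_dim features, per the documented shapes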
    calculate_loss(batch, batches_seen=None)
        Takes a batch of data as input and returns the training loss, i.e. this is where the loss function is defined.
        - Parameters
            batch (Batch) – a batch of input
        - Returns
            training loss
        - Return type
            torch.tensor
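A hedged sketch of how calculate_loss fits into a training step; model stands for an instantiated GTS, batch for a Batch produced by a LibCity data loader, and the optimizer and batches_seen counter are assumptions for illustration:

    loss = model.calculate_loss(batch, batches_seen=batches_seen)   # scalar torch.tensor
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    batches_seen += 1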
    decoder(encoder_hidden_state, labels=None, batches_seen=None)
        Decoder forward pass.
        - Parameters
            encoder_hidden_state – shape (num_layers, batch_size, self.hidden_state_size)
            labels – shape (self.horizon, batch_size, self.num_nodes * self.output_dim), optional, not present at inference
            batches_seen – global step, optional, not present at inference
        - Returns
            output: shape (self.horizon, batch_size, self.num_nodes * self.output_dim)
    encoder(inputs)
        Encoder forward pass.
        - Parameters
            inputs – shape (seq_len, batch_size, num_sensor * input_dim)
        - Returns
            encoder_hidden_state: shape (num_layers, batch_size, self.hidden_state_size)
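These two methods are chained inside forward: the encoder compresses the input window into a hidden state, and the decoder unrolls it for horizon steps. Roughly, with model standing for an instantiated GTS and the tensors shaped as in the docstrings:

    encoder_hidden_state = model.encoder(inputs)    # (num_layers, batch_size, hidden_state_size)

    # training: labels and batches_seen are available (e.g. for scheduled sampling)
    outputs = model.decoder(encoder_hidden_state, labels=labels, batches_seen=batches_seen)

    # inference: no labels, the decoder feeds its own predictions back in
    outputs = model.decoder(encoder_hidden_state)   # (horizon, batch_size, num_nodes * output_dim)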
    forward(batch, batches_seen=None)
        Defines the computation performed at every call.
        Should be overridden by all subclasses.
        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
    predict(batch, batches_seen=None)
        Takes a batch of data as input and returns the corresponding prediction, which is usually the result of multi-step prediction; typically calls nn.Module's forward() method.
        - Parameters
            batch (Batch) – a batch of input
        - Returns
            prediction result for this batch
        - Return type
            torch.tensor

    training: bool
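And a matching sketch for evaluation; again model and batch are the assumed instantiated GTS model and LibCity Batch, and the no_grad context reflects typical usage rather than a requirement of the API:

    import torch

    model.eval()
    with torch.no_grad():
        y_pred = model.predict(batch)   # multi-step prediction for the whole batch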
class libcity.model.traffic_speed_prediction.GTS.Seq2SeqAttrs(config, data_feature)

    Bases: object
libcity.model.traffic_speed_prediction.GTS.gumbel_softmax(device, logits, temperature, hard=False, eps=1e-10)

    Sample from the Gumbel-Softmax distribution and optionally discretize.
    - Parameters
        logits – [batch_size, n_class] unnormalized log-probs
        temperature – non-negative scalar
        hard – if True, take argmax, but differentiate w.r.t. the soft sample y
    - Returns
        [batch_size, n_class] sample from the Gumbel-Softmax distribution. If hard=True, the returned sample is one-hot; otherwise it is a probability distribution that sums to 1 across classes.
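A minimal usage sketch of the function above; the logits values are arbitrary. (In GTS this sampling is what makes learning a discrete graph structure differentiable.)

    import torch
    from libcity.model.traffic_speed_prediction.GTS import gumbel_softmax

    device = torch.device('cpu')
    logits = torch.randn(4, 2)                                         # [batch_size, n_class] unnormalized log-probs

    soft = gumbel_softmax(device, logits, temperature=0.5)             # rows sum to 1
    hard = gumbel_softmax(device, logits, temperature=0.5, hard=True)  # one-hot; gradients flow through the soft sample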