Module AssetAllocator.algorithms.TRPO.value

Source code
import torch
import torch.autograd as autograd
import torch.nn as nn


# Use float64 as the module-wide default (TRPO code commonly runs in
# double precision for numerical stability).
torch.set_default_tensor_type('torch.DoubleTensor')

class Value(nn.Module):
    """Three-hidden-layer tanh MLP that estimates the state value V(s)."""

    def __init__(self, num_inputs, hidden_size):
        super(Value, self).__init__()
        self.inputLayer = nn.Linear(num_inputs, hidden_size)
        self.hiddenLayer = nn.Linear(hidden_size, hidden_size)
        self.hiddenLayer2 = nn.Linear(hidden_size, hidden_size)
        self.outputLayer = nn.Linear(hidden_size, 1)

    def forward(self, x):
        """
        Parameters:
        x (torch.Tensor): batch of states, shape (N_sample, N_state)

        Returns:
        torch.Tensor: shape (N_sample, 1), estimated value of each state

        """
        x = x.double()  # ensure double precision regardless of input dtype
        x = torch.tanh(self.inputLayer(x))
        x = torch.tanh(self.hiddenLayer(x))
        x = torch.tanh(self.hiddenLayer2(x))
        x = self.outputLayer(x)
        return x
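
For orientation, a minimal usage sketch (the state dimension, hidden size, and batch size below are arbitrary illustrative values, not fixed by the module):

import torch

from AssetAllocator.algorithms.TRPO.value import Value

num_states, hidden_size, batch_size = 8, 64, 32   # hypothetical sizes

value_net = Value(num_states, hidden_size)
states = torch.randn(batch_size, num_states)   # (N_sample, N_state)
values = value_net(states)                     # (N_sample, 1)
print(values.shape)                            # torch.Size([32, 1])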

Classes

class Value (num_inputs, hidden_size)

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

training (bool): whether this module is in training or evaluation mode.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
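
To make the registration behavior concrete, here is a small sketch using the Model class from the example above:

import torch

model = Model()  # the example class defined above

# conv1 and conv2 were registered as submodules, so their parameters
# appear in model.parameters() ...
print(len(list(model.parameters())))  # 4 tensors: two weights, two biases

# ... and to() converts them in place.
model.to(torch.float64)
print(next(model.parameters()).dtype)  # torch.float64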


Ancestors

  • torch.nn.modules.module.Module

Class variables

var dump_patches : bool
var training : bool

Methods

def forward(self, x) -> torch.Tensor

Parameters: x (torch.Tensor): batch of states, shape (N_sample, N_state)

Returns: torch.Tensor: shape (N_sample, 1), estimated value of each state

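
The module itself does not prescribe how the critic is trained. For context, a common approach is to regress the value network onto empirical returns; the sketch below uses Adam with an MSE loss, which is an assumption for illustration (TRPO implementations often fit the baseline with L-BFGS instead), and all sizes and hyperparameters are hypothetical:

import torch
import torch.nn as nn

from AssetAllocator.algorithms.TRPO.value import Value

value_net = Value(num_inputs=8, hidden_size=64)   # hypothetical sizes
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

states = torch.randn(256, 8).double()    # (N_sample, N_state) batch of states
returns = torch.randn(256, 1).double()   # empirical returns as regression targets

for _ in range(50):                      # a few gradient steps on the value loss
    optimizer.zero_grad()
    loss = loss_fn(value_net(states), returns)
    loss.backward()
    optimizer.step()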