Training Questions #23

@jhunter533


I wanted to note that I appreciate the from_torch tutorials that were recently uploaded and the help they provide. I also wanted to ask some clarifying questions.
When using the from_torch function, does the original network need to actually use nn.Sequential, or will it see through forward calls in something like the following?

import torch
import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self, num_states):
        super().__init__()
        self.state_fc = nn.Linear(num_states, 256)

    def forward(self, state):
        state_out = F.relu(self.state_fc(state))
        return torch.squeeze(state_out)

    def train(self):
        # Training lines
        pass
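
For contrast, a purely nn.Sequential version of roughly the same block is sketched below (the sizes are just the placeholders from my example). My question is whether from_torch only handles this style, or also the forward-based style above.

sequential_net = nn.Sequential(
    nn.Linear(num_states, 256),  # same placeholder sizes as above
    nn.ReLU(),
)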

When doing in-CrossSim training to fine-tune a network converted using from_torch, I understand that you add synchronize(). But if my training loop originally called self.Network, do I keep that call the same and only add synchronize(), or do I change it to call the converted network instead and then add synchronize()?
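To make the question concrete, here is a rough sketch of the two patterns I am asking about. The name analog_network (for the object returned by from_torch) and the exact placement of synchronize() are my assumptions from the tutorial, not something I have confirmed:

# Pattern A: keep calling the original torch network, only add synchronize()
state_out = self.Network(state)          # original, unconverted module
analog_network.synchronize()             # assumed method name and placement

# Pattern B: replace the call with the converted network, then synchronize()
state_out = analog_network(state)        # converted module returned by from_torch
analog_network.synchronize()             # assumed method name and placement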
Additionally, if I were running training with multiple networks and soft updates, do all of the networks need to be converted, or only the main networks (not the target networks updated via soft updates)?
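By soft updates I mean the usual Polyak-style target update, sketched below in plain PyTorch; whether target_network here also has to go through from_torch is exactly what I am unsure about (main_network, target_network, and tau are placeholders):

# Polyak / soft update of the target network from the main network
with torch.no_grad():
    for main_p, target_p in zip(main_network.parameters(), target_network.parameters()):
        target_p.data.mul_(1.0 - tau).add_(tau * main_p.data)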
Finally, I wanted to ask whether there is support for LSTM layers, or plans to add it.

Thank you for any help; it's very appreciated.
