Update and convert to torchsharp code #99
base: master
Conversation
Signature fix (void is not a valid parameter type in C#):

public virtual void vali(object vali_data, object vali_loader, void criterion)
=> public virtual void vali(object vali_data, object vali_loader, object criterion)
@toolgood Ideally I would like to paste a Python module into ONE polyglot cell, use PyToCs to convert the PyTorch code and output it into a new cell, and then modify the converted code until it works. This way it is possible to document what further modifications and improvements are needed for PyToCs to reliably generate TorchSharp code. Good job!
using DataEmbedding = layers.Embed.DataEmbedding; this.enc_embedding = DataEmbedding(configs.enc_in, configs.d_model, configs.embed, configs.freq, configs.dropout); => this.enc_embedding = new DataEmbedding(configs.enc_in, configs.d_model, configs.embed, configs.freq, configs.dropout);
…ethod according to the parameters of the 'nn.XXX' and 'torch.XXX' methods.
Please share how you would use your code; I could not see the results of your new changes.
Is this still valid?

TorchUtil.ReplaceFolder(folder);
TorchUtil.CreateNetstandardCode(folder);
Yes.
I am getting the generated PyToCs py.cs. The class names are collected with:

private static void getClassName(string text, HashSet<string> classNames)
{
    const string classRegex = @"public class ([a-zA-Z_][a-zA-Z0-9_]*)";
}
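The getClassName helper above is shown without its body. A minimal completion might look like the following sketch (the class name `ClassNameCollector` is hypothetical, and the regex only matches top-level `public class` declarations, not nested, generic, or differently-modified classes):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class ClassNameCollector
{
    // Collects class names declared as "public class Foo" in the given C# source text.
    public static void GetClassNames(string text, HashSet<string> classNames)
    {
        const string classRegex = @"public class ([a-zA-Z_][a-zA-Z0-9_]*)";
        foreach (Match m in Regex.Matches(text, classRegex))
            classNames.Add(m.Groups[1].Value);
    }
}
```

A real converter would likely want a Roslyn-based parse instead of a regex, since regexes miss generics, nesting, and comments.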
One file is a static class containing only static methods, so I will handle the static methods in that file.
…d of the relevant class in the folder
// Forward pass: compute predicted y
// Original PyTorch: y_pred = a + b * x + c * x ** 2 + d * x ** 3
// Converted by PyToCs: var y_pred = a + b * x + c * Math.Pow(x, 2) + d * Math.Pow(x, 3);
// Working in TorchSharp:
var y_pred = a + b * x + c * x.pow(2) + d * x.pow(3);

This means that when dealing with a Tensor in TorchSharp, x ** n must be converted to x.pow(n), not Math.Pow(x, n).
The PyTorch source used:

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
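For reference, a hand-converted TorchSharp sketch of the script above, combining the individual fixes discussed in this thread (empty-shape randn, x.pow(n), item&lt;float&gt;()). This is an illustrative translation, not PyToCs output; it assumes the TorchSharp NuGet package, and named-parameter spellings may differ between TorchSharp versions:

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

class PolyFit
{
    static void Main()
    {
        var device = torch.device("cpu");

        // Create random input and output data
        var x = torch.linspace(-Math.PI, Math.PI, 2000, device: device);
        var y = torch.sin(x);

        // Randomly initialize weights: an empty shape gives a 0-d (scalar) tensor
        var a = torch.randn(new long[] { }, device: device);
        var b = torch.randn(new long[] { }, device: device);
        var c = torch.randn(new long[] { }, device: device);
        var d = torch.randn(new long[] { }, device: device);

        var learning_rate = 1e-6;
        for (int t = 0; t < 2000; t++)
        {
            // Forward pass: x.pow(n) instead of Math.Pow(x, n)
            var y_pred = a + b * x + c * x.pow(2) + d * x.pow(3);

            // Compute and print loss: item<float>() instead of item()
            var loss = (y_pred - y).pow(2).sum().item<float>();
            if (t % 100 == 99) Console.WriteLine($"{t} {loss}");

            // Backprop by hand, as in the Python original
            var grad_y_pred = 2.0 * (y_pred - y);
            var grad_a = grad_y_pred.sum();
            var grad_b = (grad_y_pred * x).sum();
            var grad_c = (grad_y_pred * x.pow(2)).sum();
            var grad_d = (grad_y_pred * x.pow(3)).sum();

            // Update weights using gradient descent
            a -= learning_rate * grad_a;
            b -= learning_rate * grad_b;
            c -= learning_rate * grad_c;
            d -= learning_rate * grad_d;
        }

        Console.WriteLine(
            $"Result: y = {a.item<float>()} + {b.item<float>()} x " +
            $"+ {c.item<float>()} x^2 + {d.item<float>()} x^3");
    }
}
```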
// PyTorch: a = torch.randn((), device=device, dtype=dtype)
// PyToCs:  a = torch.randn(ValueTuple.Create("<Empty>"), device: device, dtype: dtype);
// Works in TorchSharp:
a = torch.randn(new long[] { }, device: device, dtype: dtype);
// Compute and print loss
// PyTorch: loss = (y_pred - y).pow(2).sum().item()
// PyToCs:  loss = (y_pred - y).pow(2).sum().item()
// Works in TorchSharp:
var loss = (y_pred - y).pow(2).sum().item<float>();
This needs to be converted manually:

Math.Pow(x, 2) ===> x.pow(2) when x is a tensor.
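This manual fix-up could plausibly be automated as a post-processing pass over the generated C#. The helper below is a hypothetical sketch (the name `PowRewriter` is mine, and it is not part of PyToCs or TorchUtil); it only handles simple identifier bases and would need a real parser such as Roslyn for nested expressions or to check that the base is actually a Tensor:

```csharp
using System;
using System.Text.RegularExpressions;

static class PowRewriter
{
    // Rewrites Math.Pow(x, n) to x.pow(n) when the first argument is a bare identifier.
    public static string Rewrite(string code) =>
        Regex.Replace(
            code,
            @"Math\.Pow\(\s*([A-Za-z_][A-Za-z0-9_]*)\s*,\s*([^)]+?)\s*\)",
            "$1.pow($2)");
}
```

For example, `PowRewriter.Rewrite("c * Math.Pow(x, 2)")` yields `"c * x.pow(2)"`.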
// PyTorch: torch.randn(new() ...)

@uxmal John, could you see how similar TorchSharp is to PyTorch?
…the specified folder.
Jupyter Notebook parser. Would you be interested in turning your console program into one that generates a Jupyter Notebook with PyTorch and TorchSharp code side by side? E.g. a cell with PyTorch code followed by a cell with TorchSharp code?

notebook = json.load(open(filename))
plugin.parse_notebook(filename, notebook)

def parse_notebook(self, filename, notebook):
    if 'cells' not in notebook:
        # we don't handle v3 for now
        return
    # this pattern has gotten weaker from original
    pattern = r'^(?:from|import)\s+([\w.]*)\s+'
    cells = notebook['cells']
    execution_cells = [cell for cell in cells if cell['cell_type'] == 'code']
    modules = []
    for cell in execution_cells:
        # determine if any libraries have been used w/ regular expressions
        source = cell['source']
        source = ''.join(source)
        modules += re.findall(pattern, source)
    self.libraries_in_notebook[filename] = modules
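A C# counterpart of the cell-extraction part of this parser could be sketched with System.Text.Json. This is an illustrative helper (the name `NotebookParser` is mine); it assumes nbformat v4, where each cell's `source` is an array of strings, and like the Python version it skips v3 notebooks that lack a top-level `cells` array:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

static class NotebookParser
{
    // Returns the joined source text of every code cell in an nbformat v4 notebook.
    public static List<string> GetCodeCells(string notebookJson)
    {
        var result = new List<string>();
        using var doc = JsonDocument.Parse(notebookJson);
        if (!doc.RootElement.TryGetProperty("cells", out var cells))
            return result;  // we don't handle v3 for now
        foreach (var cell in cells.EnumerateArray())
        {
            if (cell.GetProperty("cell_type").GetString() != "code")
                continue;
            var lines = new List<string>();
            foreach (var line in cell.GetProperty("source").EnumerateArray())
                lines.Add(line.GetString() ?? "");
            result.Add(string.Concat(lines));
        }
        return result;
    }
}
```

Generating the notebook for the side-by-side idea would be the inverse: write a `cells` array with alternating PyTorch and TorchSharp code cells.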
This is very difficult. The main reason is that Python is a dynamically typed language and C# is a statically typed one. In Python you can omit the class name when writing code, but in C# you must write it, so the converter has to infer the class name from context. TorchSharp also still has poor compatibility, as well as various inexplicable bugs.
torch.arange(B)[:, None, ] == torch.arange(B)[:, None]  (the trailing comma in the index is redundant)
Good job!