{
  "version": "https://jsonfeed.org/version/1", 
  "title": "Torch", 
  "description": "Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.", 
  "home_page_url": "https://www.v2ex.com/go/torch", 
  "feed_url": "https://www.v2ex.com/feed/torch.json", 
  "icon": "https://cdn.v2ex.com/navatar/2bca/b9d9/995_large.png?m=1582782020", 
  "favicon": "https://cdn.v2ex.com/navatar/2bca/b9d9/995_normal.png?m=1582782020", 
  "items": [
    {
      "author": {
        "url": "https://www.v2ex.com/member/RYS", 
        "name": "RYS", 
        "avatar": "https://cdn.v2ex.com/gravatar/12e74a3c35952923ae61f3614558db1b?s=73&d=retro"
      }, 
      "url": "https://www.v2ex.com/t/1067756", 
      "title": "Torch \u51c9\u4e86\u5417\uff1f\u8fd9\u4e2a\u4e3b\u9898\u600e\u4e48\u8fd9\u4e48\u4e45\u6ca1\u6709\u65b0\u8282\u70b9\u4e86", 
      "id": "https://www.v2ex.com/t/1067756", 
      "date_published": "2024-08-26T02:20:50+00:00", 
      "content_html": ""
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/xing393939", 
        "name": "xing393939", 
        "avatar": "https://cdn.v2ex.com/avatar/9096/5ec7/18925_large.png?m=1652855853"
      }, 
      "url": "https://www.v2ex.com/t/987091", 
      "date_modified": "2023-10-31T06:21:06+00:00", 
      "content_html": "<p>\u8bad\u7ec3\u4ee3\u7801\u5982\u4e0b\uff1a</p>\n<pre><code class=\"language-python\">import numpy as np\nimport torch\n\n# 1.prepare dataset\nxy = np.loadtxt(\"redPacket_2.csv\", skiprows=1, delimiter=\",\", dtype=np.float32)\nx_data = torch.from_numpy(xy[:, :-1])\ny_data = torch.from_numpy(xy[:, [-1]])\n\n# 2.design model using class\nclass Model(torch.nn.Module):\n    def __init__(self):\n        super(Model, self).__init__()\n        self.linear1 = torch.nn.Linear(4, 2)\n        self.linear2 = torch.nn.Linear(2, 1)\n        self.activate = torch.nn.ReLU()\n        self.sigmoid = torch.nn.Sigmoid()\n\n    def forward(self, x):\n        x = self.activate(self.linear1(x))\n        x = self.sigmoid(self.linear2(x))\n        return x\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = Model().to(device)\nx_data = x_data.to(device)\ny_data = y_data.to(device)\n\n# 3.construct loss and optimizer\ncriterion = torch.nn.BCELoss(reduction=\"mean\")\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n\n# 4.training cycle forward, backward, update\nfor epoch in range(10000):\n    y_pred = model(x_data)\n    loss = criterion(y_pred, y_data)\n    if epoch % 100 == 0:\n        print(\n            \"epoch %9d loss %.3f\" % (epoch, loss.item()),\n            model.linear2.weight.data,\n            model.linear2.bias.data,\n        )\n\n    optimizer.zero_grad()\n    loss.backward()\n    optimizer.step()\n</code></pre>\n<p>\u8bad\u7ec3\u96c6\u4e0b\u8f7d\u5728<a href=\"http://snfx8file.test.upcdn.net/redPacket_2.csv\" rel=\"nofollow\">\u8fd9\u91cc</a>\uff0c\u6211\u6bcf\u9694 100 \u5468\u671f\u6253\u5370\u6a21\u578b\u7684\u635f\u5931\u503c\u548c\u6a21\u578b\u53c2\u6570\uff0c\u7ed3\u679c\u5982\u4e0b\uff1a</p>\n<pre><code>epoch         0 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       100 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], 
device='cuda:0')\nepoch       200 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       300 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       400 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       500 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       600 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       700 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       800 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch       900 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch      1000 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch      1100 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch      1200 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\nepoch      1300 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')\n</code></pre>\n<p>\u4e0d\u77e5\u9053\u4e3a\u4ec0\u4e48\u4f1a\u4e0d\u6536\u655b\uff0c\u662f\u54ea\u91cc\u9700\u8981\u6539\u8fdb\u5417\uff1f</p>\n", 
      "date_published": "2023-10-31T06:20:25+00:00", 
      "title": "\u5199\u4e86\u4e00\u4e2a\u7b80\u5355\u7684\u4e8c\u5206\u7c7b\u6a21\u578b\uff0c\u4f46\u662f\u8bad\u7ec3\u4e86 N \u6b21\u6a21\u578b\u53c2\u6570\u90fd\u6ca1\u6709\u52a8\u9759", 
      "id": "https://www.v2ex.com/t/987091"
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/1722332572", 
        "name": "1722332572", 
        "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858"
      }, 
      "url": "https://www.v2ex.com/t/561876", 
      "date_modified": "2020-02-27T05:40:30+00:00", 
      "content_html": "<h2>PyTorch \u5b66\u4e60\u6559\u7a0b\u3001\u624b\u518c</h2>\n<ul>\n<li><a href=\"https://pytorch.org/tutorials/\" rel=\"nofollow\">PyTorch \u82f1\u6587\u7248\u5b98\u65b9\u624b\u518c</a>\uff1a\u5bf9\u4e8e\u82f1\u6587\u6bd4\u8f83\u597d\u7684\u540c\u5b66\uff0c\u975e\u5e38\u63a8\u8350\u8be5 PyTorch \u5b98\u65b9\u6587\u6863\uff0c\u4e00\u6b65\u6b65\u5e26\u4f60\u4ece\u5165\u95e8\u5230\u7cbe\u901a\u3002\u8be5\u6587\u6863\u8be6\u7ec6\u7684\u4ecb\u7ecd\u4e86\u4ece\u57fa\u7840\u77e5\u8bc6\u5230\u5982\u4f55\u4f7f\u7528 PyTorch \u6784\u5efa\u6df1\u5c42\u795e\u7ecf\u7f51\u7edc\uff0c\u4ee5\u53ca PyTorch \u8bed\u6cd5\u548c\u4e00\u4e9b\u9ad8\u8d28\u91cf\u7684\u6848\u4f8b\u3002</li>\n<li><a href=\"https://pytorch-cn.readthedocs.io/zh/latest/\" rel=\"nofollow\">PyTorch \u4e2d\u6587\u5b98\u65b9\u6587\u6863</a>\uff1a\u9605\u8bfb\u4e0a\u8ff0\u82f1\u6587\u6587\u6863\u6bd4\u8f83\u56f0\u96be\u7684\u540c\u5b66\u4e5f\u4e0d\u8981\u7d27\uff0c\u6211\u4eec\u4e3a\u5927\u5bb6\u51c6\u5907\u4e86\u6bd4\u8f83\u5b98\u65b9\u7684 PyTorch \u4e2d\u6587\u6587\u6863\uff0c\u6587\u6863\u975e\u5e38\u8be6\u7ec6\u7684\u4ecb\u7ecd\u4e86\u5404\u4e2a\u51fd\u6570\uff0c\u53ef\u4f5c\u4e3a\u4e00\u4efd PyTorch \u7684\u901f\u67e5\u5b9d\u5178\u3002</li>\n<li><a href=\"https://github.com/yunjey/pytorch-tutorial\" rel=\"nofollow\">\u6bd4\u8f83\u504f\u7b97\u6cd5\u5b9e\u6218\u7684 PyTorch \u4ee3\u7801\u6559\u7a0b</a>\uff1a\u5728 github \u4e0a\u6709\u5f88\u9ad8\u7684 star\u3002\u5efa\u8bae\u5927\u5bb6\u5728\u9605\u8bfb\u672c\u6587\u6863\u4e4b\u524d\uff0c\u5148\u5b66\u4e60\u4e0a\u8ff0\u4e24\u4e2a PyTorch \u57fa\u7840\u6559\u7a0b\u3002</li>\n<li><a href=\"https://github.com/zergtant/pytorch-handbook\" rel=\"nofollow\">\u5f00\u6e90\u4e66\u7c4d</a>\uff1a\u8fd9\u662f\u4e00\u672c\u5f00\u6e90\u7684\u4e66\u7c4d\uff0c\u76ee\u6807\u662f\u5e2e\u52a9\u90a3\u4e9b\u5e0c\u671b\u548c\u4f7f\u7528 PyTorch 
\u8fdb\u884c\u6df1\u5ea6\u5b66\u4e60\u5f00\u53d1\u548c\u7814\u7a76\u7684\u670b\u53cb\u5feb\u901f\u5165\u95e8\u3002\u4f46\u672c\u6587\u6863\u4e0d\u662f\u5185\u5bb9\u4e0d\u662f\u5f88\u5168\uff0c\u8fd8\u5728\u6301\u7eed\u66f4\u65b0\u4e2d\u3002</li>\n<li><a href=\"https://github.com/fendouai/pytorch1.0-cn\" rel=\"nofollow\">\u7b80\u5355\u6613\u4e0a\u624b\u7684 PyTorch \u4e2d\u6587\u6587\u6863</a>\uff1a\u975e\u5e38\u9002\u5408\u65b0\u624b\u5b66\u4e60\u3002\u8be5\u6587\u6863\u4ece\u4ecb\u7ecd\u4ec0\u4e48\u662f PyTorch \u5f00\u59cb\uff0c\u5230\u795e\u7ecf\u7f51\u7edc\u3001PyTorch \u7684\u5b89\u88c5\uff0c\u518d\u5230\u56fe\u50cf\u5206\u7c7b\u5668\u3001\u6570\u636e\u5e76\u884c\u5904\u7406\uff0c\u975e\u5e38\u8be6\u7ec6\u7684\u4ecb\u7ecd\u4e86 PyTorch \u7684\u77e5\u8bc6\u4f53\u7cfb\uff0c\u9002\u5408\u65b0\u624b\u7684\u5b66\u4e60\u5165\u95e8\u3002\u8be5\u6587\u6863\u7684\u5b98\u7f51\uff1a<a href=\"http://pytorchchina.com\" rel=\"nofollow\">http://pytorchchina.com</a> \u3002</li>\n</ul>\n<h2>PyTorch \u89c6\u9891\u6559\u7a0b</h2>\n<ul>\n<li><a href=\"https://www.bilibili.com/video/av31914351/\" rel=\"nofollow\">B \u7ad9 PyTorch \u89c6\u9891\u6559\u7a0b</a>\uff1a\u9996\u63a8\u7684\u662f B \u7ad9\u4e2d\u8fd1\u671f\u70b9\u51fb\u7387\u975e\u5e38\u9ad8\u7684\u4e00\u4e2a PyTorch \u89c6\u9891\u6559\u7a0b\uff0c\u867d\u7136\u89c6\u9891\u5185\u5bb9\u53ea\u6709\u516b\u96c6\uff0c\u4f46\u8bb2\u7684\u6df1\u5165\u6d45\u51fa\uff0c\u5341\u5206\u7cbe\u5f69\u3002\u53ea\u662f\u6ca1\u6709\u4e2d\u6587\u5b57\u5e55\uff0c\u5c0f\u4f19\u4f34\u4eec\u662f\u8be5\u7ec3\u4e60\u4e00\u4e0b\u82f1\u6587\u4e86...</li>\n<li><a href=\"https://www.youtube.com/watch?v=SKq-pmkekTk\" rel=\"nofollow\">\u56fd\u5916\u89c6\u9891\u6559\u7a0b</a>\uff1a\u53e6\u5916\u4e00\u4e2a\u56fd\u5916\u5927\u4f6c\u7684\u89c6\u9891\u6559\u7a0b\uff0c\u5728 YouTube 
\u4e0a\u6709\u5f88\u9ad8\u7684\u70b9\u51fb\u7387\uff0c\u4e5f\u662f\u7eaf\u82f1\u6587\u7684\u89c6\u9891\uff0c\u6709\u6ca1\u6709\u89c9\u5f97\u5916\u56fd\u7684\u6559\u5b66\u89c6\u9891\u4e0d\u7ba1\u662f\u591a\u4e48\u590d\u6742\u7684\u95ee\u9898\u90fd\u80fd\u8bb2\u7684\u5f88\u5f62\u8c61\u5f88\u7b80\u5355\uff1f</li>\n<li><a href=\"https://morvanzhou.github.io/tutorials/machine-learning/torch/\" rel=\"nofollow\">\u83ab\u70e6</a>\uff1a\u76f8\u4fe1\u83ab\u70e6\u8001\u5e08\u5927\u5bb6\u5e94\u8be5\u5f88\u719f\u4e86\uff0c\u4ed6\u7684 Python\u3001\u6df1\u5ea6\u5b66\u4e60\u7684\u7cfb\u5217\u89c6\u9891\u5728 B \u7ad9\u548c YouTube \u4e0a\u5747\u6709\u5f88\u9ad8\u7684\u70b9\u51fb\u7387\uff0c\u8be5 PyTorch \u89c6\u9891\u6559\u7a0b\u4e5f\u662f\u53bb\u5e74\u521a\u51fa\u4e0d\u4e45\uff0c\u63a8\u8350\u7ed9\u65b0\u624b\u670b\u53cb\u3002</li>\n<li><a href=\"https://www.bilibili.com/video/av49008640/\" rel=\"nofollow\">101 \u5b66\u9662</a>\uff1a\u4eba\u5de5\u667a\u80fd 101 \u5b66\u9662\u7684 PyTorch \u7cfb\u5217\u89c6\u9891\u8bfe\u7a0b\uff0c\u8bb2\u7684\u6bd4\u8f83\u8be6\u7ec6\u3001\u8986\u76d6\u7684\u77e5\u8bc6\u70b9\u4e5f\u6bd4\u8f83\u5e7f\uff0c\u611f\u5174\u8da3\u7684\u670b\u53cb\u53ef\u4ee5\u8bd5\u542c\u4e00\u4e0b\u3002</li>\n<li><a href=\"https://www.julyedu.com/course/getDetail/140/\" rel=\"nofollow\">\u4e03\u6708\u5728\u7ebf</a>\uff1a\u6700\u540e\uff0c\u5411\u5927\u5bb6\u63a8\u8350\u7684\u662f\u56fd\u5185\u9886\u5148\u7684\u4eba\u5de5\u667a\u80fd\u6559\u80b2\u5e73\u53f0\u2014\u2014\u4e03\u6708\u5728\u7ebf\u7684 PyTorch \u5165\u95e8\u4e0e\u5b9e\u6218\u7cfb\u5217\u8bfe\u3002\u8bfe\u7a0b\u867d\u7136\u662f\u6536\u8d39\u8bfe\u7a0b\uff0c\u4f46\u8bfe\u7a0b\u5305\u542b PyTorch \u8bed\u6cd5\u3001\u6df1\u5ea6\u5b66\u4e60\u57fa\u7840\u3001\u8bcd\u5411\u91cf\u57fa\u7840\u3001NLP \u548c CV 
\u7684\u9879\u76ee\u5e94\u7528\u3001\u5b9e\u6218\u7b49\uff0c\u7406\u8bba\u548c\u5b9e\u6218\u76f8\u7ed3\u5408\uff0c\u786e\u5b9e\u6bd4\u5176\u5b83\u8bfe\u7a0b\u8bb2\u7684\u66f4\u8be6\u7ec6\uff0c\u63a8\u8350\u7ed9\u5927\u5bb6\u3002</li>\n</ul>\n<h2>NLP&amp;PyTorch \u5b9e\u6218</h2>\n<ul>\n<li><a href=\"https://github.com/pytorch/text\" rel=\"nofollow\">Pytorch text</a>\uff1aTorchtext \u662f\u4e00\u4e2a\u975e\u5e38\u597d\u7528\u7684\u5e93\uff0c\u53ef\u4ee5\u5e2e\u52a9\u6211\u4eec\u5f88\u597d\u7684\u89e3\u51b3\u6587\u672c\u7684\u9884\u5904\u7406\u95ee\u9898\u3002\u6b64 github \u5b58\u50a8\u5e93\u5305\u542b\u4e24\u90e8\u5206\uff1a\n<ul>\n<li>torchText.data\uff1a\u6587\u672c\u7684\u901a\u7528\u6570\u636e\u52a0\u8f7d\u5668\u3001\u62bd\u8c61\u548c\u8fed\u4ee3\u5668\uff08\u5305\u62ec\u8bcd\u6c47\u548c\u8bcd\u5411\u91cf\uff09</li>\n<li>torchText.datasets\uff1a\u901a\u7528 NLP \u6570\u636e\u96c6\u7684\u9884\u8bad\u7ec3\u52a0\u8f7d\u7a0b\u5e8f\n\u6211\u4eec\u53ea\u9700\u8981\u901a\u8fc7 pip install torchtext \u5b89\u88c5\u597d torchtext \u540e\uff0c\u4fbf\u53ef\u4ee5\u5f00\u59cb\u4f53\u9a8c Torchtext \u7684\u79cd\u79cd\u4fbf\u6377\u4e4b\u5904\u3002</li>\n</ul>\n</li>\n<li><a href=\"https://github.com/IBM/pytorch-seq2seq\" rel=\"nofollow\">Pytorch-Seq2seq</a>\uff1aSeq2seq \u662f\u4e00\u4e2a\u5feb\u901f\u53d1\u5c55\u7684\u9886\u57df\uff0c\u65b0\u6280\u672f\u548c\u65b0\u6846\u67b6\u7ecf\u5e38\u5728\u6b64\u53d1\u5e03\u3002\u8fd9\u4e2a\u5e93\u662f\u5728 PyTorch \u4e2d\u5b9e\u73b0\u7684 Seq2seq \u6a21\u578b\u7684\u6846\u67b6\uff0c\u8be5\u6846\u67b6\u4e3a Seq2seq \u6a21\u578b\u7684\u8bad\u7ec3\u548c\u9884\u6d4b\u7b49\u90fd\u63d0\u4f9b\u4e86\u6a21\u5757\u5316\u548c\u53ef\u6269\u5c55\u7684\u7ec4\u4ef6\uff0c\u6b64 github \u9879\u76ee\u662f\u4e00\u4e2a\u57fa\u7840\u7248\u672c\uff0c\u76ee\u6807\u662f\u4fc3\u8fdb\u8fd9\u4e9b\u6280\u672f\u548c\u5e94\u7528\u7a0b\u5e8f\u7684\u5f00\u53d1\u3002</li>\n<li><a href=\"https://github.com/kamalkraj/BERT-NER\" rel=\"nofollow\">BERT NER</a>\uff1aBERT 
\u662f 2018 \u5e74 google \u63d0\u51fa\u6765\u7684\u9884\u8bad\u7ec3\u8bed\u8a00\u6a21\u578b\uff0c\u81ea\u5176\u8bde\u751f\u540e\u6253\u7834\u4e86\u4e00\u7cfb\u5217\u7684 NLP \u4efb\u52a1\uff0c\u6240\u4ee5\u5176\u5728 nlp \u7684\u9886\u57df\u4e00\u76f4\u5177\u6709\u5f88\u91cd\u8981\u7684\u5f71\u54cd\u529b\u3002\u8be5 github \u5e93\u662f BERT \u7684 PyTorch \u7248\u672c\uff0c\u5185\u7f6e\u4e86\u5f88\u591a\u5f3a\u5927\u7684\u9884\u8bad\u7ec3\u6a21\u578b\uff0c\u4f7f\u7528\u65f6\u975e\u5e38\u65b9\u4fbf\u3001\u6613\u4e0a\u624b\u3002</li>\n<li><a href=\"https://github.com/pytorch/fairseq\" rel=\"nofollow\">Fairseq</a>\uff1aFairseq \u662f\u4e00\u4e2a\u5e8f\u5217\u5efa\u6a21\u5de5\u5177\u5305\uff0c\u5141\u8bb8\u7814\u7a76\u4eba\u5458\u548c\u5f00\u53d1\u4eba\u5458\u4e3a\u7ffb\u8bd1\u3001\u603b\u7ed3\u3001\u8bed\u8a00\u5efa\u6a21\u548c\u5176\u4ed6\u6587\u672c\u751f\u6210\u4efb\u52a1\u8bad\u7ec3\u81ea\u5b9a\u4e49\u6a21\u578b\uff0c\u5b83\u8fd8\u63d0\u4f9b\u4e86\u5404\u79cd Seq2seq \u6a21\u578b\u7684\u53c2\u8003\u5b9e\u73b0\u3002\u8be5 github \u5b58\u50a8\u5e93\u5305\u542b\u6709\u5173\u5165\u95e8\u3001\u8bad\u7ec3\u65b0\u6a21\u578b\u3001\u4f7f\u7528\u65b0\u6a21\u578b\u548c\u4efb\u52a1\u6269\u5c55 Fairseq \u7684\u8bf4\u660e\uff0c\u5bf9\u8be5\u6a21\u578b\u611f\u5174\u8da3\u7684\u5c0f\u4f19\u4f34\u53ef\u4ee5\u70b9\u51fb\u4e0a\u65b9\u94fe\u63a5\u5b66\u4e60\u3002</li>\n<li><a href=\"https://github.com/outcastofmusic/quick-nlp\" rel=\"nofollow\">Quick-nlp</a>\uff1aQuick-nlp \u662f\u4e00\u4e2a\u6df1\u53d7 <a href=\"http://fast.ai\" rel=\"nofollow\">fast.ai</a> \u5e93\u542f\u53d1\u7684\u6df1\u5165\u5b66\u4e60 Nlp \u5e93\u3002\u5b83\u9075\u5faa\u4e0e Fastai \u76f8\u540c\u7684 API\uff0c\u5e76\u5bf9\u5176\u8fdb\u884c\u4e86\u6269\u5c55\uff0c\u5141\u8bb8\u5feb\u901f\u3001\u8f7b\u677e\u5730\u8fd0\u884c NLP \u6a21\u578b\u3002</li>\n<li><a href=\"https://github.com/OpenNMT/OpenNMT-py\" rel=\"nofollow\">OpenNMT-py</a>\uff1a\u8fd9\u662f OpenNMT \u7684\u4e00\u4e2a PyTorch 
\u5b9e\u73b0\uff0c\u4e00\u4e2a\u5f00\u653e\u6e90\u7801\u7684\u795e\u7ecf\u7f51\u7edc\u673a\u5668\u7ffb\u8bd1\u7cfb\u7edf\u3002\u5b83\u7684\u8bbe\u8ba1\u662f\u4e3a\u4e86\u4fbf\u4e8e\u7814\u7a76\uff0c\u5c1d\u8bd5\u65b0\u7684\u60f3\u6cd5\uff0c\u4ee5\u53ca\u5728\u7ffb\u8bd1\uff0c\u603b\u7ed3\uff0c\u56fe\u50cf\u5230\u6587\u672c\uff0c\u5f62\u6001\u5b66\u7b49\u8bb8\u591a\u9886\u57df\u4e2d\u5c1d\u8bd5\u65b0\u7684\u60f3\u6cd5\u3002\u4e00\u4e9b\u516c\u53f8\u5df2\u7ecf\u8bc1\u660e\u8be5\u4ee3\u7801\u53ef\u4ee5\u7528\u4e8e\u5b9e\u9645\u7684\u5de5\u4e1a\u9879\u76ee\u4e2d\uff0c\u66f4\u591a\u5173\u4e8e\u8fd9\u4e2a github \u7684\u8be6\u7ec6\u4fe1\u606f\u8bf7\u53c2\u9605\u4ee5\u4e0a\u94fe\u63a5\u3002</li>\n</ul>\n<h2>CV&amp;PyTorch \u5b9e\u6218</h2>\n<ul>\n<li><a href=\"https://github.com/pytorch/vision\" rel=\"nofollow\">pytorch vision</a>\uff1aTorchvision \u662f\u72ec\u7acb\u4e8e pytorch \u7684\u5173\u4e8e\u56fe\u50cf\u64cd\u4f5c\u7684\u4e00\u4e9b\u65b9\u4fbf\u5de5\u5177\u5e93\u3002\u4e3b\u8981\u5305\u62ec\uff1avision.datasets\u3001vision.models\u3001vision.transforms\u3001vision.utils \u51e0\u4e2a\u5305\uff0c\u5b89\u88c5\u548c\u4f7f\u7528\u90fd\u975e\u5e38\u7b80\u5355\uff0c\u611f\u5174\u8da3\u7684\u5c0f\u4f19\u4f34\u4eec\u53ef\u4ee5\u53c2\u8003\u4ee5\u4e0a\u94fe\u63a5\u3002</li>\n<li><a href=\"https://github.com/thnkim/OpenFacePytorch\" rel=\"nofollow\">OpenFacePytorch</a>\uff1a\u6b64 github \u5e93\u662f OpenFace \u5728 Pytorch \u4e2d\u7684\u5b9e\u73b0\uff0c\u4ee3\u7801\u8981\u6c42\u8f93\u5165\u7684\u56fe\u50cf\u8981\u4e0e\u539f\u59cb OpenFace \u76f8\u540c\u7684\u65b9\u5f0f\u5bf9\u9f50\u548c\u88c1\u526a\u3002</li>\n<li><a href=\"https://github.com/donnyyou/torchcv\" rel=\"nofollow\">TorchCV</a>\uff1aTorchCV \u662f\u4e00\u4e2a\u57fa\u4e8e PyTorch \u7684\u8ba1\u7b97\u673a\u89c6\u89c9\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6\uff0c\u652f\u6301\u5927\u90e8\u5206\u89c6\u89c9\u4efb\u52a1\u8bad\u7ec3\u548c\u90e8\u7f72\uff0c\u6b64 github 
\u5e93\u4e3a\u5927\u591a\u6570\u57fa\u4e8e\u6df1\u5ea6\u5b66\u4e60\u7684 CV \u95ee\u9898\u63d0\u4f9b\u6e90\u4ee3\u7801\uff0c\u5bf9 CV \u65b9\u5411\u611f\u5174\u8da3\u7684\u5c0f\u4f19\u4f34\u8fd8\u5728\u7b49\u4ec0\u4e48\uff1f</li>\n<li><a href=\"https://github.com/creafz/pytorch-cnn-finetune\" rel=\"nofollow\">Pytorch-cnn-finetune</a>\uff1a\u8be5 github \u5e93\u662f\u5229\u7528 pytorch \u5bf9\u9884\u8bad\u7ec3\u5377\u79ef\u795e\u7ecf\u7f51\u7edc\u8fdb\u884c\u5fae\u8c03\uff0c\u652f\u6301\u7684\u67b6\u6784\u548c\u6a21\u578b\u5305\u62ec\uff1aResNet\u3001DenseNet\u3001Inception v3\u3001VGG\u3001SqueezeNet\u3001AlexNet \u7b49\u3002</li>\n<li><a href=\"https://github.com/tymokvo/pt-styletransfer#pt-styletransfer\" rel=\"nofollow\">Pt-styletransfer</a>\uff1a\u8fd9\u4e2a github \u9879\u76ee\u662f Pytorch \u4e2d\u7684\u795e\u7ecf\u98ce\u683c\u8f6c\u6362\uff0c\u5177\u4f53\u6709\u4ee5\u4e0b\u51e0\u4e2a\u9700\u8981\u6ce8\u610f\u7684\u5730\u65b9\uff1a\n<ul>\n<li>StyleTransferNet \u4f5c\u4e3a\u53ef\u7531\u5176\u4ed6\u811a\u672c\u5bfc\u5165\u7684\u7c7b\uff1b</li>\n<li>\u652f\u6301 VGG \uff08\u8fd9\u662f\u5728 PyTorch \u4e2d\u63d0\u4f9b\u9884\u8bad\u7ec3\u7684 VGG \u6a21\u578b\u4e4b\u524d\uff09</li>\n<li>\u53ef\u4fdd\u5b58\u7528\u4e8e\u663e\u793a\u7684\u4e2d\u95f4\u6837\u5f0f\u548c\u5185\u5bb9\u76ee\u6807\u7684\u529f\u80fd</li>\n<li>\u53ef\u4f5c\u4e3a\u56fe\u50cf\u68c0\u67e5\u56fe\u77e9\u9635\u7684\u51fd\u6570</li>\n<li>\u81ea\u52a8\u6837\u5f0f\u3001\u5185\u5bb9\u548c\u4ea7\u54c1\u56fe\u50cf\u4fdd\u5b58</li>\n<li>\u4e00\u6bb5\u65f6\u95f4\u5185\u635f\u5931\u7684 Matplotlib \u56fe\u548c\u8d85\u53c2\u6570\u8bb0\u5f55\uff0c\u4ee5\u8ddf\u8e2a\u6709\u5229\u7684\u7ed3\u679c</li>\n</ul>\n</li>\n<li><a href=\"https://github.com/1adrianb/face-alignment#face-recognition\" rel=\"nofollow\">Face-alignment</a>\uff1aFace-alignment \u662f\u4e00\u4e2a\u7528 pytorch \u5b9e\u73b0\u7684 2D \u548c 3D 
\u4eba\u8138\u5bf9\u9f50\u5e93\uff0c\u4f7f\u7528\u4e16\u754c\u4e0a\u6700\u51c6\u786e\u7684\u9762\u5bf9\u9f50\u7f51\u7edc\u4ece Python \u68c0\u6d4b\u9762\u90e8\u5730\u6807\uff0c\u80fd\u591f\u5728 2D \u548c 3D \u5750\u6807\u4e2d\u68c0\u6d4b\u70b9\u3002\u8be5 github \u5e93\u8be6\u7ec6\u7684\u4ecb\u7ecd\u4e86\u4f7f\u7528 Face-alignment \u8fdb\u884c\u4eba\u8138\u5bf9\u9f50\u7684\u57fa\u672c\u6d41\u7a0b\uff0c\u6b22\u8fce\u611f\u5174\u8da3\u7684\u540c\u5b66\u5b66\u4e60\u3002</li>\n</ul>\n<h2>PyTorch \u8bba\u6587\u63a8\u8350</h2>\n<ul>\n<li><a href=\"https://github.com/neuralix/google_evolution\" rel=\"nofollow\">Google_evolution</a>\uff1a\u8be5\u8bba\u6587\u5b9e\u73b0\u4e86\u5b9e\u73b0\u4e86\u7531 Esteban Real \u7b49\u4eba\u63d0\u51fa\u7684\u56fe\u50cf\u5206\u7c7b\u5668\u5927\u89c4\u6a21\u6f14\u5316\u7684\u7ed3\u679c\u7f51\u7edc\u3002\u5728\u5b9e\u9a8c\u4e4b\u524d\uff0c\u9700\u8981\u6211\u4eec\u5b89\u88c5\u597d PyTorch\u3001Scikit-learn \u4ee5\u53ca\u4e0b\u8f7d\u597d <a href=\"https://www.cs.toronto.edu/%7Ekriz/cifar.html\" rel=\"nofollow\">CIFAR10 dataset \u6570\u636e\u96c6</a>\u3002</li>\n<li><a href=\"https://github.com/onlytailei/Value-Iteration-Networks-PyTorch\" rel=\"nofollow\">PyTorch-value-iteration-networks</a>\uff1a\u8be5\u8bba\u6587\u57fa\u4e8e\u4f5c\u8005\u6700\u521d\u7684 Theano \u5b9e\u73b0\u548c Abhishek Kumar \u7684 Tensoflow \u5b9e\u73b0\uff0c\u5305\u542b\u4e86\u5728 PyTorch \u4e2d\u5b9e\u73b0\u4ef7\u503c\u8fed\u4ee3\u7f51\u7edc\uff08 VIN \uff09\u3002Vin \u5728 NIPS 2016 \u5e74\u83b7\u5f97\u6700\u4f73\u8bba\u6587\u5956\u3002</li>\n<li><a href=\"https://github.com/kefirski/pytorch_Highway\" rel=\"nofollow\">Pytorch Highway</a>\uff1aHighway Netowrks \u662f\u5141\u8bb8\u4fe1\u606f\u9ad8\u901f\u65e0\u963b\u788d\u7684\u901a\u8fc7\u5404\u5c42\uff0c\u5b83\u662f\u4ece Long Short Term Memory(LSTM) recurrent networks \u4e2d\u7684 gate 
\u673a\u5236\u53d7\u5230\u542f\u53d1\uff0c\u53ef\u4ee5\u8ba9\u4fe1\u606f\u65e0\u963b\u788d\u7684\u901a\u8fc7\u8bb8\u591a\u5c42\uff0c\u8fbe\u5230\u8bad\u7ec3\u6df1\u5c42\u795e\u7ecf\u7f51\u7edc\u7684\u6548\u679c\uff0c\u4f7f\u6df1\u5c42\u795e\u7ecf\u7f51\u7edc\u4e0d\u5728\u4ec5\u4ec5\u5177\u6709\u6d45\u5c42\u795e\u7ecf\u7f51\u7edc\u7684\u6548\u679c\u3002\u8be5\u8bba\u6587\u662f Highway network \u57fa\u4e8e Pytorch \u7684\u5b9e\u73b0\u3002</li>\n<li><a href=\"https://github.com/edouardoyallon/pyscatwave\" rel=\"nofollow\">Pyscatwave</a>\uff1aCupy/Pythorn \u7684\u6563\u5c04\u5b9e\u73b0\u3002\u6563\u5c04\u7f51\u7edc\u662f\u4e00\u79cd\u5377\u79ef\u7f51\u7edc\uff0c\u5b83\u7684\u6ee4\u6ce2\u5668\u88ab\u9884\u5148\u5b9a\u4e49\u4e3a\u5b50\u6ce2\uff0c\u4e0d\u9700\u8981\u5b66\u4e60\uff0c\u53ef\u4ee5\u7528\u4e8e\u56fe\u50cf\u5206\u7c7b\u7b49\u89c6\u89c9\u4efb\u52a1\u3002\u6563\u5c04\u53d8\u6362\u53ef\u4ee5\u663e\u8457\u964d\u4f4e\u8f93\u5165\u7684\u7a7a\u95f4\u5206\u8fa8\u7387\uff08\u4f8b\u5982 224x224-&gt;14x14 \uff09\uff0c\u4e14\u53cc\u5173\u529f\u7387\u635f\u5931\u660e\u663e\u4e3a\u8d1f\u3002</li>\n<li><a href=\"https://github.com/kefirski/pytorch_NEG_loss\" rel=\"nofollow\">Pytorch_NEG_loss</a>\uff1a\u8be5\u8bba\u6587\u662f Negative Sampling Loss \u7684 Pytorch \u5b9e\u73b0\u3002Negative Sampling \u662f\u4e00\u79cd\u6c42\u89e3 word2vec \u6a21\u578b\u7684\u65b9\u6cd5\uff0c\u5b83\u6452\u5f03\u4e86\u970d\u592b\u66fc\u6811\uff0c\u91c7\u7528\u4e86 Negative Sampling \uff08\u8d1f\u91c7\u6837\uff09\u7684\u65b9\u6cd5\u6765\u6c42\u89e3\uff0c\u672c\u8bba\u6587\u662f\u5bf9 Negative Sampling \u7684 loss \u51fd\u6570\u7684\u7814\u7a76\uff0c\u611f\u5174\u8da3\u7684\u5c0f\u4f19\u4f34\u53ef\u70b9\u51fb\u4e0a\u65b9\u8bba\u6587\u94fe\u63a5\u5b66\u4e60\u3002</li>\n<li><a href=\"https://github.com/kefirski/pytorch_TDNN\" rel=\"nofollow\">Pytorch_TDNN</a>\uff1a\u8be5\u8bba\u6587\u662f\u5bf9 Time Delayed NN \u7684 Pytorch \u5b9e\u73b0\u3002\u8bba\u6587\u8be6\u7ec6\u7684\u8bb2\u8ff0\u4e86 TDNN 
\u7684\u539f\u7406\u4ee5\u53ca\u5b9e\u73b0\u8fc7\u7a0b\u3002</li>\n</ul>\n<h2>PyTorch \u4e66\u7c4d\u63a8\u8350</h2>\n<p>\u76f8\u8f83\u4e8e\u76ee\u524d Tensorflow \u7c7b\u578b\u7684\u4e66\u7c4d\u5df2\u7ecf\u70c2\u5927\u8857\u7684\u72b6\u51b5\uff0cPyTorch \u7c7b\u7684\u4e66\u7c4d\u76ee\u524d\u5df2\u51fa\u7248\u7684\u5e76\u6ca1\u6709\u90a3\u4e48\u591a\uff0c\u7b14\u8005\u7ed9\u5927\u5bb6\u63a8\u8350\u6211\u8ba4\u4e3a\u8fd8\u4e0d\u9519\u7684\u56db\u672c PyTorch \u4e66\u7c4d\u3002</p>\n<ul>\n<li><strong>\u300a\u6df1\u5ea6\u5b66\u4e60\u5165\u95e8\u4e4b PyTorch \u300b</strong>\uff0c\u7535\u5b50\u5de5\u4e1a\u51fa\u7248\u793e\uff0c\u4f5c\u8005\uff1a\u5ed6\u661f\u5b87\u3002\u8fd9\u672c\u300a\u6df1\u5ea6\u5b66\u4e60\u5165\u95e8\u4e4b PyTorch \u300b\u662f\u6240\u6709 PyTorch \u4e66\u7c4d\u4e2d\u51fa\u7248\u7684\u76f8\u5bf9\u8f83\u65e9\u7684\u4e00\u672c\uff0c\u4f5c\u8005\u4ee5\u81ea\u5df1\u7684\u5c0f\u767d\u5165\u95e8\u6df1\u5ea6\u5b66\u4e60\u4e4b\u8def\uff0c\u6df1\u5165\u6d45\u51fa\u7684\u8bb2\u89e3\u4e86 PyTorch \u7684\u8bed\u6cd5\u3001\u539f\u7406\u4ee5\u53ca\u5b9e\u6218\u7b49\u5185\u5bb9\uff0c\u9002\u5408\u65b0\u624b\u7684\u5165\u95e8\u5b66\u4e60\u3002\u4f46\u4e0d\u8db3\u7684\u662f\uff0c\u4e66\u4e2d\u6709\u5f88\u591a\u4e0d\u4e25\u8c28\u4ee5\u53ca\u751f\u642c\u786c\u5957\u7684\u5730\u65b9\uff0c\u9700\u8981\u8bfb\u8005\u597d\u597d\u7504\u522b\u3002\n\u63a8\u8350\u6307\u6570\uff1a\u2605\u2605\u2605</li>\n<li><strong>\u300a PyTorch \u6df1\u5ea6\u5b66\u4e60\u300b</strong>\uff0c\u4eba\u6c11\u90ae\u7535\u51fa\u7248\u793e\uff0c\u4f5c\u8005\uff1a\u738b\u6d77\u73b2\u3001\u5218\u6c5f\u5cf0\u3002\u8be5\u4e66\u662f\u4e00\u672c\u82f1\u8bd1\u4e66\u7c4d\uff0c\u539f\u4f5c\u8005\u662f\u4e24\u4f4d\u5370\u5ea6\u7684\u5927\u4f6c\uff0c\u8be5\u4e66\u9664\u4e86 PyTorch \u57fa\u672c\u8bed\u6cd5\u3001\u51fd\u6570\u5916\uff0c\u8fd8\u6db5\u76d6\u4e86 ResNET\u3001Inception\u3001DenseNet 
\u7b49\u5728\u5185\u7684\u9ad8\u7ea7\u795e\u7ecf\u7f51\u7edc\u67b6\u6784\u4ee5\u53ca\u5b83\u4eec\u7684\u5e94\u7528\u6848\u4f8b\u3002\u8be5\u4e66\u9002\u5408\u6570\u636e\u5206\u6790\u5e08\u3001\u6570\u636e\u79d1\u5b66\u5bb6\u7b49\u76f8\u5bf9\u6709\u4e00\u4e9b\u7406\u8bba\u57fa\u7840\u548c\u5b9e\u6218\u7ecf\u9a8c\u7684\u8bfb\u8005\u5b66\u4e60\uff0c\u4e0d\u592a\u5efa\u8bae\u4f5c\u4e3a\u65b0\u624b\u7684\u5165\u95e8\u9009\u62e9\u3002\n\u63a8\u8350\u6307\u6570\uff1a\u2605\u2605\u2605</li>\n<li><strong>\u300a\u6df1\u5ea6\u5b66\u4e60\u6846\u67b6 PyTorch \u5165\u95e8\u4e0e\u5b9e\u8df5\u300b</strong>\uff0c\u7535\u5b50\u5de5\u4e1a\u51fa\u7248\u793e\uff0c\u4f5c\u8005\uff1a\u9648\u4e91\u3002\u8fd9\u662f\u4e00\u672c 2018 \u5e74\u4e0a\u5e02\u7684 PyTorch \u4e66\u7c4d\uff0c\u5305\u542b\u7406\u8bba\u5165\u95e8\u548c\u5b9e\u6218\u9879\u76ee\u4e24\u5927\u90e8\u5206\uff0c\u76f8\u8f83\u4e8e\u5176\u5b83\u540c\u7c7b\u578b\u4e66\u7c4d\uff0c\u8be5\u4e66\u6848\u4f8b\u975e\u5e38\u7684\u7fd4\u5b9e\uff0c\u5305\u62ec\uff1aKaggle \u7ade\u8d5b\u4e2d\u7ecf\u5178\u9879\u76ee\u3001GAN \u751f\u6210\u52a8\u6f2b\u5934\u50cf\u3001AI \u6ee4\u955c\u3001RNN \u5199\u8bd7\u3001\u56fe\u50cf\u63cf\u8ff0\u4efb\u52a1\u7b49\u3002\u7406\u8bba+\u5b9e\u6218\u7684\u5185\u5bb9\u8bbe\u7f6e\u4e5f\u66f4\u9002\u5408\u6df1\u5ea6\u5b66\u4e60\u5165\u95e8\u8005\u548c\u4ece\u4e1a\u8005\u5b66\u4e60\u3002\n\u63a8\u8350\u6307\u6570\uff1a\u2605\u2605\u2605\u2605</li>\n<li><strong>\u300a PyTorch \u673a\u5668\u5b66\u4e60\u4ece\u5165\u95e8\u5230\u5b9e\u6218\u300b</strong>\uff0c\u673a\u68b0\u5de5\u4e1a\u51fa\u7248\u793e\uff0c\u4f5c\u8005\uff1a\u6821\u5b9d\u5728\u7ebf\u3001\u5b59\u7433\u7b49\u3002\u8be5\u4e66\u540c\u6837\u662f\u4e00\u672c\u7406\u8bba\u7ed3\u5408\u5b9e\u6218\u7684 Pytorch 
\u6559\u7a0b\uff0c\u76f8\u8f83\u4e8e\u524d\u4e00\u672c\u5165\u95e8+\u5b9e\u6218\u6559\u7a0b\uff0c\u672c\u4e66\u7684\u7279\u8272\u5728\u4e8e\u5173\u4e8e\u6df1\u5ea6\u5b66\u4e60\u7684\u7406\u8bba\u90e8\u5206\u8bb2\u7684\u975e\u5e38\u8be6\u7ec6\uff0c\u540e\u8fb9\u7684\u5b9e\u6218\u9879\u76ee\u66f4\u52a0\u7684\u7efc\u5408\u3002\u603b\u4f53\u800c\u8a00\uff0c\u672c\u4e66\u4e5f\u662f\u4e00\u672c\u9002\u5408\u65b0\u624b\u5b66\u4e60\u7684\u4e0d\u9519\u7684 PyTorch \u5165\u95e8\u4e66\u7c4d\u3002\n\u63a8\u8350\u6307\u6570\uff1a\u2605\u2605\u2605</li>\n</ul>\n<p>\u6b22\u8fce Star Fork : <a href=\"https://github.com/INTERMT/Awesome-PyTorch-Chinese\" rel=\"nofollow\">https://github.com/INTERMT/Awesome-PyTorch-Chinese</a></p>\n", 
      "date_published": "2019-05-07T08:59:52+00:00", 
      "title": "[\u5e72\u8d27] \u53f2\u4e0a\u6700\u5168\u7684 PyTorch \u5b66\u4e60\u8d44\u6e90\u6c47\u603b import torch as tf", 
      "id": "https://www.v2ex.com/t/561876"
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/1722332572", 
        "name": "1722332572", 
        "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858"
      }, 
      "url": "https://www.v2ex.com/t/521234", 
      "title": "PyTorch 60 \u5206\u949f\u5b89\u88c5\u5165\u95e8\u6559\u7a0b", 
      "id": "https://www.v2ex.com/t/521234", 
      "date_published": "2018-12-26T08:54:19+00:00", 
      "content_html": "PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1aPyTorch \u6df1\u5ea6\u5b66\u4e60\u5b98\u65b9\u5165\u95e8\u4e2d\u6587\u6559\u7a0b<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/06/25/what-is-pytorch/\" rel=\"nofollow\">http://pytorchchina.com/2018/06/25/what-is-pytorch/</a><br />PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1a\u81ea\u52a8\u5fae\u5206<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/25/autograd-automatic-differentiation/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/25/autograd-automatic-differentiation/</a><br />PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1a\u795e\u7ecf\u7f51\u7edc<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/25/neural-networks/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/25/neural-networks/</a><br />PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1aPyTorch \u8bad\u7ec3\u5206\u7c7b\u5668<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/25/training-a-classifier/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/25/training-a-classifier/</a><br />PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1a\u6570\u636e\u5e76\u884c\u5904\u7406<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/11/optional-data-parallelism/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/11/optional-data-parallelism/</a><br />PyTorch 60 \u5206\u949f\u5b89\u88c5\u5165\u95e8\u6559\u7a0b<br /><a target=\"_blank\" href=\"http://pytorchchina.com\" rel=\"nofollow\">http://pytorchchina.com</a>"
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/1722332572", 
        "name": "1722332572", 
        "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858"
      }, 
      "url": "https://www.v2ex.com/t/516574", 
      "title": "PyTorch \u5b89\u88c5\u6559\u7a0b", 
      "id": "https://www.v2ex.com/t/516574", 
      "date_published": "2018-12-11T08:55:40+00:00", 
      "content_html": "PyTorch windows \u5b89\u88c5\u6559\u7a0b\uff1a\u4e24\u884c\u4ee3\u7801\u641e\u5b9a PyTorch \u5b89\u88c5<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/11/pytorch-windows-install-1/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/11/pytorch-windows-install-1/</a><br />PyTorch Mac \u5b89\u88c5\u6559\u7a0b<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/11/pytorch-mac-install/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/11/pytorch-mac-install/</a><br />PyTorch Linux \u5b89\u88c5\u6559\u7a0b<br /><a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/11/pytorch-linux-install/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/11/pytorch-linux-install/</a>"
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/1722332572", 
        "name": "1722332572", 
        "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858"
      }, 
      "url": "https://www.v2ex.com/t/516422", 
      "title": "PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b", 
      "id": "https://www.v2ex.com/t/516422", 
      "date_published": "2018-12-11T03:26:21+00:00", 
      "content_html": "<p>What is PyTorch? PyTorch is a Python-based scientific computing package targeted at two audiences: a replacement for NumPy that can use the power of GPUs, and a deep learning research platform that provides maximum flexibility and speed.</p>\n<p>Getting started: Tensors. Tensors are similar to NumPy's ndarrays, with the addition that Tensors can also be used on a GPU to accelerate computing.</p>\n<pre><code class=\"language-python\">from __future__ import print_function\nimport torch\n</code></pre>\n<p>Construct a 5x3 matrix, uninitialized:</p>\n<pre><code class=\"language-python\">x = torch.empty(5, 3)\nprint(x)\n</code></pre>\n<p>Output:</p>\n<pre><code>tensor(1.00000e-04 *\n       [[-0.0000,  0.0000,  1.5135],\n        [ 0.0000,  0.0000,  0.0000],\n        [ 0.0000,  0.0000,  0.0000],\n        [ 0.0000,  0.0000,  0.0000],\n        [ 0.0000,  0.0000,  0.0000]])\n</code></pre>\n<p>Construct a randomly initialized matrix:</p>\n<pre><code class=\"language-python\">x = torch.rand(5, 3)\nprint(x)\n</code></pre>\n<p>Output:</p>\n<pre><code>tensor([[ 0.6291,  0.2581,  0.6414],\n        [ 0.9739,  0.8243,  0.2276],\n        [ 0.4184,  0.1815,  0.5131],\n        [ 0.5533,  0.5440,  0.0718],\n        [ 0.2908,  0.1850,  0.5297]])\n</code></pre>\n<p>Construct a matrix filled with zeros, of dtype long:</p>\n<pre><code class=\"language-python\">x = torch.zeros(5, 3, dtype=torch.long)\nprint(x)\n</code></pre>\n<p>Output:</p>\n<pre><code>tensor([[ 0,  0,  0],\n        [ 0,  0,  0],\n        [ 0,  0,  0],\n        [ 0,  0,  0],\n        [ 0,  0,  0]])\n</code></pre>\n<p>Construct a tensor directly from data:</p>\n<pre><code class=\"language-python\">x = torch.tensor([5.5, 3])\nprint(x)\n</code></pre>\n<p>Output:</p>\n<pre><code>tensor([ 5.5000,  3.0000])\n</code></pre>\n<p>Create a tensor based on an existing tensor:</p>\n<pre><code class=\"language-python\">x = x.new_ones(5, 3, dtype=torch.double)      # new_* methods take in sizes\nprint(x)\n\nx = torch.randn_like(x, dtype=torch.float)    # override dtype!\nprint(x)                                      # result has the same size\n</code></pre>\n<p>Output:</p>\n<pre><code>tensor([[ 1.,  1.,  1.],\n        [ 1.,  1.,  1.],\n        [ 1.,  1.,  1.],\n        [ 1.,  1.,  1.],\n        [ 1.,  1.,  1.]], dtype=torch.float64)\ntensor([[-0.2183,  0.4477, -0.4053],\n        [ 1.7353, -0.0048,  1.2177],\n        [-1.1111,  1.0878,  0.9722],\n        [-0.7771, -0.2174,  0.0412],\n        [-2.1750,  1.3609, -0.3322]])\n</code></pre>\n<p>Get its size:</p>\n<pre><code class=\"language-python\">print(x.size())\n</code></pre>\n<p>Output:</p>\n<pre><code>torch.Size([5, 3])\n</code></pre>\n<p>Note: torch.Size is a tuple, so it supports all tuple operations.</p>\n<p>Operations. In the following examples, we will look at addition.</p>\n<p>Addition: syntax 1</p>\n<pre><code class=\"language-python\">y = torch.rand(5, 3)\nprint(x + y)\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([[-0.1859,  1.3970,  0.5236],\n        [ 2.3854,  0.0707,  2.1970],\n        [-0.3587,  1.2359,  1.8951],\n        [-0.1189, -0.1376,  0.4647],\n        [-1.8968,  2.0164,  0.1092]])\n</code></pre>\n<p>Addition: syntax 2</p>\n<pre><code class=\"language-python\">print(torch.add(x, y))\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([[-0.1859,  1.3970,  0.5236],\n        [ 2.3854,  0.0707,  2.1970],\n        [-0.3587,  1.2359,  1.8951],\n        [-0.1189, -0.1376,  0.4647],\n        [-1.8968,  2.0164,  0.1092]])\n</code></pre>\n<p>Addition: providing an output tensor as an argument</p>\n<pre><code class=\"language-python\">result = torch.empty(5, 3)\ntorch.add(x, y, out=result)\nprint(result)\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([[-0.1859,  1.3970,  0.5236],\n        [ 2.3854,  0.0707,  2.1970],\n        [-0.3587,  1.2359,  1.8951],\n        [-0.1189, -0.1376,  0.4647],\n        [-1.8968,  2.0164,  0.1092]])\n</code></pre>\n<p>Addition: in-place</p>\n<pre><code class=\"language-python\"># adds x to y\ny.add_(x)\nprint(y)\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([[-0.1859,  1.3970,  0.5236],\n        [ 2.3854,  0.0707,  2.1970],\n        [-0.3587,  1.2359,  1.8951],\n        [-0.1189, -0.1376,  0.4647],\n        [-1.8968,  2.0164,  0.1092]])\n</code></pre>\n<p>Note: any operation that mutates a tensor in-place is post-fixed with an underscore '_'. For example, x.copy_(y) and x.t_() will change x.</p>\n<p>You can use standard NumPy-like indexing:</p>\n<pre><code class=\"language-python\">print(x[:, 1])\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([ 0.4477, -0.0048,  1.0878, -0.2174,  1.3609])\n</code></pre>\n<p>Resizing: if you want to resize or reshape a tensor, you can use torch.view:</p>\n<pre><code class=\"language-python\">x = torch.randn(4, 4)\ny = x.view(16)\nz = x.view(-1, 8)  # the size -1 is inferred from other dimensions\nprint(x.size(), y.size(), z.size())\n</code></pre>\n<p>Out:</p>\n<pre><code>torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])\n</code></pre>\n<p>If you have a one-element tensor, use .item() to get the value as a Python number:</p>\n<pre><code class=\"language-python\">x = torch.randn(1)\nprint(x)\nprint(x.item())\n</code></pre>\n<p>Out:</p>\n<pre><code>tensor([ 0.9422])\n0.9422121644020081\n</code></pre>\n<p>PyTorch beginner tutorial: <a target=\"_blank\" href=\"http://pytorchchina.com/2018/06/25/what-is-pytorch/\" rel=\"nofollow\">http://pytorchchina.com/2018/06/25/what-is-pytorch/</a></p>"
    }, 
    {
      "author": {
        "url": "https://www.v2ex.com/member/1722332572", 
        "name": "1722332572", 
        "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858"
      }, 
      "url": "https://www.v2ex.com/t/516416", 
      "title": "PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1a\u6570\u636e\u5e76\u884c\u5904\u7406", 
      "id": "https://www.v2ex.com/t/516416", 
      "date_published": "2018-12-11T03:15:52+00:00", 
      "content_html": "<p>Optional: Data Parallelism (full code download at the end of the post)<br />Authors: Sung Kim and Jenny Kang</p>\n<p>In this tutorial, we will learn how to use multiple GPUs with DataParallel. Using multiple GPUs with PyTorch is very easy. You can put the model on a GPU:</p>\n<pre><code class=\"language-python\">device = torch.device(\"cuda:0\")\nmodel.to(device)\n</code></pre>\n<p>Then, you can copy all your tensors to the GPU:</p>\n<pre><code class=\"language-python\">mytensor = my_tensor.to(device)\n</code></pre>\n<p>Please note that just calling my_tensor.to(device) returns a new copy of my_tensor on the GPU instead of rewriting my_tensor. You need to assign it to a new tensor and use that tensor on the GPU.</p>\n<p>It is natural to execute forward and backward propagations on multiple GPUs. However, PyTorch will only use one GPU by default. You can easily run your operations on multiple GPUs by making your model run in parallel with DataParallel:</p>\n<pre><code class=\"language-python\">model = nn.DataParallel(model)\n</code></pre>\n<p>That is the core of this tutorial; we will explain it in detail below.</p>\n<p>Imports and parameters: import the PyTorch modules and define the parameters.</p>\n<pre><code class=\"language-python\">import torch\nimport torch.nn as nn\nfrom torch.utils.data import Dataset, DataLoader\n\n# Parameters\ninput_size = 5\noutput_size = 2\n\nbatch_size = 30\ndata_size = 100\n</code></pre>\n<p>Device:</p>\n<pre><code class=\"language-python\">device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n</code></pre>\n<p>Dummy (toy) dataset: make a dummy dataset. You only need to implement __getitem__.</p>\n<pre><code class=\"language-python\">class RandomDataset(Dataset):\n\n    def __init__(self, size, length):\n        self.len = length\n        self.data = torch.randn(length, size)\n\n    def __getitem__(self, index):\n        return self.data[index]\n\n    def __len__(self):\n        return self.len\n\nrand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),\n                         batch_size=batch_size, shuffle=True)\n</code></pre>\n<p>Simple model: for the demo, our model just takes an input, performs a linear operation, and gives an output. However, you can use DataParallel on any model (CNN, RNN, Capsule Net, etc.). We placed a print statement inside the model to monitor the size of the input and output tensors. Please pay attention to what is printed at batch rank 0.</p>\n<pre><code class=\"language-python\">class Model(nn.Module):\n    # Our model\n\n    def __init__(self, input_size, output_size):\n        super(Model, self).__init__()\n        self.fc = nn.Linear(input_size, output_size)\n\n    def forward(self, input):\n        output = self.fc(input)\n        print(\"\\tIn Model: input size\", input.size(),\n              \"output size\", output.size())\n\n        return output\n</code></pre>\n<p>Create the model and DataParallel: this is the core part of the tutorial. First, we need to make a model instance and check if we have multiple GPUs. If we have multiple GPUs, we can wrap our model with nn.DataParallel. Then we can put our model on the GPUs with model.to(device).</p>\n<pre><code class=\"language-python\">model = Model(input_size, output_size)\nif torch.cuda.device_count() &gt; 1:\n  print(\"Let's use\", torch.cuda.device_count(), \"GPUs!\")\n  # dim = 0 [30, xxx] -&gt; [10, ...], [10, ...], [10, ...] on 3 GPUs\n  model = nn.DataParallel(model)\n\nmodel.to(device)\n</code></pre>\n<p>Output:</p>\n<pre><code>Let's use 2 GPUs!\n</code></pre>\n<p>Run the model: now we can see the sizes of the input and output tensors.</p>\n<pre><code class=\"language-python\">for data in rand_loader:\n    input = data.to(device)\n    output = model(input)\n    print(\"Outside: input size\", input.size(),\n          \"output_size\", output.size())\n</code></pre>\n<p>Output:</p>\n<pre><code>In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n        In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n        In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n        In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n        In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n        In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n        In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])\n        In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])\nOutside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n</code></pre>\n<p>Results: if you have no GPU or one GPU, when we batch 30 inputs and 30 outputs, the model gets 30 inputs and produces 30 outputs, as expected. But if you have multiple GPUs, you will see results like the following.</p>\n<p>If you have 2 GPUs, you will see:</p>\n<pre><code># on 2 GPUs\nLet's use 2 GPUs!\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\n    In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])\n    In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])\nOutside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n</code></pre>\n<p>If you have 3 GPUs, you will see:</p>\n<pre><code>Let's use 3 GPUs!\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\n    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\nOutside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n</code></pre>\n<p>If you have 8 GPUs, you will see:</p>\n<pre><code>Let's use 8 GPUs!\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\nOutside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\n    In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])\nOutside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n</code></pre>\n<p>Summary: DataParallel splits your data automatically and sends job orders to multiple models on multiple GPUs. After each model finishes its job, DataParallel collects and merges the results before returning them to you.</p>\n<p>For more information, please visit:<br /><a target=\"_blank\" href=\"https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html\" rel=\"nofollow\">https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html</a></p>\n<p>Download the full Python code: <a target=\"_blank\" href=\"http://pytorchchina.com/2018/12/11/optional-data-parallelism/\" rel=\"nofollow\">http://pytorchchina.com/2018/12/11/optional-data-parallelism/</a></p>"
    }
  ]
}