Remove and reorganize the alias of APIs #27717

Merged Oct 14, 2020 · 55 commits · diff shown as of 44 commits
7e6ab7c
modify cond while_loop to paddle.static.nn.cond
MingMingShangTian Sep 29, 2020
b93fafd
modify crop_tensor to paddle.crop
MingMingShangTian Sep 29, 2020
0269dd9
modify Variable to paddle.static.Variable
MingMingShangTian Oct 9, 2020
6103542
remove nn.beam_search, nn.beam_search_decode, nn.gather_tree
MingMingShangTian Oct 9, 2020
2bb797d
remove bpr_loss, center_loss, rank_loss, smooth_l1, teacher_student_s…
MingMingShangTian Oct 9, 2020
966ed64
remove apis in nn.functional.learn_rate.py
MingMingShangTian Oct 9, 2020
0a78555
remove pool2d, pool3d, adaptive_pool2d, adaptive_pool3d in nn.functional
MingMingShangTian Oct 9, 2020
36ac91c
remove apis in nn.functional.vision
MingMingShangTian Oct 9, 2020
7bed39c
remove erf, soft_relu in nn.functional.activation
MingMingShangTian Oct 9, 2020
8040bbf
remove apis in nn.functional.extension
MingMingShangTian Oct 9, 2020
197c0b9
remove nn.functional.rnn
MingMingShangTian Oct 9, 2020
287b2d6
remove hash from nn.functional.lod
MingMingShangTian Oct 9, 2020
3be1ddc
resolve conflicts
MingMingShangTian Oct 9, 2020
157e5f9
remove row_conv from nn.functional.extension
MingMingShangTian Oct 9, 2020
aa7955d
remove one_hot, pad2d, pad_constant_like from nn.functional.common
MingMingShangTian Oct 9, 2020
0e62a92
remove nn.gather_tree, nn.BilinearTensorProduct, nn.Pool2D, nn.Pad2D
MingMingShangTian Oct 9, 2020
0c87858
remove apis from optimizer.__init
MingMingShangTian Oct 9, 2020
30efb9c
remove tensor.creation.fill_constant
MingMingShangTian Oct 9, 2020
585fe35
remove elementwise_mul in nn.functional.common and modify to paddle.…
MingMingShangTian Oct 9, 2020
a999a7c
remove tensor.stat.reduce_mean
MingMingShangTian Oct 9, 2020
87f0f06
remove reduce_all, reduce_any in tensor.logic
MingMingShangTian Oct 9, 2020
38eadea
remove apis in tensor.math
MingMingShangTian Oct 9, 2020
bc228a5
remove apis in tensor.__init__
MingMingShangTian Oct 9, 2020
f425817
remove has_inf, has_nan in tensor.search
MingMingShangTian Oct 9, 2020
516d7c8
remove apis in framework.__init__
MingMingShangTian Oct 9, 2020
a522969
remove apis in paddle.__init__
MingMingShangTian Oct 9, 2020
cfd7c9c
resolve conflicts
MingMingShangTian Oct 10, 2020
0d14b3b
remove apis in nn.functional.__init__
MingMingShangTian Oct 10, 2020
794b2d1
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 10, 2020
e9f2a98
resolve conflicts
MingMingShangTian Oct 10, 2020
065147c
fix remove grid_sample bug
MingMingShangTian Oct 10, 2020
f489fac
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 11, 2020
2a0957c
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 11, 2020
5c130b9
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 11, 2020
64928a7
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 11, 2020
153779d
resolve conflicts
MingMingShangTian Oct 12, 2020
cd7863b
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 12, 2020
96249be
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 12, 2020
6d9e5dc
delete alias api relastions in doc
MingMingShangTian Oct 12, 2020
cd755bb
reserve paddle.compat, paddle.sysconfig
MingMingShangTian Oct 12, 2020
d5258a4
resolve conflicts
MingMingShangTian Oct 12, 2020
5230217
remove unittest for paddle.reduce_all, paddle.reduce_any
MingMingShangTian Oct 12, 2020
a45809b
modify removed alias apis to raw api in doc and unittests
MingMingShangTian Oct 12, 2020
8edcf88
resolve conflicts
MingMingShangTian Oct 12, 2020
4952ffd
recover paddle.save and paddle.load
MingMingShangTian Oct 13, 2020
578599f
resolve conflicts
MingMingShangTian Oct 13, 2020
c2c8b98
resolve conflicts
MingMingShangTian Oct 13, 2020
e41eb74
resolve conflicts
MingMingShangTian Oct 13, 2020
d86f6bf
fix sample code missing paddle.enable_static() bug
MingMingShangTian Oct 13, 2020
a1ef0aa
fix sample code missing paddle.enable_static() bug
MingMingShangTian Oct 13, 2020
c4e154e
fix to_string sample code error
MingMingShangTian Oct 14, 2020
e634fa6
resolve conflicts
MingMingShangTian Oct 14, 2020
2119342
resolve conflicts
MingMingShangTian Oct 14, 2020
6e50e93
resolve conflicts
MingMingShangTian Oct 14, 2020
e74715e
resolve conflicts
MingMingShangTian Oct 14, 2020
2 changes: 1 addition & 1 deletion paddle/fluid/pybind/imperative.cc
@@ -712,7 +712,7 @@ void BindImperative(py::module *m_ptr) {
tmp.stop_gradient=False
inputs.append(tmp)
ret = paddle.sums(inputs2)
-loss = paddle.reduce_sum(ret)
+loss = paddle.fluid.layers.reduce_sum(ret)
loss.backward()
print("Before clear_gradient {}".format(loss.grad))
loss.clear_gradient()
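Callers hit by these alias removals can resolve whichever dotted path still exists at runtime instead of hard-coding one spelling. A minimal sketch of that idea — the helper name `resolve_api` and the stdlib `math.fsum` fallback are illustrative, not part of this PR:

```python
import importlib

def resolve_api(*dotted_paths):
    """Return the first attribute importable from the given dotted paths,
    e.g. try the removed alias first, then the remaining raw API."""
    for path in dotted_paths:
        module_name, _, attr = path.rpartition(".")
        try:
            module = importlib.import_module(module_name)
            return getattr(module, attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError("none of the paths resolved: {}".format(dotted_paths))

# Stand-in demo with stdlib names: if the first path is unavailable,
# the helper falls through to math.fsum.
reduce_sum = resolve_api("paddle.reduce_sum", "math.fsum")
```

This keeps downstream code working across the rename without a hard dependency on either spelling.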
62 changes: 31 additions & 31 deletions python/paddle/__init__.py
@@ -59,10 +59,9 @@
from .tensor.attribute import rank #DEFINE_ALIAS
from .tensor.attribute import shape #DEFINE_ALIAS
from .tensor.creation import to_tensor #DEFINE_ALIAS
-from .tensor.creation import crop_tensor #DEFINE_ALIAS
from .tensor.creation import diag #DEFINE_ALIAS
from .tensor.creation import eye #DEFINE_ALIAS
-from .tensor.creation import fill_constant #DEFINE_ALIAS
+# from .tensor.creation import fill_constant #DEFINE_ALIAS
# from .tensor.creation import get_tensor_from_selected_rows #DEFINE_ALIAS
from .tensor.creation import linspace #DEFINE_ALIAS
from .tensor.creation import ones #DEFINE_ALIAS
@@ -103,8 +102,8 @@
from .tensor.logic import logical_or #DEFINE_ALIAS
from .tensor.logic import logical_xor #DEFINE_ALIAS
from .tensor.logic import not_equal #DEFINE_ALIAS
-from .tensor.logic import reduce_all #DEFINE_ALIAS
-from .tensor.logic import reduce_any #DEFINE_ALIAS
+# from .tensor.logic import reduce_all #DEFINE_ALIAS
+# from .tensor.logic import reduce_any #DEFINE_ALIAS
from .tensor.logic import allclose #DEFINE_ALIAS
from .tensor.logic import equal_all #DEFINE_ALIAS
# from .tensor.logic import isnan #DEFINE_ALIAS
@@ -145,23 +144,23 @@
from .tensor.math import cos #DEFINE_ALIAS
from .tensor.math import cosh #DEFINE_ALIAS
from .tensor.math import cumsum #DEFINE_ALIAS
-from .tensor.math import elementwise_add #DEFINE_ALIAS
-from .tensor.math import elementwise_div #DEFINE_ALIAS
-from .tensor.math import elementwise_floordiv #DEFINE_ALIAS
-from .tensor.math import elementwise_mod #DEFINE_ALIAS
-from .tensor.math import elementwise_pow #DEFINE_ALIAS
-from .tensor.math import elementwise_sub #DEFINE_ALIAS
+# from .tensor.math import elementwise_add #DEFINE_ALIAS
+# from .tensor.math import elementwise_div #DEFINE_ALIAS
+# from .tensor.math import elementwise_floordiv #DEFINE_ALIAS
+# from .tensor.math import elementwise_mod #DEFINE_ALIAS
+# from .tensor.math import elementwise_pow #DEFINE_ALIAS
+# from .tensor.math import elementwise_sub #DEFINE_ALIAS
from .tensor.math import exp #DEFINE_ALIAS
from .tensor.math import floor #DEFINE_ALIAS
from .tensor.math import increment #DEFINE_ALIAS
from .tensor.math import log #DEFINE_ALIAS
from .tensor.math import multiplex #DEFINE_ALIAS
from .tensor.math import pow #DEFINE_ALIAS
from .tensor.math import reciprocal #DEFINE_ALIAS
-from .tensor.math import reduce_max #DEFINE_ALIAS
-from .tensor.math import reduce_min #DEFINE_ALIAS
-from .tensor.math import reduce_prod #DEFINE_ALIAS
-from .tensor.math import reduce_sum #DEFINE_ALIAS
+# from .tensor.math import reduce_max #DEFINE_ALIAS
+# from .tensor.math import reduce_min #DEFINE_ALIAS
+# from .tensor.math import reduce_prod #DEFINE_ALIAS
+# from .tensor.math import reduce_sum #DEFINE_ALIAS
from .tensor.math import round #DEFINE_ALIAS
from .tensor.math import rsqrt #DEFINE_ALIAS
from .tensor.math import scale #DEFINE_ALIAS
@@ -174,7 +173,7 @@
from .tensor.math import sum #DEFINE_ALIAS
from .tensor.math import sums #DEFINE_ALIAS
from .tensor.math import tanh #DEFINE_ALIAS
-from .tensor.math import elementwise_sum #DEFINE_ALIAS
+# from .tensor.math import elementwise_sum #DEFINE_ALIAS
from .tensor.math import max #DEFINE_ALIAS
from .tensor.math import maximum #DEFINE_ALIAS
from .tensor.math import min #DEFINE_ALIAS
@@ -192,7 +191,7 @@
from .tensor.math import inverse #DEFINE_ALIAS
from .tensor.math import log1p #DEFINE_ALIAS
from .tensor.math import erf #DEFINE_ALIAS
-from .tensor.math import addcmul #DEFINE_ALIAS
+# from .tensor.math import addcmul #DEFINE_ALIAS
from .tensor.math import addmm #DEFINE_ALIAS
from .tensor.math import clip #DEFINE_ALIAS
from .tensor.math import trace #DEFINE_ALIAS
@@ -212,8 +211,8 @@
from .tensor.search import argmax #DEFINE_ALIAS
from .tensor.search import argmin #DEFINE_ALIAS
from .tensor.search import argsort #DEFINE_ALIAS
-from .tensor.search import has_inf #DEFINE_ALIAS
-from .tensor.search import has_nan #DEFINE_ALIAS
+# from .tensor.search import has_inf #DEFINE_ALIAS
+# from .tensor.search import has_nan #DEFINE_ALIAS
from .tensor.search import masked_select #DEFINE_ALIAS
from .tensor.search import topk #DEFINE_ALIAS
from .tensor.search import where #DEFINE_ALIAS
@@ -223,36 +222,35 @@
from .framework.random import manual_seed #DEFINE_ALIAS
from .framework.random import get_cuda_rng_state #DEFINE_ALIAS
from .framework.random import set_cuda_rng_state #DEFINE_ALIAS
from .framework import Variable #DEFINE_ALIAS
from .framework import ParamAttr #DEFINE_ALIAS
-from .framework import create_global_var #DEFINE_ALIAS
+# from .framework import create_global_var #DEFINE_ALIAS
from .framework import create_parameter #DEFINE_ALIAS
from .framework import CPUPlace #DEFINE_ALIAS
from .framework import CUDAPlace #DEFINE_ALIAS
from .framework import CUDAPinnedPlace #DEFINE_ALIAS

from .framework import grad #DEFINE_ALIAS
from .framework import no_grad #DEFINE_ALIAS
-from .framework import save #DEFINE_ALIAS
-from .framework import load #DEFINE_ALIAS
+# from .framework import save #DEFINE_ALIAS
+# from .framework import load #DEFINE_ALIAS
from .framework import DataParallel #DEFINE_ALIAS

-from .framework import NoamDecay #DEFINE_ALIAS
-from .framework import PiecewiseDecay #DEFINE_ALIAS
-from .framework import NaturalExpDecay #DEFINE_ALIAS
-from .framework import ExponentialDecay #DEFINE_ALIAS
-from .framework import InverseTimeDecay #DEFINE_ALIAS
-from .framework import PolynomialDecay #DEFINE_ALIAS
-from .framework import CosineDecay #DEFINE_ALIAS
+# from .framework import NoamDecay #DEFINE_ALIAS
+# from .framework import PiecewiseDecay #DEFINE_ALIAS
+# from .framework import NaturalExpDecay #DEFINE_ALIAS
+# from .framework import ExponentialDecay #DEFINE_ALIAS
+# from .framework import InverseTimeDecay #DEFINE_ALIAS
+# from .framework import PolynomialDecay #DEFINE_ALIAS
+# from .framework import CosineDecay #DEFINE_ALIAS
from .framework import set_default_dtype #DEFINE_ALIAS
from .framework import get_default_dtype #DEFINE_ALIAS

from .tensor.search import index_sample #DEFINE_ALIAS
from .tensor.stat import mean #DEFINE_ALIAS
-from .tensor.stat import reduce_mean #DEFINE_ALIAS
+# from .tensor.stat import reduce_mean #DEFINE_ALIAS
from .tensor.stat import std #DEFINE_ALIAS
from .tensor.stat import var #DEFINE_ALIAS
-from .fluid.data import data
+# from .fluid.data import data
from .tensor.stat import numel #DEFINE_ALIAS
from .device import get_cudnn_version
from .device import set_device
@@ -268,6 +266,8 @@
from .fluid.dygraph.base import disable_dygraph as enable_static #DEFINE_ALIAS
from .fluid.framework import in_dygraph_mode as in_dynamic_mode #DEFINE_ALIAS
from .fluid.dygraph.base import no_grad_ as no_grad #DEFINE_ALIAS
+from .fluid.layers import crop_tensor as crop #DEFINE_ALIAS


from . import jit
from . import static
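The `#DEFINE_ALIAS` lines above are plain re-exports: a public name bound to an internal implementation, so commenting one out removes only the public alias, never the underlying function. A toy sketch of the same mechanism — the package name `pkg`, `crop_tensor`, and its trimming behavior are invented for illustration:

```python
import sys
import types

# Internal module holding the real implementation (hypothetical stand-in).
creation = types.ModuleType("pkg.tensor.creation")

def crop_tensor(values, length):
    """Toy stand-in: keep only the first `length` elements."""
    return values[:length]

creation.crop_tensor = crop_tensor
sys.modules["pkg.tensor.creation"] = creation

# Public package: re-export under the new short name and drop the old
# alias, mirroring `from .fluid.layers import crop_tensor as crop`.
pkg = types.ModuleType("pkg")
pkg.crop = creation.crop_tensor           # alias kept under the new name
# pkg.crop_tensor = creation.crop_tensor  # old alias removed, as in the PR
sys.modules["pkg"] = pkg
```

After this, `pkg.crop` works while `pkg.crop_tensor` raises `AttributeError`, which is exactly the user-visible effect of deleting an alias import line.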
6 changes: 3 additions & 3 deletions python/paddle/amp/grad_scaler.py
@@ -56,7 +56,7 @@ class GradScaler(AmpScaler):
data = paddle.rand([10, 3, 32, 32])
with paddle.amp.auto_cast():
conv = model(data)
-loss = paddle.reduce_mean(conv)
+loss = paddle.fluid.layers.reduce_mean(conv)
scaled = scaler.scale(loss) # scale the loss
scaled.backward() # do backward
scaler.minimize(optimizer, scaled) # update parameters
@@ -96,7 +96,7 @@ def scale(self, var):
data = paddle.rand([10, 3, 32, 32])
with paddle.amp.auto_cast():
conv = model(data)
-loss = paddle.reduce_mean(conv)
+loss = paddle.fluid.layers.reduce_mean(conv)
scaled = scaler.scale(loss) # scale the loss
scaled.backward() # do backward
scaler.minimize(optimizer, scaled) # update parameters
@@ -128,7 +128,7 @@ def minimize(self, optimizer, *args, **kwargs):
data = paddle.rand([10, 3, 32, 32])
with paddle.amp.auto_cast():
conv = model(data)
-loss = paddle.reduce_mean(conv)
+loss = paddle.fluid.layers.reduce_mean(conv)
scaled = scaler.scale(loss) # scale the loss
scaled.backward() # do backward
scaler.minimize(optimizer, scaled) # update parameters
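Many commits in this PR ("modify removed alias apis to raw api in doc and unittests") are mechanical rewrites like the one above. That sweep can be sketched as a whole-word string substitution — the mapping below lists only a few pairs visible in this diff; a real pass would cover every removed alias:

```python
import re

# Removed alias -> remaining raw API path (subset taken from this diff).
ALIAS_TO_RAW = {
    "paddle.reduce_sum": "paddle.fluid.layers.reduce_sum",
    "paddle.reduce_mean": "paddle.fluid.layers.reduce_mean",
    "paddle.fill_constant": "paddle.fluid.layers.fill_constant",
}

def rewrite_aliases(source):
    """Replace whole-word occurrences of each removed alias in `source`."""
    for alias, raw in ALIAS_TO_RAW.items():
        source = re.sub(re.escape(alias) + r"\b", raw, source)
    return source

before = "loss = paddle.reduce_mean(conv)"
after = rewrite_aliases(before)  # "loss = paddle.fluid.layers.reduce_mean(conv)"
```

The `\b` boundary keeps the pass from touching a name that merely starts with an alias, and escaping the dots keeps them literal.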
2 changes: 1 addition & 1 deletion python/paddle/distributed/collective.py
@@ -439,7 +439,7 @@ def barrier(group=0):
paddle.distributed.barrier()
"""
op_type = 'barrier'
-temp = paddle.fill_constant([1], dtype="int32", value="1")
+temp = fill_constant([1], dtype="int32", value="1")
if in_dygraph_mode():
return core.ops.barrier(temp, temp, 'ring_id', group)
if not isinstance(group, int):
2 changes: 1 addition & 1 deletion python/paddle/distribution.py
@@ -25,9 +25,9 @@
from .fluid.layers import tensor
from .fluid.layers import ops
from .fluid.layers import nn
+from .fluid.layers import elementwise_mul, elementwise_div, elementwise_add, elementwise_sub
from .fluid import core
from .fluid.framework import in_dygraph_mode
-from .tensor.math import elementwise_mul, elementwise_div, elementwise_add, elementwise_sub
from .tensor import arange, gather_nd, concat, multinomial
import math
import numpy as np
6 changes: 3 additions & 3 deletions python/paddle/fluid/dygraph/base.py
@@ -480,7 +480,7 @@ def test_dygraph_grad(create_graph):
paddle.disable_static()

def test_dygraph_grad(grad_outputs=None):
-x = paddle.fill_constant(shape=[1], value=2.0, dtype='float32')
+x = paddle.fluid.layers.fill_constant(shape=[1], value=2.0, dtype='float32')
x.stop_gradient = False

y1 = x * x
@@ -503,7 +503,7 @@ def test_dygraph_grad(grad_outputs=None):

return dx.numpy()

-grad_value = paddle.fill_constant(shape=[1], value=4.0, dtype='float32')
+grad_value = paddle.fluid.layers.fill_constant(shape=[1], value=4.0, dtype='float32')

# dy1 = [1], dy2 = [1]
print(test_dygraph_grad(None)) # [7.]
@@ -515,7 +515,7 @@
print(test_dygraph_grad([grad_value, None])) # [19.]

# dy1 = [3], dy2 = [4]
-grad_y1 = paddle.fill_constant(shape=[1], value=3.0, dtype='float32')
+grad_y1 = paddle.fluid.layers.fill_constant(shape=[1], value=3.0, dtype='float32')
print(test_dygraph_grad([grad_y1, grad_value])) # [24.]
'''

Expand Down
@@ -87,7 +87,7 @@ def create_static_variable_gast_node(name):


def create_fill_constant_node(name, value):
-func_code = "{} = paddle.fill_constant(shape=[1], ".format(name)
+func_code = "{} = paddle.fluid.layers.fill_constant(shape=[1], ".format(name)
if isinstance(value, bool):
func_code += "dtype='bool', value={})".format(value)
return gast.parse(func_code).body[0]
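`create_fill_constant_node` builds a tiny assignment statement by formatting source text and parsing it back into a syntax-tree node. The same trick works with the stdlib `ast` module; this sketch substitutes `ast` for Paddle's `gast` and inlines the bool-only branch into one string, so it approximates the helper rather than reproducing it:

```python
import ast

def create_fill_constant_node(name, value):
    """Parse `<name> = fill_constant(...)` into an AST statement node."""
    func_code = "{} = fill_constant(shape=[1], dtype='bool', value={})".format(
        name, value)
    return ast.parse(func_code).body[0]

node = create_fill_constant_node("flag", True)
# node is an ast.Assign whose single target is the variable name passed in.
```

Generating code as text and re-parsing it keeps the helper short at the cost of an extra parse per node, a common trade-off in source-to-source transformers like dygraph-to-static.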
6 changes: 3 additions & 3 deletions python/paddle/fluid/dygraph/layers.py
@@ -1100,7 +1100,7 @@ def state_dict(self,
emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
-paddle.save( state_dict, "paddle_dy.pdparams")
+paddle.framework.io.save( state_dict, "paddle_dy.pdparams")

'''

@@ -1148,8 +1148,8 @@ def set_state_dict(self,
emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
-paddle.save(state_dict, "paddle_dy.pdparams")
-para_state_dict = paddle.load("paddle_dy.pdparams")
+paddle.framework.io.save(state_dict, "paddle_dy.pdparams")
+para_state_dict = paddle.framework.io.load("paddle_dy.pdparams")
emb.set_state_dict(para_state_dict)

'''
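The docstrings above now spell out the full `paddle.framework.io.save` / `load` paths. The round trip they describe is the usual serialize-then-deserialize pattern; here is a paddle-free sketch where `save` and `load` are local pickle-based toys, not the Paddle functions:

```python
import os
import pickle
import tempfile

def save(state_dict, path):
    """Toy stand-in for a state-dict saver."""
    with open(path, "wb") as f:
        pickle.dump(state_dict, f)

def load(path):
    """Toy stand-in for the matching loader."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round trip a small fake state dict through a temp file.
state = {"weight": [[0.1] * 3] * 3, "bias": [0.0, 0.0, 0.0]}
path = os.path.join(tempfile.mkdtemp(), "paddle_dy.pdparams")
save(state, path)
restored = load(path)
```

A loader paired with the saver that produced the file is the invariant the docstring examples rely on: `set_state_dict` only needs the loaded dict to match the keys `state_dict()` wrote.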
6 changes: 0 additions & 6 deletions python/paddle/fluid/dygraph/nn.py
@@ -702,9 +702,6 @@ def forward(self, input):

class Pool2D(layers.Layer):
"""
-:alias_main: paddle.nn.Pool2D
-:alias: paddle.nn.Pool2D,paddle.nn.layer.Pool2D,paddle.nn.layer.common.Pool2D
-:old_api: paddle.fluid.dygraph.Pool2D

This interface is used to construct a callable object of the ``Pool2D`` class.
For more details, refer to code examples.
@@ -2354,9 +2351,6 @@ def forward(self, input):

class BilinearTensorProduct(layers.Layer):
"""
-:alias_main: paddle.nn.BilinearTensorProduct
-:alias: paddle.nn.BilinearTensorProduct,paddle.nn.layer.BilinearTensorProduct,paddle.nn.layer.common.BilinearTensorProduct
-:old_api: paddle.fluid.dygraph.BilinearTensorProduct

**Add Bilinear Tensor Product Layer**

4 changes: 2 additions & 2 deletions python/paddle/fluid/dygraph/parallel.py
@@ -509,9 +509,9 @@ def set_state_dict(self,
emb = fluid.dygraph.DataParallel(emb, strategy)

state_dict = emb.state_dict()
-paddle.save(state_dict, "paddle_dy.pdparams")
+paddle.framework.io.save(state_dict, "paddle_dy.pdparams")

-para_state_dict = paddle.load("paddle_dy.pdparams")
+para_state_dict = paddle.framework.io.load("paddle_dy.pdparams")

emb.set_state_dict(para_state_dict)

2 changes: 1 addition & 1 deletion python/paddle/fluid/dygraph/varbase_patch_methods.py
@@ -163,7 +163,7 @@ def backward(self, retain_graph=False):
tmp.stop_gradient=False
inputs.append(tmp)
ret = paddle.sums(inputs)
-loss = paddle.reduce_sum(ret)
+loss = paddle.fluid.layers.reduce_sum(ret)
loss.backward()

"""
18 changes: 9 additions & 9 deletions python/paddle/fluid/framework.py
@@ -533,7 +533,7 @@ def name_scope(prefix=None):
import paddle
paddle.enable_static()
with paddle.static.name_scope("s1"):
-a = paddle.data(name='data', shape=[None, 1], dtype='int32')
+a = paddle.fluid.data(name='data', shape=[None, 1], dtype='int32')
b = a + 1
with paddle.static.name_scope("s2"):
c = b * 1
@@ -1183,7 +1183,7 @@ def backward(self, retain_graph=False):
tmp.stop_gradient=False
inputs.append(tmp)
ret = paddle.sums(inputs)
-loss = paddle.reduce_sum(ret)
+loss = paddle.fluid.layers.reduce_sum(ret)
loss.backward()

"""
@@ -5345,8 +5345,8 @@ def default_startup_program():
main_program = paddle.static.Program()
startup_program = paddle.static.Program()
with paddle.static.program_guard(main_program=main_program, startup_program=startup_program):
-x = paddle.data(name="x", shape=[-1, 784], dtype='float32')
-y = paddle.data(name="y", shape=[-1, 1], dtype='int32')
+x = paddle.fluid.data(name="x", shape=[-1, 784], dtype='float32')
+y = paddle.fluid.data(name="y", shape=[-1, 1], dtype='int32')
z = paddle.static.nn.fc(name="fc", input=x, size=10, act="relu")

print("main program is: {}".format(paddle.static.default_main_program()))
@@ -5360,7 +5360,7 @@ def default_main_program():
This API can be used to get ``default main program`` which store the
descriptions of Ops and tensors.

-For example ``z = paddle.elementwise_add(x, y)`` will create a new ``elementwise_add``
+For example ``z = paddle.fluid.layers.elementwise_add(x, y)`` will create a new ``elementwise_add``
Op and a new ``z`` tensor, and they will be recorded in ``default main program`` .

The ``default main program`` is the default value for ``Program`` parameter in
@@ -5379,15 +5379,15 @@

paddle.enable_static()
# Sample Network:
-data = paddle.data(name='image', shape=[None, 3, 224, 224], dtype='float32')
-label = paddle.data(name='label', shape=[None, 1], dtype='int64')
+data = paddle.fluid.data(name='image', shape=[None, 3, 224, 224], dtype='float32')
+label = paddle.fluid.data(name='label', shape=[None, 1], dtype='int64')

conv1 = paddle.static.nn.conv2d(data, 4, 5, 1, act=None)
bn1 = paddle.static.nn.batch_norm(conv1, act='relu')
-pool1 = paddle.nn.functional.pool2d(bn1, 2, 'max', 2)
+pool1 = paddle.fluid.layers.pool2d(bn1, 2, 'max', 2)
conv2 = paddle.static.nn.conv2d(pool1, 16, 5, 1, act=None)
bn2 = paddle.static.nn.batch_norm(conv2, act='relu')
-pool2 = paddle.nn.functional.pool2d(bn2, 2, 'max', 2)
+pool2 = paddle.fluid.layers.pool2d(bn2, 2, 'max', 2)

fc1 = paddle.static.nn.fc(pool2, size=50, act='relu')
fc2 = paddle.static.nn.fc(fc1, size=102, act='softmax')
Expand Down