This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Merged
Force-pushed from 7a3e387 to 780eb54
haojin2 reviewed Sep 12, 2019
};

template<int req>
struct around_forwardint {
Get rid of this kernel after you switch to identity below.
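The reviewer's point is that rounding an integer array with `decimals > 0` can never change any value, so a dedicated integer kernel is unnecessary. A minimal NumPy sketch of the expected behaviour (an illustration, not the PR's code):

```python
import numpy as np

# For integer dtypes, around() with decimals > 0 cannot change any value,
# so the operator can fall back to a plain identity/copy kernel.
x = np.array([1, -7, 42], dtype=np.int32)
for d in range(1, 5):
    assert np.array_equal(np.around(x, decimals=d), x)
```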
haojin2 reviewed Sep 12, 2019
    && param.decimals > 0) {
  MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
    MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
      Kernel<around_forwardint<req_type>, xpu>::Launch(
Simply use the identity kernel instead of your new kernel.
haojin2 reviewed Sep 12, 2019
for hybridize in [True, False]:
    for oneType in types:
        rtol = 1e-3
        atol = 1e-5
haojin2 reviewed Sep 12, 2019
        return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test shapes; remember to include zero-dim and zero-size shapes
types = ['int32', 'int64', 'float32', 'double']
types = ['int32', 'int64', 'float32', 'float64']
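The suggested change is a naming fix: `'double'` is accepted by NumPy only as an alias, while `'float64'` is the canonical dtype name, which keeps the list consistent with `'float32'`. A quick sketch of why:

```python
import numpy as np

# 'double' resolves to the same dtype, but 'float64' is the canonical name.
assert np.dtype('double') == np.dtype('float64')
types = ['int32', 'int64', 'float32', 'float64']
assert all(np.dtype(t).name == t for t in types)
```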
haojin2 reviewed Sep 12, 2019
    def hybrid_forward(self, F, x):
        return F.np.around(x, self.decimals)

shapes = [(), (1,), (1, 1), (1, 2, 3), (1, 0), (3, 0, 2)]  # test shapes; remember to include zero-dim and zero-size shapes
shapes = [(), (1, 2, 3), (1, 0)]
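The trimmed list still exercises the three interesting cases called out in the original comment: a zero-dim shape, a generic dense shape, and a zero-size shape. A small sketch checking that `around` preserves each shape:

```python
import numpy as np

# Zero-dim, generic, and zero-size shapes all round-trip through around().
shapes = [(), (1, 2, 3), (1, 0)]
for shape in shapes:
    x = np.zeros(shape, dtype='float32')
    assert np.around(x, 2).shape == shape
```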
haojin2 reviewed Sep 12, 2019
rtol = 1e-3
atol = 1e-5
for shape in shapes:
    for d in range(-10, 11):
Too many cases for d; simply reduce to something like range(-5, 6).
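A sketch of the trimmed sweep, checking `np.around` against NumPy's documented multiply/round-half-even/divide scheme (illustrative reference implementation, not the PR's test):

```python
import numpy as np
from numpy.testing import assert_allclose

# Trimmed sweep: range(-5, 6) instead of range(-10, 11).
rtol, atol = 1e-3, 1e-5
x = np.array([0.125, -2.5, 31.6, 0.0])
for d in range(-5, 6):
    # reference: scale, round half to even, unscale
    ref = np.rint(x * 10.0 ** d) / 10.0 ** d
    assert_allclose(np.around(x, d), ref, rtol=rtol, atol=atol)
```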
Force-pushed from 848ce30 to 81649d1
Force-pushed from 006ec32 to 1c2d15c
* change the name of argument
* add doc in three files and fix some bugs
* change the data type in .h and add test function
* cancel optimization when abs(temp) < 0.5
* modify test on cpu and add test on gpu
* do not support float16
* edit testcase on gpu and add 'Do not support float16' to doc
* edit doc: support scalar
* adjust the format
* add license
* fix format error
* delete gpu test
* move around to np_elemwise_unary_op_basic
* edit AroundOpType
* replace int kernel with identity_with_cast and fix format error
* delete unused req_type
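For context on the half-way handling the commits above wrestle with (the `abs(temp) < 0.5` branch): NumPy's `around` rounds half-way cases to the nearest even value (banker's rounding), which the kernel has to reproduce:

```python
import numpy as np

# Half-way cases round to the nearest even value, not always away from zero.
assert np.around(0.5) == 0.0
assert np.around(1.5) == 2.0
assert np.around(2.5) == 2.0
assert np.around(3.5) == 4.0
```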
Force-pushed from 1c2d15c to 4a1a595
drivanov pushed a commit to drivanov/incubator-mxnet that referenced this pull request Sep 26, 2019
larroy pushed a commit to larroy/mxnet that referenced this pull request Sep 28, 2019