[Take,2] More NEGATE_EXPR folding in match.pd

Message ID 01fd01d7a625$511c5110$f354f330$@nextmovesoftware.com
State New

Commit Message

Roger Sayle Sept. 10, 2021, 9:22 a.m.
Hi Richard,
Thanks for the suggestion, which cleanly solves the problem I was encountering.
This revised patch adds a Boolean SIMPLIFY argument to tree-ssa-sccvn.c's
vn_nary_build_or_lookup_1 to control whether simplification should be
performed before value numbering, updating the callers, but avoiding
simplification when constructing/value-numbering a NEGATE_EXPR.
This avoids the regression of gcc.dg/tree-ssa/ssa-fre-88.c, and enables the
new test case(s) to pass.  Brilliant, thank you.

This patch has been tested on x86_64-pc-linux-gnu with a "make bootstrap"
and "make -k check" with no new failures.  Ok for mainline?
<note: discussion continues after ChangeLog>

2021-09-10  Roger Sayle  <roger@nextmovesoftware.com>
	    Richard Biener  <rguenther@suse.de>

gcc/ChangeLog
	* match.pd (negation simplifications): Implement some negation
	folding transformations from fold-const.c's fold_negate_expr.
	* tree-ssa-sccvn.c (vn_nary_build_or_lookup_1): Add a SIMPLIFY
	argument, to control whether the op should be simplified prior
	to looking up/assigning a value number.
	(vn_nary_build_or_lookup): Update call to vn_nary_build_or_lookup_1.
	(vn_nary_simplify): Likewise.
	(visit_nary_op): Likewise, but when constructing a NEGATE_EXPR
	now call vn_nary_build_or_lookup_1 disabling simplification.

gcc/testsuite/ChangeLog
	* gcc.dg/fold-negate-1.c: New test case.


There is one potential enhancement that it might be useful to file in Bugzilla
(I'm not familiar enough with sccvn to investigate this myself): there's
a missed optimization opportunity when we recognize one value number
as the negation of another (and can therefore materialize one result from
the other using a single negation instruction).  Currently we always select
the first value number as the parent and derive the second from it, ignoring
the expressions themselves.  Sometimes it may be profitable to use the
second (negated) occurrence as the parent, and instead negate that to
obtain the first.  One could use negate_expr_p to decide whether one
expression is cheaper to negate than the other.

Both examples in gcc.dg/tree-ssa/ssa-fre-88.c would benefit from this:
Firstly:
void bar (double x, double z) {
  y[0] = -z / x;
  y[1] = z / x;
}
if we select "z / x" as the parent and derive -(z/x) from it, we eliminate
a negation compared to the current code, which calculates "(-z)/x" and
then derives "-((-z)/x)" from it.
Secondly:
void foo (double x) {
  y[0] = x * -3.;
  y[1] = x * 3.;
}
Following Richard's solution/workaround to PR 19988, we'd prefer to keep
positive real constants in the constant pool, hence selecting "x * 3.0" as the
parent and deriving "-(x * 3.0)" from it would be slightly preferable to the
current behaviour of placing -3.0 in the constant pool.

Thanks again,
--
Roger

-----Original Message-----
From: Richard Biener <richard.guenther@gmail.com> 

Sent: 09 September 2021 13:05
To: Roger Sayle <roger@nextmovesoftware.com>
Cc: GCC Patches <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] More NEGATE_EXPR folding in match.pd

On Thu, Sep 9, 2021 at 12:08 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
> As observed by Jakub in comment #2 of PR 98865, the expression
> -(a>>63) is optimized in GENERIC but not in GIMPLE.  Investigating
> further it turns out that this is one of a few transformations
> performed by fold_negate_expr in fold-const.c that aren't yet
> performed by match.pd.  This patch moves/duplicates them there, and
> should be relatively safe as these transformations are already
> performed by the compiler, but just in different passes.
>
> Alas the one minor complication is that some of these transformations
> are only wins if the intermediate result (of the multiplication or
> division) is only used once, to avoid duplicating/performing them again.
> See gcc.dg/tree-ssa/ssa-fre-88.c.  Normally, this is the perfect
> use case for match's single_use (aka SSA's has_single_use).  Alas,
> single_use is not always accurate in match.pd, as some passes will
> construct and simplify an expression/stmt before inserting it into
> GIMPLE, and folding during this process sees a temporary undercount
> from the data-flow.  To solve this, this patch introduces a new
> single_use_is_op_p that double checks that the single_use has the
> expected tree_code/operation and skips the transformation if we can
> tell single_use might be invalid.
>
> A follow-up patch might be to investigate whether genmatch.c can be
> tweaked to use this new helper function to implement the :s qualifier
> when the enclosing context is/should be known, but that's overkill to
> just unblock Jakub and Andrew on 98865.
>
> This patch has been tested on x86_64-pc-linux-gnu with a "make bootstrap"
> and "make -k check" with no new failures.  Ok for mainline?


I think that single_use_is_op_p is "bad" heuristics since it stretches the SSA operand / immediate use machinery a bit too far.  Generally fold_stmt, and thus match.pd patterns, may not rely on stmt operands or immediate uses, as those are only generated by update_stmt.

In fact the whole gimple_build machinery relies on match.pd patterns operating on stmts that are _not_ in the IL yet, and it carefully restricts instruction combining to those stmts (see gimple_build_valueize).  I suppose the cases you are running into cross that boundary, which is where these kinds of issues can happen.

That said, it's a heuristic that can't be perfect - some passes build a lot of IL into a sequence via gimple_build and are generally not too careful to flush stmts when they make use of the result multiple times, so allowing has_zero_uses is not conservative either.

That said, I'm not sure -(x*y) -> x * -y for easily negatable y is something we should pursue during folding when we're facing expression graphs.  That looks like sth for backprop.

Iff we want to improve the single_use heuristics in the direction of saying 'no' when we're not absolutely sure that there's a single use, we probably want to think of some way of tracking immediate-use reliability and of documenting the constraints and assumptions we make when matching patterns.

For the ssa-fre-88.c issue at hand it's probably visit_nary_op that shouldn't simplify the expression it wants to insert when it does the reverse transform.
So sth like


or rather adding a new flag to the function, bool simplify, and explicitly disabling it for the negation transform.

Richard.

>
> 2021-09-09  Roger Sayle  <roger@nextmovesoftware.com>
>
> gcc/ChangeLog
>         * generic-match-head.c (single_use_is_op_p): New helper function.
>         * gimple-match-head.c (single_use_is_op_p): New helper function.
>         * match.pd (negation simplifications): Implement some negation
>         folding transformations from fold-const.c's fold_negate_expr.
>
> gcc/testsuite/ChangeLog
>         * gcc.dg/fold-negate-1.c: New test case.
>
> Roger
> --
>
diff --git a/gcc/match.pd b/gcc/match.pd
index 008f775..6401685 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -1481,6 +1481,36 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
  (if (!FIXED_POINT_TYPE_P (type))
  (plus @0 (negate @1))))
 
+/* Other simplifications of negation (c.f. fold_negate_expr_1).  */
+(simplify
+ (negate (mult:c@0 @1 negate_expr_p@2))
+ (if (! TYPE_UNSIGNED (type)
+      && ! HONOR_SIGN_DEPENDENT_ROUNDING (type)
+      && single_use (@0))
+  (mult @1 (negate @2))))
+
+(simplify
+ (negate (rdiv@0 @1 negate_expr_p@2))
+ (if (! HONOR_SIGN_DEPENDENT_ROUNDING (type)
+      && single_use (@0))
+  (rdiv @1 (negate @2))))
+
+(simplify
+ (negate (rdiv@0 negate_expr_p@1 @2))
+ (if (! HONOR_SIGN_DEPENDENT_ROUNDING (type)
+      && single_use (@0))
+  (rdiv (negate @1) @2)))
+
+/* Fold -((int)x >> (prec - 1)) into (unsigned)x >> (prec - 1).  */
+(simplify
+ (negate (convert? (rshift @0 INTEGER_CST@1)))
+ (if (tree_nop_conversion_p (type, TREE_TYPE (@0))
+      && wi::to_wide (@1) == element_precision (type) - 1)
+  (with { tree stype = TREE_TYPE (@0);
+	  tree ntype = TYPE_UNSIGNED (stype) ? signed_type_for (stype)
+					     : unsigned_type_for (stype); }
+   (convert (rshift:ntype (convert:ntype @0) @1)))))
+
 /* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
    when profitable.
    For bitwise binary operations apply operand conversions to the
diff --git a/gcc/testsuite/gcc.dg/fold-negate-1.c b/gcc/testsuite/gcc.dg/fold-negate-1.c
new file mode 100644
index 0000000..00ec8b4
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/fold-negate-1.c
@@ -0,0 +1,58 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+#define SHIFT ((8*__SIZEOF_INT__)-1)
+
+int test_rshift_1(int x)
+{
+  int t = x >> SHIFT;
+  return -t;
+}
+
+int test_rshift_2(int x)
+{
+  unsigned int t = (unsigned int)x >> SHIFT;
+  return -t;
+}
+
+int test_rshift_3(int x)
+{
+  int t = (unsigned int)x >> SHIFT;
+  return -t;
+}
+
+int test_rshift_4(int x)
+{
+  unsigned int t = x >> SHIFT;
+  return -t;
+}
+
+double test_mul_1(double x)
+{
+  double t = -5.0 * x;
+  return -t;
+}
+
+double test_mul_2(double x, double y)
+{
+  double t1 = -x;
+  double t2 = t1 * y;
+  return -t2;
+}
+
+double test_rdiv_1(double x, double y)
+{
+  double t1 = -x;
+  double t2 = t1 / y;
+  return -t2;
+}
+
+double test_rdiv_2(double x, double y)
+{
+  double t1 = -y;
+  double t2 = x / t1;
+  return -t2;
+}
+
+/* { dg-final { scan-tree-dump-not " -" "optimized" } } */
+
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 2357bbd..fca65cf 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -2321,11 +2321,13 @@ vn_reference_lookup_or_insert_for_pieces (tree vuse,
 }
 
 /* Return a value-number for RCODE OPS... either by looking up an existing
-   value-number for the simplified result or by inserting the operation if
-   INSERT is true.  */
+   value-number for the possibly simplified result or by inserting the
+   operation if INSERT is true.  If SIMPLIFY is false, return a value
+   number for the unsimplified expression.  */
 
 static tree
-vn_nary_build_or_lookup_1 (gimple_match_op *res_op, bool insert)
+vn_nary_build_or_lookup_1 (gimple_match_op *res_op, bool insert,
+			   bool simplify)
 {
   tree result = NULL_TREE;
   /* We will be creating a value number for
@@ -2333,15 +2335,16 @@ vn_nary_build_or_lookup_1 (gimple_match_op *res_op, bool insert)
      So first simplify and lookup this expression to see if it
      is already available.  */
   /* For simplification valueize.  */
-  unsigned i;
-  for (i = 0; i < res_op->num_ops; ++i)
-    if (TREE_CODE (res_op->ops[i]) == SSA_NAME)
-      {
-	tree tem = vn_valueize (res_op->ops[i]);
-	if (!tem)
-	  break;
-	res_op->ops[i] = tem;
-      }
+  unsigned i = 0;
+  if (simplify)
+    for (i = 0; i < res_op->num_ops; ++i)
+      if (TREE_CODE (res_op->ops[i]) == SSA_NAME)
+	{
+	  tree tem = vn_valueize (res_op->ops[i]);
+	  if (!tem)
+	    break;
+	  res_op->ops[i] = tem;
+	}
   /* If valueization of an operand fails (it is not available), skip
      simplification.  */
   bool res = false;
@@ -2440,7 +2443,7 @@ vn_nary_build_or_lookup_1 (gimple_match_op *res_op, bool insert)
 static tree
 vn_nary_build_or_lookup (gimple_match_op *res_op)
 {
-  return vn_nary_build_or_lookup_1 (res_op, true);
+  return vn_nary_build_or_lookup_1 (res_op, true, true);
 }
 
 /* Try to simplify the expression RCODE OPS... of type TYPE and return
@@ -2454,7 +2457,7 @@ vn_nary_simplify (vn_nary_op_t nary)
   gimple_match_op op (gimple_match_cond::UNCOND, nary->opcode,
 		      nary->type, nary->length);
   memcpy (op.ops, nary->op, sizeof (tree) * nary->length);
-  return vn_nary_build_or_lookup_1 (&op, false);
+  return vn_nary_build_or_lookup_1 (&op, false, true);
 }
 
 /* Elimination engine.  */
@@ -5006,7 +5009,7 @@ visit_nary_op (tree lhs, gassign *stmt)
 	      tree ops[2];
 	      gimple_match_op match_op (gimple_match_cond::UNCOND,
 					NEGATE_EXPR, type, rhs[i]);
-	      ops[i] = vn_nary_build_or_lookup_1 (&match_op, false);
+	      ops[i] = vn_nary_build_or_lookup_1 (&match_op, false, true);
 	      ops[j] = rhs[j];
 	      if (ops[i]
 		  && (ops[0] = vn_nary_op_lookup_pieces (2, code,
@@ -5014,7 +5017,7 @@ visit_nary_op (tree lhs, gassign *stmt)
 		{
 		  gimple_match_op match_op (gimple_match_cond::UNCOND,
 					    NEGATE_EXPR, type, ops[0]);
-		  result = vn_nary_build_or_lookup (&match_op);
+		  result = vn_nary_build_or_lookup_1 (&match_op, true, false);
 		  if (result)
 		    {
 		      bool changed = set_ssa_val_to (lhs, result);

Patch

diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c index 8058a1e3c6a..8b11def93bc 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -2333,15 +2333,16 @@  vn_nary_build_or_lookup_1 (gimple_match_op *res_op, bool insert)
      So first simplify and lookup this expression to see if it
      is already available.  */
   /* For simplification valueize.  */
-  unsigned i;
-  for (i = 0; i < res_op->num_ops; ++i)
-    if (TREE_CODE (res_op->ops[i]) == SSA_NAME)
-      {
-       tree tem = vn_valueize (res_op->ops[i]);
-       if (!tem)
-         break;
-       res_op->ops[i] = tem;
-      }
+  unsigned i = 0;
+  if (!insert)
+    for (; i < res_op->num_ops; ++i)
+      if (TREE_CODE (res_op->ops[i]) == SSA_NAME)
+       {
+         tree tem = vn_valueize (res_op->ops[i]);
+         if (!tem)
+           break;
+         res_op->ops[i] = tem;
+       }
   /* If valueization of an operand fails (it is not available), skip
      simplification.  */
   bool res = false;