Multiple overloads with context function arguments do not resolve #150
The error is correct. I think you have simply misunderstood how context functions work. The compiler complains about the lack of a defined given instance of type |
Seems like I have to, for multi-parameter context functions; however, single-parameter context functions do infer properly in polymorphic contexts:

```scala
scala> def provide[A, B](f: A ?=> B) = (a: A) => f(using a)
def provide[A, B](f: (A) ?=> B): A => B

scala> :t provide(implicitly[String] + implicitly[Int])
String & Int => String
```

Whereas a function with a 2-parameter context function argument does not infer:

```scala
scala> def provide2[A, B, C](f: (A, B) ?=> C) = (a: A, b: B) => f(using a, b)
def provide2[A, B, C](f: (A, B) ?=> C): (A, B) => C

scala> :t provide2(implicitly[String] + implicitly[Int])
1 |provide2(implicitly[String] + implicitly[Int])
  | ^
  |ambiguous implicit arguments: both value evidence$2 and value evidence$1 match type Int of parameter e of method implicitly in object Predef
1 |provide2(implicitly[String] + implicitly[Int])
  | ^
  |ambiguous implicit arguments: both value evidence$2 and value evidence$1 match type String of parameter e of method implicitly in object Predef
```

(Moved to scala/scala3#10609)
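For what it's worth, supplying the type arguments explicitly sidesteps the inference gap in the 2-parameter case. A minimal sketch, assuming the `provide2` definition above on a recent Scala 3:

```scala
def provide2[A, B, C](f: (A, B) ?=> C) = (a: A, b: B) => f(using a, b)

// With explicit type arguments, the expected type of the argument is fully
// known, so the compiler wraps the body in a context-function closure and
// each `implicitly` resolves to the corresponding synthesized context parameter.
@main def demo(): Unit =
  val g: (String, Int) => String =
    provide2[String, Int, String](implicitly[String] + implicitly[Int])
  println(g("a", 1)) // prints "a1": String.+ appends the Int
```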
Context functions do work with inference (in the single-parameter case), as above. Overloading doesn't work yet because it's not attempted: overloads automatically cancel context function inference. But it doesn't have to stay that way (e.g. see scala/scala3#7790). |
But in your particular case how in your opinion should the compiler know that
should be e.g.
(which returns a
(which returns a |
String summon comes before the Int summon in the source code and scalac/dotc both generally avoid rearranging the order IME. (note how |
So you would expect that
would return a
would return a |
On the other hand, while the ordering of arguments is important for ordinary functions, for context functions the difference between |
Rearranging already changes the type in the single-parameter case, and the difference is observable due to whitebox macros potentially changing the types down the line based on the order:

```scala
scala> :t provide { def x = implicitly[String] ; def y = implicitly[Int] }
String & Int => Unit

scala> :t provide { def x = implicitly[Int] ; def y = implicitly[String] }
Int & String => Unit
```

Order also impacts normal, non-implicit contravariant narrowing in a similar fashion, which in Scala 2 wasn't commutative due to

```scala
scala> trait Reader[-R] { def <+>[R1 <: R](that: Reader[R1]): Reader[R1] }; def ask[R]: Reader[R] = ???
// defined trait Reader
def ask[R] => Reader[R]

scala> :t ask[Int] <+> ask[String]
Reader[Int & String]

scala> :t ask[String] <+> ask[Int]
Reader[String & Int]
```

I understand the reluctance to make context functions behave similarly, even though the counterpart contravariance behavior is similar and stable across versions, but multi-parameter context functions may be very hard to make use of without it.
That would be ideal - Haskell does it, it also supports all the cases here without type annotations. |
But is the way types are printed in the REPL the only problem, or do the two cases you mentioned have some real impact on the evaluation of code? |
The difference is observable with macros, at the minimum. |
I think there are two aspects to this issue:
If we monomorphize the example (that is, drop the type parameters on
we still end up with an overloaded
both choices for |
@b-studios When written with given inside the x.f block:

```scala
x.f {
  given B(42)
  println(summon[A].a)
  println(summon[B].b)
}
```

I think it's also unambiguous that
It could be reasonable for certain DSLs, e.g. there could be a "mode-switch" statement that adds a constraint and defines the type for the statements in the same block:

```scala
def robotDsl(f: Mode1 ?=> Unit)
def robotDsl(f: Mode2 ?=> Unit)

robotDsl {
  dryRunMode; step(right); step(down, 2);
}
```

However, allow me to describe my specific use-case for context functions. I maintain a dependency injection library, distage. The point of the framework is to represent the program fully as a first-class graph value and make all parameters manageable / overridable. That includes implicits. It has a persistent issue in Scala 2 that makes using functions worse than using classes, because function eta-expansion causes eager resolution of implicits. Functions with implicit parameters cannot even be referred to in a way that allows deferring implicit arguments to the framework, instead of resolving them from lexical scope:

```scala
import distage._
import cats.effect._

def resourceCtor[F[_]: Sync]: Resource[F, String] = Resource.pure("hi")

def injectionModule[F[_]: TagK] = new ModuleDef {
  // Sync implicit is managed
  make[Sync[F]].from(...)
  // implicit error, tries to get Sync immediately, instead of creating a reusable function to parameterize Sync later
  make[String].fromResource(resourceCtor[F] _)
  // to eta-expand must list all implicit arguments _and_ their types, does not scale
  make[String].fromResource(resourceCtor[F](_: Sync[F]))
}
```

Instead, the current workaround is to wrap the implicit function into a class, because a class can be referred to without causing implicit resolution to be performed immediately. Then macros generate a constructor function based on the class constructor and we've worked around the issue:

```scala
// wrap the expression as a class
class ResourceCtor[F[_]: Sync](
) extends Lifecycle.OfCats(resourceCtor[F])

def fixedModule[F[_]: TagK] = new ModuleDef {
  make[String].fromResource[ResourceCtor[F]] // reference to a type does not cause implicit resolution, Sync[F] is now managed
}
```

But clearly this is very sub-par and intrusive. To fix this issue, all that's required is to be able to pass an expression in such a way that captures all of its requirements, including givens, floating all possible constraints to the context function type without triggering implicit search. It could be done without overloading if there was some kind of context function super-type that could work as a catch-all for all context functions (such as if a union of all context function types didn't immediately cancel context function capture; or if typeclasses could work with context functions). Otherwise the current go-to method is a magnet pattern listing all function types:

```scala
// a function with captured arguments yielding A
class Functoid[+A]

object Functoid {
  // all normal functions:
  inline implicit def apply[A, R](f: A => R): Functoid[R]
  ...
  inline implicit def apply[A .. N, R](f: (A .. N) => R): Functoid[R]
  // all context functions:
  inline implicit def apply[Ac .. Nc, R](f: (Ac .. Nc) ?=> (A .. N) => R): Functoid[R]
  inline implicit def apply[R](f: ContextFunctionN[FunctionN[R]]): Functoid[R]
}
```

For which the expected best behavior is to choose the widest possible overload, capturing as many context parameters as possible, preferring to float constraints that are unresolved, similar to Haskell's treatment of higher-rank functions with typeclass parameters. |
I think I understand your usecase. But I am not sure how one could implement such a catch all implicit without giving up clarity in other use cases. In particular, note that the arguments to context functions are in covariant position and we do not allow overloads that only vary in this position in other cases. |
Hence my strong intuition that we want to avoid overloads that only vary in the arguments of context functions. What do you think @smarter ? |
Hmm, I'm not sure I follow:

```scala
scala> def f(f: Int => String) = 1; def f(f: Boolean => String)(using DummyImplicit) = 2

scala> def str[A](a: A) = a.toString
def str[A](a: A): String

scala> f(str[Int])
val res1: Int = 1

scala> f(str[Boolean])
val res2: Int = 2
```

These seem to differ only in the function argument as well. Is there an example somewhere of this restriction on overloads? |
@neko-kai Sorry, you are completely right. I shouldn't have used "covariant", more "return position". It is a bit difficult for me to explain, but arguments to contextual function parameters feel much more like returns than is the case with explicit functions. |
Actually for @b-studios' example I would find the
This does compile and prints For the DSL example I would say something like
would be clearer |
@prolativ Regarding the DSL example: I completely agree that this would be a clear way to express it. For my example above: I still think both would make sense and I would like to see situations like this being ruled out. Contextual abstractions are already quite sophisticated and combining them with this kind of overloading (which also can be non-trivial) seems overkill. |
I do not understand your example. In current dotty, context functions are flatly canceled by overloads: there is no way to choose the second overload whatsoever, with the exception of an explicit context function literal. The interaction is currently disabled due to the need to figure it out and implement it, not for any profound reason. The same applies to interaction with unions or intersections, e.g. meaningless tautologies also cancel context capture:

```scala
scala> def f(getInt: (Int ?=> Unit) | (Int ?=> Unit)) = getInt(using 0)
def f(getInt: ((Int) ?=> Unit) | ((Int) ?=> Unit)): Unit

scala> f(implicitly[Int])
1 |f(implicitly[Int])
  | ^
  |no implicit argument of type Int was found for parameter e of method implicitly in object Predef

scala> def f(getInt: (Int ?=> Unit) & (Int ?=> Unit)) = getInt(using 0)
def f(getInt: ((Int) ?=> Unit) & ((Int) ?=> Unit)): Unit

scala> f(implicitly[Int])
1 |f(implicitly[Int])
  | ^
  |no implicit argument of type Int was found for parameter e of method implicitly in object Predef

scala> def f(getInt: (Int ?=> Unit)) = getInt(using 0)
def f(getInt: (Int) ?=> Unit): Unit

scala> f(implicitly[Int])
```

Simply put, the compiler is not ready yet in this area; using its current behavior as an example of "as-it-should-be" is not helpful.
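For contrast, a sketch of the plain (non-union, non-intersection) parameter type, where context capture does kick in as expected on Scala 3:

```scala
def f(getInt: Int ?=> Unit) = getInt(using 0)

var seen = -1

@main def demo(): Unit =
  // The block is converted to a context function, so summon[Int] resolves
  // to the context argument supplied inside f, i.e. 0.
  f { seen = summon[Int] }
  println(seen) // prints 0
```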
But

```scala
def withB(f: B ?=> Unit) = f(using B(42))

given B(1)

withB {
  println(summon[B])
}
// B(42)
```

Here, lexically, this makes sense. Now, if we were to add an empty overload:

In the same situation, e.g. Haskell would float out the typeclass constraint into the type of the expression instead of resolving it eagerly, and choose the typeclass instance with more typeclass contexts, not less. This makes sense if you consider the expression's type separately from all other context: with all implicits floated out into the context function type, without attempting to resolve implicits eagerly within the body, this has a type |
In case of
it is quite explicit that
the method call would desugar to
so this still would be consistent with |
While @neko-kai's use case is important, I am closing this issue since it is not a bug with respect to the current specification of context functions. |
Reopening as a feature request |
Minimized code
Output
Expectation
Expected success with output