
Solving problems by computer just got a lot faster



https://www.sciencenews.org/article/solving-problems-computer-just-got-lot-faster?tgt=nr

Quote

A new computer program works smarter, not harder, to solve problems faster than its predecessors.

The algorithm is designed to find the best solution to a given problem among all possible options. Whereas other computer programs winnow down the possibilities one at a time, the new program — presented July 12 at the International Conference on Machine Learning in Stockholm — rules out many choices at once.

 


This work isn't nearly as general as the Science News article makes it sound, and most of the contributions are theoretical analyses. That's not to shit on theoretical analysis (it's incredibly important to advancing what we do), but this is not some magic wand by any means.


10 hours ago, legend said:

This work isn't nearly as general as the Science News article makes it sound, and most of the contributions are theoretical analyses. That's not to shit on theoretical analysis (it's incredibly important to advancing what we do), but this is not some magic wand by any means.

I wasn't sure, but wouldn't this program have a chance of eliminating the correct answer? I'm probably just completely misunderstanding it.


38 minutes ago, Remarkableriots said:

I wasn't sure, but wouldn't this program have a chance of eliminating the correct answer? I'm probably just completely misunderstanding it.

 

It's a method for optimizing a specific class of objective functions. Roughly: given a set of items, the objective function scores how good that set is, and we want to find the set that maximizes that score. The method also assumes certain properties about how the objective function behaves (if those properties don't hold, it's unclear whether it will work well at all). It's a sampling-based approach, so there is no guarantee it will actually find the input that maximizes the objective. But it has a pretty good chance of returning an input that scores highly, and the only reason to use such an algorithm is when computing the exact optimal answer with certainty would require far too much compute.
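To make that concrete, here's a minimal sketch of the general "sample sets, keep the best one you've seen" idea. The toy objective and names are my own invention; this is not the paper's actual adaptive-sampling algorithm:

```python
import random

# Toy illustration only -- NOT the paper's adaptive-sampling algorithm.
# The objective scores a set of items; duplicates add nothing
# (diminishing returns), which is the flavor of function involved here.

ITEMS = list(range(20))   # ground set of items to choose from
K = 5                     # size of the set we're allowed to pick

def score(subset):
    """Hypothetical objective: count the distinct 'buckets' covered."""
    return len({item % 7 for item in subset})

def sample_and_keep_best(num_samples=1000):
    best_set, best_score = None, float("-inf")
    for _ in range(num_samples):
        candidate = random.sample(ITEMS, K)  # draw a random size-K set
        value = score(candidate)
        if value > best_score:
            best_set, best_score = candidate, value
    return best_set, best_score

print(sample_and_keep_best())
```

There's no certainty the true maximizer ever gets sampled, which is exactly the trade-off: you give up the guarantee to avoid checking every possible set.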


5 minutes ago, legend said:

 

It's a method for optimizing a specific class of objective functions. Roughly: given a set of items, the objective function scores how good that set is, and we want to find the set that maximizes that score. The method also assumes certain properties about how the objective function behaves (if those properties don't hold, it's unclear whether it will work well at all). It's a sampling-based approach, so there is no guarantee it will actually find the input that maximizes the objective. But it has a pretty good chance of returning an input that scores highly, and the only reason to use such an algorithm is when computing the exact optimal answer with certainty would require far too much compute.

Thank you! I understand it better with that explanation.

 

I was wondering, have you heard anything about how they were supposed to use a new binary code to increase processing speed? It seems like all the articles I can find are old. It was where they could use 1, 0, and -1 by using a positive, negative, and no charge? I could be remembering that wrong.


From the abstract:

Quote

Adaptive sampling achieves its exponential speedup at the expense of approximation. In theory, it is guaranteed to produce a solution that is a 1/3 approximation to the optimum

So this is much faster, at the cost of a potentially much worse result. I imagine there are some very good uses for this algorithm, and I can see its output being used as a starting point. However, for a lot of problems, the computing power required really isn't a big deal.
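To put that 1/3 bound in concrete terms, here's a made-up toy instance, just to show what the guarantee does and doesn't promise:

```python
from itertools import combinations

# Made-up coverage instance: each item "covers" some elements, and we
# may pick K items. Brute force is fine at this size, hopeless at scale.
SETS = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5, 6, 7, 8},
    "d": {1, 5},
}
K = 2

def coverage(picked):
    return len(set().union(*(SETS[p] for p in picked)))

opt = max(coverage(combo) for combo in combinations(SETS, K))

# A 1/3 approximation only promises a value >= opt / 3. If the true
# optimum covers 7 elements, a solution covering 3 still meets the bound.
print("optimum:", opt, "| worst value the guarantee allows:", opt / 3)
```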


53 minutes ago, Remarkableriots said:

Thank you! I understand it better with that explanation.

 

To be honest, it's not a well-written paper, so don't beat yourself up too much :p A lot of work seems more complex than it is because a lot of computer scientists are awful writers. I don't consider myself "great" at it either, but I do continue to improve.

 

Quote

I was wondering, have you heard anything about how they were supposed to use a new binary code to increase processing speed? It seems like all the articles I can find are old. It was where they could use 1, 0, and -1 by using a positive, negative, and no charge? I could be remembering that wrong.

 

38 minutes ago, Remarkableriots said:

 

When you first learn about binary systems, you learn that mathematically there is nothing special about binary, nor even about decimal, which is what you tend to be used to. Mathematically, they're all equivalent, so the possibility of using other bases is always conceivable. But physically there isn't much reason to do so: computer hardware tends to be easiest to design around an underlying binary system (charge, or no charge).
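Here's the idea from your question in code form: a balanced-ternary encoding with digits 1, 0, and -1 (positive, negative, or no charge). Purely illustrative math on my part, not tied to any real hardware:

```python
# Balanced ternary: digits are 1, 0, and -1 -- the "positive, negative,
# or no charge" scheme from the question. Toy math, not real hardware.

def to_balanced_ternary(n):
    """Digits of n in balanced ternary, least significant first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:              # remainder 2 becomes digit -1, carry 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    """Decode -- mathematically every base works the same way."""
    return sum(d * 3**i for i, d in enumerate(digits))

for n in (-7, 0, 11):
    bt = to_balanced_ternary(n)
    assert from_balanced_ternary(bt) == n
    print(n, "->", bt)
```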

 

The article you posted seems to be suggesting that for quantum computers it may be better to do otherwise; that using a different base will suit that kind of machinery better. That seems entirely plausible, but you probably haven't heard much more because quantum computing is progressing very slowly. If we get useful quantum computers in the next five years (meaning we would actually benefit from using them to do something), I think we'll be doing well. That should illustrate how early the field still is.

 

33 minutes ago, TwinIon said:

From the abstract:

So this is much faster, at the cost of a potentially much worse result. I imagine there are some very good uses for this algorithm, and I can see its output being used as a starting point. However, for a lot of problems, the computing power required really isn't a big deal.

 

Optimization problems tend to be incredibly hard to solve exactly, except for very specific classes of problems. All of "deep learning" is one big optimization problem, and even with all the compute people throw at it, it's nowhere remotely near being able to ensure an optimal answer. Worse still, the answer deep learning gives doesn't even come with any approximation-bound guarantees!
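A quick sketch of why gradient-based training carries no such guarantee: on a non-convex objective, where you land depends entirely on where you start. This is a toy 1-D function I made up for illustration:

```python
# Toy non-convex objective with two valleys of different depths.
def f(x):
    return (x**2 - 1)**2 + 0.5 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.5

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)   # plain gradient descent
    return x

# Starting on the right, we settle in the shallow valley and never
# see the deeper one -- no bound on how far from optimal we end up.
for start in (-2.0, 2.0):
    x = descend(start)
    print(f"start {start:+.1f} -> x = {x:+.3f}, f(x) = {f(x):+.3f}")
```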

 

In this case, though, the solution they propose is for a fairly specific class of optimization problems, and even then, the conventional algorithm for this class still does pretty well. It's not going to take the world by storm :p

