Will an optimal algorithm halt when it has read enough of the input to generate a correct answer?
I'll ask the question informally first and then try to work up to something more formal.
Consider a linear search problem. We are presented with an unsorted array of numbers and are asked to determine whether a certain number is in the array. There seems to be no better algorithm than to search linearly through the array, testing each array element in turn to see if it's equal to the test number. The point here is that if we do find it, we don't have to search any further.
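To make the early-exit behaviour concrete, here is a minimal Python sketch (the function name and the test data are just for illustration):

```python
# Linear search with early termination: the loop returns as soon as the
# target is found, so on a hit it reads only a prefix of the array.
def linear_search(arr, target):
    for idx, value in enumerate(arr):
        if value == target:
            return idx        # found: halt without reading the rest
    return -1                 # absent: we had to read everything to be sure

print(linear_search([7, 3, 9, 3], 9))  # -> 2, after reading only 3 of 4 elements
```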
The question is whether this is more generally true. For example, if we are asked to find a clique of size $k$ in a graph, we can quit once we find one, at least if we use the naive algorithm which simply checks each combination of vertices of size $k$. Does the optimal algorithm also halt when it has read enough of the input?
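Here is a sketch of that naive clique search in Python (the adjacency-set representation and the example graph are assumptions for illustration):

```python
# Naive k-clique search: try each vertex subset of size k and stop at the
# first one whose vertices are pairwise adjacent.
from itertools import combinations

def find_clique(adj, k):
    for subset in combinations(adj, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return subset     # halt as soon as one clique is found
    return None               # no clique: every subset was examined

# A triangle with one pendant vertex: the search can stop at the first triple.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(find_clique(adj, 3))    # -> (0, 1, 2)
```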
Let's try to formalize this. (EDIT: the model of computation we will use here is a Turing machine.) An input $i \in \{0,1\}^n$ is just $n$ bits. Let's say algorithms $A$ and $B$ (EDIT: $A$ and $B$ are fixed) compute a problem $P$, with $p$ an instance of $P$. Define $A$ as optimal if there is no $B$ with better time performance. (EDIT: we'll assume worst-case analysis until further clarifications are necessary.)
EDIT: Here's a formalization of the time performance optimality measure; it's modeled on big $O$ but different. If $f$ and $g$ are two functions defined on the natural numbers, then we'll say $f(n) = \Phi(g(n))$ if there is a constant $C$ such that
$$ f(n) < g(n) \quad \textrm{for all } n > C. $$
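For example, $2n = \Phi(n^2)$ with $C = 2$, since $2n < n^2$ for all $n > 2$. Note that, unlike big $O$, this definition allows no constant factor in front of $g$, so even a saving of a single step on all sufficiently long inputs makes one algorithm's running time $\Phi$-better than another's.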
Let's put some other constraints on $P$. Some instances $p$ can be solved by looking at a prefix $q$ of $i$, that is, $i=qw$ for some non-empty $w$. Will $A$ stop as soon as it has read and computed on $q$, or must it read $w$ as well?
Here's an alternative phrasing which describes the counterexample I'm looking for. The counterexample would exhibit specific $P, A, p, i, q$ such that $A$ is an optimal algorithm for $P$ and $p$ is a specific instance described by the bits of $i$, where even though $p$ can be solved based on $q$ alone, algorithm $A$ nonetheless necessarily reads all of $i$ to solve it.
logic computer-science computational-complexity
asked Aug 27 at 18:13, edited Aug 28 at 18:58 – ShyPerson
I have doubts that "optimal" algorithms exist in the sense you're using it -- i.e. with comparisons so crisp that spending time to move past another bit of input before terminating would count as a difference. For any $n$, we can always make an algorithm that answers in time $|p|$ whenever $|p| < n$, simply by hard-coding the response to all short enough inputs -- and increasing $n$ would always give "strictly better performance".
– Henning Makholm, Aug 27 at 18:25
(This assumes we can use as many Turing machine states as we want -- but if we want a performance measure that penalizes algorithms for having many states, arguing for "optimality" of any algorithm would seem to become infeasibly hard.)
– Henning Makholm, Aug 27 at 18:26
If you have access to a quantum computer, Grover's Algorithm can give you a quadratic speedup over linear search.
– Adrian Keister, Aug 27 at 18:39
Your definition of optimality is unclear - why do you mention the specific instance $p$? In general, the relative efficiency of two algorithms $A$ and $B$ will depend on the instance. E.g., linear search working down from the end of the input will be faster than working from the beginning if the numbers involved are all large and the number you are looking for only appears near the end of the input.
– Rob Arthan, Aug 27 at 20:16
@mathreadler: the OP has said the model of computation is a Turing machine. The question is about the mathematical discipline of complexity theory, not the engineering discipline of hardware design.
– Rob Arthan, Aug 28 at 19:48
2 Answers
With your clarified definitions I will still say that the answer is "vacuously yes", because optimal machines do not exist at all for any problem other than recognizing a regular language.
More precisely, if you have any machine $M$ that solves the problem, I can construct another machine that completes faster in the worst case for every sufficiently long input. Because even constant terms in the running time matter, we can adapt the linear speedup theorem to work without increasing the size of the tape alphabet:
My machine will work just like $M$, but it remembers the content of 10 squares of the original tape internally as part of its state; those 10 squares are not represented on the tape. Therefore it can do in a single step any part of the computation where $M$ stays within those 10 squares. Only when $M$ moves out of the window to the left or the right does my machine need to make a step. For example, if $M$ moves out of the window to the right, my machine reads a symbol from the tape, adds it to the right end of its internal memory, and writes the leftmost symbol from its old state to the square it just read. Then it moves in the direction where $M$ will next leave its new window.
(My machine starts knowing that the 10 squares to the immediate left of the input are blank. The infinite blank tape to the left may equally well represent itself shifted 10 squares, so I have no set-up costs.)
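To make the step counting concrete, here is a toy Python sketch (a rough approximation of the construction above: it simply re-centers the cached window on the head, and the machine at the bottom is made up for the demo):

```python
# Run a base machine M step by step; charge the windowed machine one step
# only when M's head leaves the cached window of `window` squares.
def run(delta, tape, state="q0", window=10, max_steps=10_000):
    tape = dict(enumerate(tape))              # sparse tape, "_" for blank
    head, base_steps, macro_steps = 0, 0, 0
    lo, hi = head - window // 2, head + window // 2
    while state != "halt" and base_steps < max_steps:
        state, out, move = delta[(state, tape.get(head, "_"))]
        tape[head] = out
        head += 1 if move == "R" else -1
        base_steps += 1
        if not (lo <= head <= hi):            # head left the cached window:
            macro_steps += 1                  # only now does the windowed machine pay
            lo, hi = head - window // 2, head + window // 2
    return base_steps, macro_steps

# M walks right across the input and halts at the first blank.
delta = {("q0", "0"): ("q0", "0", "R"), ("q0", "_"): ("halt", "_", "R")}
print(run(delta, "0" * 30))  # -> (31, 5): far fewer charged steps
```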
Since $P$ is not regular, for long enough inputs $M$ has to do some back-and-forth movement on the tape in the worst case. Each time it changes direction, my machine gets to skip at least a few of $M$'s steps, so from a certain length onwards it is always at least a few steps quicker than $M$.
This speedup comes at a cost: in general I may need 1024 times as many states in my machine as $M$ has. As mentioned in the comments, one might imagine cost models that penalize a machine for having many states, but that is certainly not the standard way to measure efficiency for Turing machines, and your definition of "optimality" does not specify how one would do that.
On the other hand, for a regular $P$ we can easily construct a machine that reads the tape at full speed and stops with the right answer the instant it has read enough of the input that the answer can be known with certainty.
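As an illustration of this last point (the language "the input contains a 1" is an assumed example), a left-to-right scanner stops the moment it enters an absorbing state:

```python
# For "contains a 1", the accepting state is absorbing: once a 1 is seen,
# no later symbol can change the answer, so the scan can halt immediately.
def contains_one(bits):
    for n_read, b in enumerate(bits, start=1):
        if b == "1":
            return True, n_read   # answer known with certainty: stop here
    return False, len(bits)       # answering "no" requires every bit

print(contains_one("0001000000"))  # -> (True, 4): six bits never read
```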
answered Aug 28 at 19:56 – Henning Makholm
Thanks for your answer. What's the procedure now that I need to substantially rephrase my question? Do I open a new question or put the edits into this one?
– ShyPerson, Aug 29 at 0:32
@ShyPerson: At this point, I think asking a new one will be preferable. (Before you do, however, consider the problem of determining whether the input is a SAT instance followed by a satisfying variable assignment. In some cases we can in principle answer "no" as soon as we read the first part of the input (because the formula is not satisfiable), but unless P=NP there are inputs where it is much faster to read everything and verify the truth assignment than to determine reliably whether the formula is satisfiable.)
– Henning Makholm, Aug 29 at 0:36
Thanks; I've posted a new version here.
– ShyPerson, Sep 5 at 4:25
Hint: I call this a "hint" because I don't claim to have a formal proof, but I am sure the details can be filled in.
I claim that we can construct problems where we don't necessarily have to read all the input to solve the problem, but it is computationally more expensive to decide how much of the input we need than it is just to read all the input.
E.g., let the problem space comprise triples $\langle i, n, t \rangle$ where $i, n \in \mathbb{N}$ and $t$ is a sequence of tuples giving the state transitions of a Turing machine $T = T_t$. Let the set $P$ of acceptable inputs be the set of such triples such that $T$ halts on input $i$ after at most $n$ steps. There is an algorithm that solves this problem without necessarily reading all of its input by checking initial subsequences of $t$ to see whether they are closed under the transition relation, i.e., the algorithm keeps checking to see how much of the input it needs to know (perhaps for the specific $i$ and $n$ given as the first part of the input). The repeated simulations or applications of something like the Floyd-Warshall algorithm to make these checks will surely be much slower than just simulating $T$ on $i$ for $n$ steps.
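For comparison, here is a minimal sketch of the cheap "just read everything" side (the encoding of $t$ as a Python dictionary, and the representation of $i$ as a tape string, are assumptions for illustration):

```python
# Decide whether T = T_t halts on input i within n steps by direct
# simulation, reading all of t up front; this costs O(n) simulated steps.
def halts_within(t, i, n):
    tape, head, state = dict(enumerate(i)), 0, "q0"
    for _ in range(n):
        key = (state, tape.get(head, "_"))
        if key not in t:          # no applicable transition: T has halted
            return True
        state, sym, move = t[key]
        tape[head] = sym
        head += 1 if move == "R" else -1
    return False                  # still running after n steps

# A one-state machine that marches right and halts at the first blank.
t = {("q0", "0"): ("q0", "0", "R"), ("q0", "1"): ("q0", "1", "R")}
print(halts_within(t, "0110", 10))  # -> True: this T halts after 4 steps
```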
answered Aug 28 at 20:15, edited Aug 28 at 20:26 – Rob Arthan
Thanks for this interesting scenario. Do I correctly understand that it describes a problem where a relatively slow algorithm reads potentially less input while a faster algorithm reads all of it?
– ShyPerson, Sep 2 at 18:55
Yes, that is what I believe happens with my proposed example. As I say in the answer, I don't claim to have a formal proof of this.
– Rob Arthan, Sep 2 at 19:27