Lambda Calculus and Formal Grammar: Random notes on logic, language, and computation<br />
Makoto Kanazawa<br />
<br />
<b>PSPACE-completeness of LCFRS universal recognition</b> (September 15, 2014)<p>
About a month ago, Marco Kuhlmann <a href="https://twitter.com/khlmnn/status/501675437500936193">asked on Twitter</a> whether the universal recognition problem for <em>non-permuting</em> LCFRSs is PSPACE-complete. (Non-permuting LCFRSs were called <em>monotone</em> LCFRSs by Michaelis and by Kracht. They do not allow arguments of the same nonterminal on the right-hand side of a rule to appear in reversed order on the left-hand side.) Good question. <a href="http://search.ieice.org/bin/summary.php?id=e75-d_1_78">Kaji et al. (1992)</a> proved that the universal recognition problem for arbitrary LCFRSs is PSPACE-complete by reduction from an arbitrary language in PSPACE. The LCFRS that is the output of the reduction is not non-permuting, and converting a given LCFRS to a non-permuting normal form may result in an exponential blowup in size, so this result by itself does not answer Marco's question. I couldn't give an answer immediately because I was not familiar with the details of Kaji et al.'s proof. I found the proof not so easy to read, but, after spending a few days trying to understand it, I found that it is easy to modify their construction to make the output always non-permuting. So the answer to Marco's question is, yes, it is PSPACE-complete.
</p>
<p>
Kaji et al.'s construction is designed to simulate an accepting computation of a polynomial-space-bounded Turing machine by a derivation of an LCFRS of rank 1 and dimension (or fan-out or arity) proportional to the number of tape cells used in the computation. Although this construction works for arbitrary polynomial-space-bounded Turing machines, the reduction is from a fixed language \(L\) in PSPACE to the universal recognition problem for general LCFRSs, i.e., the reduction transforms a string \(w\) to a pair \((G_w,z_w)\) consisting of an LCFRS and a string such that \(w \in L\) if and only if \(z_w \in L(G_w)\). So you work with a fixed Turing machine \(M\) which accepts \(L\) and operates in space \(p(n)\), a fixed polynomial function of \(n\). The important point is that the function \(p(n)\) doesn't vary from input to input. (Otherwise the reduction can't be carried out in polynomial time.)
</p>
<p>
Every nonterminal \(A\) of \(G_w\) except the start nonterminal \(S\) has the same dimension \(d\), which is a constant multiple of \(p(|w|)\), the space used in a possible accepting computation on input \(w\). A tuple that is a possible argument of nonterminal \(A\) consists of two parts—most of the components of the tuple represent the contents of \(M\)'s tape, and the remaining components (of which there is just one in Kaji et al.'s original construction) act as a kind of "trash can". Each cell of \(M\)'s tape is represented by a tuple consisting of \(\epsilon\)s and 1s. Although there are more efficient codings, it suffices to take a tuple of the form
\[
\epsilon,\dots,\epsilon,1,\epsilon,\dots,\epsilon
\]
whose length \(t\) equals the size of the tape alphabet of \(M\). The position of the unique occurrence of \(1\) tells what symbol is occupying the cell. The head position and the state of \(M\) can be coded in nonterminals, so that a "derived fact"
\[
A_{q,i}(\ulcorner c_1 \urcorner,\dots,\ulcorner c_{p(n)} \urcorner,\epsilon)
\]
represents a configuration of \(M\) in which \(M\) is in state \(q\), its head is at the \(i\)th cell from the left, and the tape content is \(c_1 \dots c_{p(n)}\). Here, \(\ulcorner c \urcorner\) is the above tuple representation of the symbol \(c\), and the final argument of \(A_{q,i}\) is the trash can, which is always required to contain \(\epsilon\).
</p>
<p>
A single step in \(M\)'s computation on \(w = c_1 \dots c_n\) is accounted for by a single rule of \(G_w\). Let us assume for simplicity that a single step of \(M\) either just rewrites the symbol under the head (without moving) or just moves the head one cell to the left or right (without writing). (Remember that \(M\) is not part of the input to reduction, so we can take any polynomial-space-bounded Turing machine we like that accepts \(L\).) The first type of move corresponds to a rule of \(G_w\) of the form
\begin{equation}
\label{rule1}
A_{r,i}(\vec{x_1},\dots,\vec{x_{i-1}},\underbrace{\epsilon,\dots,\epsilon}_{l-1},x_{i,k},\underbrace{\epsilon,\dots,\epsilon}_{t-l},\vec{x_{i+1}},\dots,\vec{x_{p(n)}},x_{i,1} \dots x_{i,k-1}x_{i,k+1} \dots x_{i,t}y) \leftarrow A_{q,i}(\vec{x_1},\dots,\vec{x_{p(n)}},y),
\end{equation}
and the second type of move corresponds to a rule of the form
\begin{equation}
\label{rule2}
A_{r,i \pm 1}(\vec{x_1},\dots,\vec{x_{p(n)}},y) \leftarrow A_{q,i}(\vec{x_1},\dots,\vec{x_{p(n)}},y).
\end{equation}
Here, \(\vec{x_j}\) stands for a tuple of variables \(x_{j,1},\dots,x_{j,t}\) of length \(t\). The change from
\[
x_{i,1},\dots,x_{i,t}
\]
to
\[
\underbrace{\epsilon,\dots,\epsilon}_{l-1},x_{i,k},\underbrace{\epsilon,\dots,\epsilon}_{t-l}
\]
in (\ref{rule1}) expresses the rewriting of the symbol represented by
\[
\underbrace{\epsilon,\dots,\epsilon}_{k-1},1,\underbrace{\epsilon,\dots,\epsilon}_{t-k}
\]
with another symbol; it moves the \(k\)th component of the tuple to the \(l\)th component, while moving all the other components to the trash can.
</p>
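To see the bookkeeping of the first rule type at work, here is a sketch in Python of its effect on an argument tuple. The grammar itself has no such procedure, of course; the hypothetical helper `apply_write_rule` just traces how a derived fact changes when the rule is applied.

```python
# Sketch of what a rewriting rule does to the argument tuple of A_{q,i}:
# the cell's k-th component is moved to the l-th position, and every other
# component of that cell is appended to the trash can.

def apply_write_rule(cells, trash, i, k, l):
    """cells: list of t-tuples; rewrite cell i, moving its k-th component to index l."""
    t = len(cells[i])
    old = cells[i]
    new_cell = tuple(old[k] if j == l else "" for j in range(t))
    new_trash = trash + "".join(old[j] for j in range(t) if j != k)
    return cells[:i] + [new_cell] + cells[i + 1:], new_trash

# If the i-th cell really encodes a symbol (one "1", rest empty) and k is the
# index of that "1", the trash stays empty and the "1" moves from k to l:
cells = [("1", "", ""), ("", "1", "")]
new_cells, trash = apply_write_rule(cells, "", 1, 1, 2)
assert new_cells == [("1", "", ""), ("", "", "1")] and trash == ""
```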
<p>
The initial configuration of \(M\) corresponds to the unique terminating rule
\begin{equation}
\label{rule3}
A_{q_I,1}(\ulcorner c_1 \urcorner,\dots,\ulcorner c_n \urcorner,\ulcorner b \urcorner,\dots,\ulcorner b \urcorner, \epsilon) \leftarrow ,
\end{equation}
where \(q_I\) is the initial state of \(M\) and \(b\) stands for the blank symbol. For each final state \(q\) and \(i=1,\dots,p(n)\), there is the rule
\begin{equation}
\label{rule4}
S(x_{1,1}\dots x_{1,t} \dots x_{p(n),1} \dots x_{p(n),t} \# y) \leftarrow A_{q,i}(\vec{x_1},\dots,\vec{x_{p(n)}},y),
\end{equation}
which concatenates all the tuples representing the tape and affixes \(\#\) followed by the content of the trash can. Setting \(z_w = 1^{p(n)} \# \) enforces the requirement that the string in the trash can always be empty.
</p>
<p>
Now rules of \(G_w\) of type (\ref{rule1}) are not non-permuting, because variables that get moved to the trash can may move over other variables. To avoid this, an easy solution is to place trash cans to the left and to the right of every tuple representing a tape cell:
\begin{equation}
\tag{$1'$}
A_{r,i}(y_0,\vec{x_1},y_1,\dots,\vec{x_{i-1}},y_{i-1} x_{i,1} \dots x_{i,k-1},\underbrace{\epsilon,\dots,\epsilon}_{l-1},x_{i,k},\underbrace{\epsilon,\dots,\epsilon}_{t-l},x_{i,k+1} \dots x_{i,t} y_i,\vec{x_{i+1}},y_{i+1},\dots,\vec{x_{p(n)}},y_{p(n)}) \leftarrow A_{q,i}(y_0,\vec{x_1},y_1,\dots,\vec{x_{p(n)}},y_{p(n)}).
\end{equation}
The dimension \(d\) of nonterminals (other than \(S\)) has changed, so we need to adjust rules (\ref{rule2}) and (\ref{rule3}) accordingly:
\begin{gather}
\tag{$2'$}
A_{r,i \pm 1}(y_0,\vec{x_1},y_1,\dots,\vec{x_{p(n)}},y_{p(n)}) \leftarrow A_{q,i}(y_0,\vec{x_1},y_1,\dots,\vec{x_{p(n)}},y_{p(n)}),
\\
\tag{$3'$}
A_{q_I,1}(\epsilon,\ulcorner c_1 \urcorner,\epsilon,\dots,\ulcorner c_n \urcorner,\epsilon,\ulcorner b \urcorner,\epsilon,\dots,\ulcorner b \urcorner, \epsilon) \leftarrow .
\end{gather}
To force each trash can to be empty, we now let \(z_w = (\# 1 \#)^{p(n)}\) and replace (\ref{rule4}) by
\begin{equation}
\tag{$4'$}
S(y_0 \# x_{1,1} \dots x_{1,t} \# y_1 \# x_{2,1} \dots x_{2,t} \# y_2 \dots x_{p(n),1} \dots x_{p(n),t} \# y_{p(n)}) \leftarrow A_{q,i}(y_0,\vec{x_1},y_1,\dots,\vec{x_{p(n)}},y_{p(n)}).
\end{equation}
</p>
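The way the string \(z_w\) polices the trash cans in the modified construction can also be sketched directly. The function `final_string` below is a hypothetical tracing helper, not part of the grammar; it assembles the string produced by rule (4′) from the cell tuples and trash cans.

```python
# Sketch of the string produced by rule (4'): trash cans and cell contents are
# interleaved with "#" separators. A derivation succeeds only if the result
# equals z_w = ("#1#") * p(n), i.e. every trash can is empty and every cell
# holds exactly one "1".

def final_string(cells, trashes):
    """trashes has one more entry than cells: y_0, y_1, ..., y_{p(n)}."""
    parts = [trashes[0]]
    for cell, y in zip(cells, trashes[1:]):
        parts.append("#" + "".join(cell) + "#" + y)
    return "".join(parts)

p = 2
cells = [("1", "", ""), ("", "1", "")]   # two valid cell encodings
trashes = ["", "", ""]                   # all trash cans empty
assert final_string(cells, trashes) == "#1#" * p
```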
<p>
With these changes, the grammar \(G_w\) is now non-permuting, and since it is of rank 1, it is also well-nested. So this shows that universal recognition for well-nested MCFGs (LCFRSs) is also PSPACE-complete, something I have always claimed without realizing that Kaji et al.'s proof needs to be modified to establish it.
</p>
<b>Machine-Based Approach to Pumping</b> (December 3, 2013)<br />
<br />
(January 1, 2014: The statement of "Harrison's theorem" in the original post was in error. The error is now corrected. See <a href="/2013/12/machine-based-approach-to-pumping.html#correction">here</a>.)<br />
<br />
(December 4, 2013: Scroll to <a href="/2013/12/machine-based-approach-to-pumping.html#update">the end of the post</a> for an update.)<br />
<br />
When I was a first-year Ph.D. student at Stanford, I took an automata theory course offered by the computer science department, taught by the late <a href="http://en.wikipedia.org/wiki/Robert_W._Floyd">Robert Floyd</a>. The course used a draft version of a textbook he and Richard Beigel were writing, later published as <a href="http://www.amazon.com/The-Language-Machines-Introduction-Computability/dp/0716782669">The Language of Machines</a> in 1994. The book, presumably influenced by <a href="http://www.sciencedirect.com/science/article/pii/S002200006780014X">an old paper by Dana Scott</a>, adopted a rather unconventional approach to automata theory. A clear separation between machines and programs and abstract notions of machines were two such unconventional features. I quite liked the book, but it would probably be good to have another textbook with more standard notations and terminology, such as <a href="http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/113318779X">Michael Sipser's</a>.<br />
<br />
Anyway, one of the interesting things about the draft of Floyd and Beigel's book was that they gave a proof of the pumping lemma for context-free languages using pushdown automata, rather than context-free grammars. I had already seen the standard, grammar-based proof, but I was fascinated by the PDA-based proof in the book. In the grammar-based proof, you look at a long path in a derivation tree that has two occurrences of the same nonterminal on it. In the machine-based proof, you look at a computation in which the stack exceeds a certain height. With the definition of the PDA adopted in the book, where you can push a symbol on the stack regardless of the top symbol of the stack, the height to be exceeded is simply \(|Q|^2\), where \(Q\) is the state set of the PDA.<br />
<br />
A tricky thing about this proof is that it won't do to just take a very long string, because a PDA can run indefinitely long without the stack ever exceeding a given height. But this is OK because the set of strings that a PDA accepts without the stack height ever exceeding \(h\) is regular, no matter what \(h\) is. In the margin next to the proof in Floyd and Beigel's draft, there was a note that said, "A hard exercise for a machine-based proof." Alas, the proof was replaced by the standard, grammar-based proof in the published version of the book.<br />
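The regularity claim rests on a finite-state observation: with the stack height capped at \(h\), a configuration (state, stack contents) ranges over a finite set, so the height-bounded PDA is just an NFA whose states are those configurations. A minimal sketch of the counting (the helper `bounded_configs` is mine, not from Floyd and Beigel's book):

```python
# Enumerate the finitely many (state, stack) pairs with |stack| <= h.
# Since there are only finitely many such configurations, the strings a PDA
# accepts without ever exceeding stack height h form a regular language.

from itertools import product

def bounded_configs(states, stack_alphabet, h):
    """Return all (state, stack) pairs with stack height at most h."""
    configs = []
    for state in states:
        for n in range(h + 1):
            for stack in product(stack_alphabet, repeat=n):
                configs.append((state, stack))
    return configs

# 2 states, 2 stack symbols, height bound 2: 2 * (1 + 2 + 4) configurations.
assert len(bounded_configs({"q0", "q1"}, "AB", 2)) == 2 * (1 + 2 + 4)
```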
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPr3E1v48fiqclRl3sBFgFjyMYu-DJLiBA6Cae6Uh7K5IgfZRgml6nMNCrNgacIfU9ew4a3d2phLuQWGYF3DGVlFuSQ9MSllhDr6kpVBQL4no2THnyzytg2PQCmh1IskNYqKQMvcCmGen_/s1600/pumping.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPr3E1v48fiqclRl3sBFgFjyMYu-DJLiBA6Cae6Uh7K5IgfZRgml6nMNCrNgacIfU9ew4a3d2phLuQWGYF3DGVlFuSQ9MSllhDr6kpVBQL4no2THnyzytg2PQCmh1IskNYqKQMvcCmGen_/s320/pumping.png" /></a></div>I was always fond of this machine-based proof, and I even mentioned it when I presented my paper on <a href="http://link.springer.com/chapter/10.1007%2F978-3-642-02737-6_25">the pumping lemma for well-nested multiple context-free languages</a> (<a href="http://research.nii.ac.jp/~kanazawa/publications/well-nested.pdf">PDF</a>), first in Haifa in 2008, and then in Stuttgart in 2009. Unfortunately, I didn't keep the draft version of the book, but I'm fairly confident my reconstruction of the proof is the same as the one Floyd and Beigel had in the draft.<br />
<br />
Recently, I had a chance to think about this proof again, when I was writing <a href="http://www.illc.uva.nl/AC/AC2013/uploaded_files/inlineitem/18_Kanazawa.pdf">my just-finished paper</a> for the <a href="http://www.illc.uva.nl/AC/AC2013/">19th Amsterdam Colloquium</a>. The proof of one direction of the main result of this paper uses a pumping lemma for <em>deterministic</em> context-free languages from <a href="http://www.amazon.com/Introduction-Language-Addison-Wesley-computer-science/dp/0201029553">Michael Harrison's book</a> (Theorems 11.8.2 and 11.8.3). (It seems the result first appeared in <a href="http://dl.acm.org/citation.cfm?doid=321850.321851">Havel and Harrison's 1974 paper</a>.)<br />
<br />
Harrison proves his pumping lemma by analyzing derivation trees of strict deterministic grammars. I confess to not having read his proof, but it seems to me that it's much easier to prove it using deterministic PDA. In the case of deterministic PDA, you can't assume the stack is emptied before acceptance, so the bound to be exceeded in the proof is not about the stack height, but the <em>decrease</em> in stack height. Also, if you work with a model of PDA where the stack action is always sensitive to the top symbol of the stack, the bound becomes \((|Q|\cdot|\Gamma|)^2\), rather than \(|Q|^2\), where \(\Gamma\) is the stack alphabet. Other than that, the idea is essentially the same, and you can get a strong pumping property of the DCFLs easily.<br />
<br />
I actually wrote up this machine-based proof of Harrison's pumping lemma for my paper, but I had to cut it because the paper otherwise didn't fit within the 8-page limit.<br />
<br />
The thing is I've never seen the machine-based proof of the pumping lemma for CFL or DCFL anywhere else, and I don't know if it ever appeared in print. I noticed that Havel and Harrison refer to <a href="http://dl.acm.org/citation.cfm?id=805419">this 1969 paper</a> by William Ogden. This paper is mostly about stack languages, and the author doesn't say much about PDA in it (no explicit bound to be exceeded is mentioned), but <a href="http://www.ntis.gov/search/product.aspx?ABBR=AD682981">his 1968 dissertation</a> is titled "Intercalation theorems for pushdown store and stack languages", so it probably has a more detailed treatment there. I can't find Ogden's dissertation anywhere on the web, but I hope to obtain a copy at some point. (Incidentally, <a href="http://www.cs.cmu.edu/~scott/students.html">Ogden's Ph.D. advisor was Dana Scott</a>, and I think Robert Floyd became a professor at Stanford in 1969, so there's the connection.)<br />
<br />
(Note: The proof of the pumping lemma for linear indexed languages uses a similar argument. See <a href="http://link.springer.com/chapter/10.1007/978-3-642-37064-9_45">this paper by Shamir</a> or <a href="http://research.nii.ac.jp/~kanazawa/FormalGrammar/index.html">here (Lecture 4)</a>.)<br />
<br />
You can stop reading now (if you haven't already), but if you're interested, here's my write-up of the machine-based proof of (a corollary of) Harrison's theorem. (Thanks to <a href="http://www.mathjax.org">MathJax</a>, it's now fairly easy to copy-and-paste from a LaTeX source.)<br />
<br />
(January 1, 2014: The statement of the following theorem in the original post was in error.)<br />
<br />
<b>Theorem</b> (Harrison). Let $L \subseteq \Sigma^{\ast}$ be a DCFL. There exists a regular set $R \subseteq L$ satisfying the following property: for every $w \in L-R$, there exist $x_1,x_2,x_3,x_4,x_5$ such that<br />
<ol><li>$w = x_1 x_2 x_3 x_4 x_5$;</li>
<li>$x_2 x_4 \neq \epsilon$;</li>
<li>for every $z \in \Sigma^{\ast}$ and $n \in \mathbb{N}$, $x_1 x_2 x_3 x_4 z \in L$ if and only if $x_1 x_2^n x_3 x_4^n z \in L$.</li>
</ol>In the following proof, we write $(q,\gamma) \stackrel{a}{\Longrightarrow}_1 (q',\gamma')$ for $(a,q,\gamma) \vdash_M (\epsilon,q',\gamma')$, and $(q,\gamma) \stackrel{w}{\Longrightarrow} (q',\gamma')$ for $(w,q,\gamma) \vdash_M^{\ast} (\epsilon,q',\gamma')$. Note that $(q,\gamma) \stackrel{w}{\Longrightarrow} (q',\gamma') \stackrel{w'}{\Longrightarrow} (q'',\gamma'')$ implies $(q,\gamma) \stackrel{ww'}{\Longrightarrow} (q'',\gamma'')$.<br />
<br />
Let $M = (Q,\Sigma,\Gamma,\delta,q_0,Z_0,F)$ be a deterministic PDA that recognizes $L$. We may suppose without loss of generality that the computation of $M$ never blocks or enters into an infinite loop, and $(p,\gamma) \in \delta(q,a,Y)$ implies $\gamma \in \{ \epsilon \} \cup \Gamma \cup \Gamma Y$. Let $h = (|Q| \cdot |\Gamma|)^2$. We divide $L$ into two sets $L_0, L_1$, where $L_0$ is a regular set, as follows.<br />
\begin{gather*}
L_1 = \{\, w \in L \mid w = w_1 w_2 w_3, (q_0,Z_0) \stackrel{w_1}{\Longrightarrow} (q,\alpha\beta) \stackrel{w_2}{\Longrightarrow} (r,\beta), |\alpha| = h, |\beta| \geq 1 \,\},\\
L_0 = L - L_1.
\end{gather*}
The reason that $L_0$ is regular is that if the stack height never shrinks by more than $h$ symbols, then all but the top $h$ symbols on the stack can be discarded, and the resulting bounded-height stack can be simulated by the finite control.<br />
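The truncated-stack idea behind the regularity of $L_0$ can be sketched as follows (the helper names are mine; this is an illustration of the simulation, not the actual finite-control construction):

```python
# Sketch of the finite-control simulation: if the stack never shrinks by more
# than h symbols, only the top h symbols are ever consulted, so we may keep a
# truncated stack of length <= h, silently discarding what falls off the bottom.

def push(trunc, symbol, h):
    """Push onto a truncated stack, forgetting anything below depth h."""
    return ((symbol,) + trunc)[:h]

def pop(trunc):
    """Pop the truncated stack (sound as long as the net decrease stays < h)."""
    return trunc[1:]

s = ()
for x in "XYZ":            # push X, then Y, then Z, with bound h = 2
    s = push(s, x, 2)
assert s == ("Z", "Y")     # "X" fell off the bottom and is forgotten
```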
<br />
Suppose $w \in L_1$. Let $m$ be the number of steps that $M$ takes until it accepts $w$. Let $\gamma_0 = Z_0$, the initial stack content, and for each $i=1,\dots,m$, let $q_i$ and $\gamma_i$ be the state and the stack content of $M$ after the $i$th step. We can write $w = a_1 \dots a_m$, with $a_i \in \Sigma \cup \{ \epsilon \}$, so that<br />
\[
(q_{i-1},\gamma_{i-1}) \stackrel{a_i}{\Longrightarrow}_1 (q_i,\gamma_i).
\]
Since $w \in L_1$, we must have $\gamma_k = \alpha \gamma_l$ and $\gamma_l \neq \epsilon$ for some $k < l \leq m$ and $\alpha \in \Gamma^h$. We may suppose $l$ to be the least number with this property. This implies $|\gamma_i| \leq |\gamma_k|$ for $i=0,\dots,l$.
Now for each $i \in [0,h]$, we mark the last moment up to $k$ as well as the first moment beyond $k$ when the stack has height $|\gamma_l|+i$. More precisely, for each $i \in [0,h]$, let $f(i)$ be the greatest number in $[0,k]$ such that $|\gamma_{f(i)}| = |\gamma_l|+i$, and $g(i)$ be the least number in $[k,l]$ such that $|\gamma_{g(i)}| = |\gamma_l|+i$. Because of the assumption about $\delta$, the stack height changes by at most 1 at each step, and $f(i)$ and $g(i)$ are always defined. Because of the way $f$ and $g$ are defined, $f(i) < j < g(i)$ implies $|\gamma_j| > |\gamma_{f(i)}| = |\gamma_{g(i)}| = |\gamma_l| + i$. (<b>Update</b> April 14, 2016: Thanks to Johannes Osterholzer for spotting an error here in the original post, which is now corrected.) For $i \in [0,l]$, let $Z_i$ be the top symbol of $\gamma_i$. Since $h = (|Q| \cdot |\Gamma|)^2$, there must be $i,j \in [0,h]$ such that $i < j$ and
\[
(q_{f(i)},Z_{f(i)}) = (q_{f(j)},Z_{f(j)}),
\quad
(q_{g(j)},Z_{g(j)}) = (q_{g(i)},Z_{g(i)}).
\]
This implies
\[
\gamma_{f(i)} = \gamma_{g(i)},\quad
\gamma_{f(j)} = \gamma_{g(j)} = \delta \gamma_{f(i)}
\]
for some $\delta \in \Gamma^+$. Let
\[
x_1 = a_1 \dots a_{f(i)},\quad
x_2 = a_{f(i)+1} \dots a_{f(j)},\quad
x_3 = a_{f(j)+1} \dots a_{g(j)},\quad
x_4 = a_{g(j)+1} \dots a_{g(i)}.
\]
Then we have
\[
(q_{f(i)},\gamma_{f(i)}) \stackrel{x_2^n}{\Longrightarrow} (q_{f(i)},\delta^n\gamma_{f(i)}) \stackrel{x_3}{\Longrightarrow} (q_{g(j)},\delta^n\gamma_{f(i)}) \stackrel{x_4^n}{\Longrightarrow} (q_{g(j)},\gamma_{f(i)})
\]
for all $n \geq 0$. Since $M$ never enters into an infinite loop, we must have $x_2 \neq \epsilon$. Let $w'$ be such that $w = a_1 \dots a_l w'$ and let $x_5 = a_{g(i)+1} \dots a_l w'$. It is now easy to verify the conditions 1–3 of the theorem.
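The markings $f$ and $g$ used in the proof are easy to compute from the sequence of stack heights. A sketch, assuming the heights $|\gamma_0|,\dots,|\gamma_l|$ are given as a list (the function `markings` is my illustration, not part of the proof):

```python
# Given the stack heights and the indices k < l from the proof, f(i) is the
# last moment up to k at which the height is |gamma_l| + i, and g(i) is the
# first such moment in [k, l].

def markings(heights, k, l, h):
    base = heights[l]
    f = {i: max(j for j in range(k + 1) if heights[j] == base + i)
         for i in range(h + 1)}
    g = {i: min(j for j in range(k, l + 1) if heights[j] == base + i)
         for i in range(h + 1)}
    return f, g

# Toy profile with h = 2: the height rises to 3 at moment k = 3 and falls
# back to 1 at moment l = 5, changing by at most 1 per step.
heights = [1, 2, 3, 3, 2, 1]
f, g = markings(heights, 3, 5, 2)
assert f == {0: 0, 1: 1, 2: 3} and g == {0: 5, 1: 4, 2: 3}
```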
<p><br />
<a name="update"><b>Update</b> (December 4, 2013).</a> It has occurred to me that the difficulty with the machine-based approach lies in the proof of Ogden's lemma (what Harrison calls the "iteration theorem"). What I called "Harrison's theorem" above is really a weakened form corresponding to the usual pumping lemma. <a href="http://www.sciencedirect.com/science/article/pii/0304397580900523#">This paper by King</a> says "Ogden [in his Ph.D. thesis] gave an iteration theorem for the family of deterministic context-free languages" and Harrison and Havel gave "a new proof of Ogden’s result for […] deterministic context-free languages." But <a href="http://www.eecs.berkeley.edu/Pubs/TechRpts/1986/CSD-86-271.pdf">this 1985 technical report by Harrison</a> says:<br />
<blockquote>"The key to giving a rigorous proof for deterministic iteration theorems is to use trees, not machines. This was a problem in [Ogden's Ph.D. thesis] where a deterministic iteration theorem was first expressed but not proved rigorously. It turns out to be easy to do with strict deterministic grammars." (page 9)<br />
</blockquote>Problem 5 in Section 6.2 of <a href="http://www.amazon.com/Introduction-Language-Addison-Wesley-computer-science/dp/0201029553">Harrison's book</a> asks for "a pda proof of the iteration theorem". Intriguingly, the next problem (Problem 6) is posed as a question "Using the result of Problem 5, can you find an iteration theorem for deterministic context-free languages?"<br />
</p><p><a name="correction"><b>Correction</b> (January 1, 2014)</a>. The statement of "Harrison's theorem" in the original post was in error and read as follows:<br />
</p><p><b>Theorem</b> (Harrison). Let $L \subseteq \Sigma^{\ast}$ be a DCFL. There exists a positive integer $p$ satisfying the following property: for every $w \in L$ with $|w| \geq p$, there exist $x_1,x_2,x_3,x_4,x_5$ such that<br />
<ol><li>$w = x_1 x_2 x_3 x_4 x_5$;</li>
<li>$x_2 x_4 \neq \epsilon$;</li>
<li>for every $z \in \Sigma^{\ast}$ and $n \in \mathbb{N}$, $x_1 x_2 x_3 x_4 z \in L$ if and only if $x_1 x_2^n x_3 x_4^n z \in L$.</li>
</ol>This is not even true; $L = \{\, w \in \{ a,b \}^{\ast} \mid \#_a(w) \geq \#_b(w) \,\}$ and $w = a^p$ constitute a counterexample. It seems I somehow managed to mix up the two versions of Harrison's iteration theorems that apply to two different classes of languages. The main result of <a href="http://www.illc.uva.nl/AC/AC2013/uploaded_files/inlineitem/18_Kanazawa.pdf">my Amsterdam Colloquium paper</a> is not affected, since it suffices to be able to find one $w \in L$ that satisfies the conditions of the theorem.<br />
</p>
<b>Incompleteness via Tarski's Theorem</b> (December 20, 2010)<p>
Many textbooks, including <a href="http://makotokanazawa.blogspot.com/2010/03/mathematical-logic-and-computability.html">the one I'm using</a>, present an "easy" proof of Gödel's First Incompleteness Theorem via Tarski's Theorem on undefinability of truth, before presenting a proof more in line with Gödel's original proof. Tarski's Theorem (or a version thereof) states that the set <i>TR</i> of Gödel numbers of sentences of first-order arithmetic that are true in the standard model is not "arithmetical", i.e., is not definable by any formula of first-order arithmetic. In contrast, the set <i>PV</i> of Gödel numbers of sentences provable in any recursively axiomatized theory <i>is</i> arithmetical and has a Σ<sub>1</sub> defining formula. Therefore, this proof goes, the two sets can't be equal, and there must exist a true sentence that is not provable, unless the theory proves some false sentence.
</p>
<p>
In my logic class, I thought of mentioning the proof via Tarski's Theorem before going into any technicalities about arithmetization. I was going to actually prove Tarski's Theorem and only state that <i>PV</i> is arithmetical, leaving the proof of the latter fact for the next few lectures.
</p>
<p>
But then I realized the problem with this approach — Tarski's Theorem is not that easy to
prove. The proof given in Keisler et al.'s textbook uses (a weak version of) the Fixed Point Theorem, which depends on the fact that the "diagonal function" is arithmetical, which in turn relies on the fact that all computable functions are arithmetical, and ultimately rests on the existence of the Gödel beta function. In other words, this proof involves almost all machinery used in Gödel's original proof.
</p>
<p>
It is possible to prove the arithmetical definability (or representability) of the diagonal function (or some such devices needed to prove Tarski's Theorem) without using the beta function, as is done in Boolos and Jeffrey's book (at least until the third edition), but this depends on the details of the Gödel numbering of formulas, and does not work with the method used in Keisler et al.'s book.
</p>
<p>
So, rather than prove Tarski's Theorem, what I did in my class was to prove the arithmetical undefinability of the "satisfaction relation"
</p>
<p>
SAT = { (#(<i>A</i>(<i>x</i>)), <i>n</i>) | <i>A</i>(<b>n</b>) is true },
</p>
<p>
which holds of all pairs whose first component is the Gödel number of a formula and whose second component is a number that the formula is true of in the standard model. (Here, <b>n</b> is the numeral representing the natural number <i>n</i>.) This has an easy three-line proof, does not require the diagonal function on natural numbers, and, as noted <a href="http://antimeta.wordpress.com/2007/07/23/coding-truth-and-satisfaction/">here</a>, makes no assumption on the nature of the Gödel numbering (except injectivity) or on the vocabulary from which formulas are built.
</p>
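For concreteness, that three-line proof can be reconstructed as the usual diagonal argument (my paraphrase; $S(x,y)$ is a hypothetical formula assumed to define the relation):

```latex
% Suppose S(x, y) defines SAT. Let B(x) := \neg S(x, x) and b := #(B(x)).
\begin{align*}
B(\mathbf{b}) \text{ is true}
  &\iff S(\mathbf{b}, \mathbf{b}) \text{ is false}
      && \text{since } B(x) = \neg S(x, x)\\
  &\iff (b, b) \notin \mathit{SAT}
      && \text{since } S \text{ defines } \mathit{SAT}\\
  &\iff B(\mathbf{b}) \text{ is not true}
      && \text{by the definition of } \mathit{SAT},
\end{align*}
% a contradiction, so no such formula S exists.
```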
<p>
To prove the incompleteness using the undefinability of <i>SAT</i>, all you need is to prove the arithmetical definability of the "provable satisfaction" relation
</p>
<p>
<i>PV</i><sub>2</sub> = { (#(<i>A</i>(<i>x</i>)), <i>n</i>) | <i>A</i>(<b>n</b>) is provable },
</p>
<p>
which, of course, hardly requires any more work than showing the arithmetical definability of the set <i>PV</i> of Gödel numbers of provable sentences. If no false sentence is provable, then <i>PV</i><sub>2</sub> must be a proper subset of <i>SAT</i>, and it follows that there is a true sentence that is not provable.
</p>
<p>
This is of course a rather trivial point, but I find it a little curious that the undefinability of <i>SAT</i> does not seem to be mentioned very often, not even in a book like Smullyan's <i>Gödel's Incompleteness Theorems</i>.
</p>
<b>Labeled Bracketings and the Chomsky-Schützenberger Theorem</b> (May 22, 2010)<p>
I always thought that the Chomsky-Schützenberger Theorem is an easy consequence of the fact that for every CFG <i>G</i>, the strings generated by <i>G</i> are precisely the yields of the parse trees of <i>G</i>. Until recently, I did not have a good understanding of just how easy a consequence it is.
</p>
<p>
The theorem is usually stated as follows:
</p>
<p>
<b>Theorem</b> (Chomsky-Schützenberger). For every context-free language <i>L</i>, there exist a positive integer <i>n</i>, a homomorphism <i>h</i>, and a regular set <i>R</i> such that
</p>
<p>
<i>L</i> = <i>h</i>(<i>D</i><sub><i>n</i></sub> ∩ <i>R</i>),
</p>
<p>
where <i>D</i><sub><i>n</i></sub> is the Dyck language over the set Δ<sub><i>n</i></sub> of <i>n</i> pairs of parentheses.
</p>
<p>
In fact, the theorem can be strengthened to require that the set <i>R</i> be <em>local</em> in the sense that there are sets <i>J</i><sub>1</sub>, <i>J</i><sub>2</sub> ⊆ Δ<sub><i>n</i></sub> and <i>I</i> ⊆ Δ<sub><i>n</i></sub><sup>2</sup> such that
</p>
<p>
<i>R</i> = (<i>J</i><sub>1</sub> Δ<sub><i>n</i></sub><sup>*</sup> ∩ Δ<sub><i>n</i></sub><sup>*</sup> <i>J</i><sub>2</sub>) − Δ<sub><i>n</i></sub><sup>*</sup> (Δ<sub><i>n</i></sub><sup>2</sup> − <i>I</i>) Δ<sub><i>n</i></sub><sup>*</sup>.
</p>
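Membership in a local set of this form amounts to three checks, which can be sketched in a few lines (the function `in_local_set` is my illustration, not from Chomsky's paper):

```python
# Membership in R = (J1 D* ∩ D* J2) - D* (D^2 - I) D*: the first symbol must
# lie in J1, the last in J2, and every length-2 factor in I.

def in_local_set(word, J1, J2, I):
    return (len(word) > 0
            and word[0] in J1
            and word[-1] in J2
            and all(word[i:i + 2] in I for i in range(len(word) - 1)))

# Toy example over the single bracket pair "(" and ")":
assert in_local_set("(())", {"("}, {")"}, {"((", "()", "))"})
assert not in_local_set("()()", {"("}, {")"}, {"((", "()", "))"})
```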
<p>
This strengthening is often not mentioned in textbooks; it is explicitly stated in Chomsky's (1963) "Formal properties of grammars". Another fact mentioned by Chomsky (1963) is that <i>D</i><sub><i>n</i></sub> may be replaced with the set <i>D</i>′<sub><i>n</i></sub> of <em>Dyck primes</em>, which are elements of <i>D</i><sub><i>n</i></sub> whose first and last symbols form a matching pair of parentheses. (Also, the homomorphism <i>h</i> is in fact <em>alphabetic</em> in the sense that it maps each symbol in Δ<sub><i>n</i></sub> either to a symbol or to the empty string, but this is always clear from the proof.)
</p>
<p>
I had thought that the set <i>D</i><sub><i>n</i></sub> ∩ <i>R</i> in the statement of the theorem can just be taken to be an alternative representation of the set <i>PT</i>(<i>G</i>) of parse trees of any CFG <i>G</i> for <i>L</i>. This is essentially correct, but not literally true. Chomsky expressed the qualification as restrictions on the rules of <i>G</i> (see below).
</p>
<p>
Proofs of the theorem given in some textbooks (e.g., Salomaa 1972 and Berstel 1979) fail to make any connection between <i>D</i><sub><i>n</i></sub> ∩ <i>R</i> and <i>PT</i>(<i>G</i>), thus bypassing a main concern for Chomsky. The proof in Kozen's (1997) <i>Automata and Computability</i>—perhaps the clearest textbook treatment of the theorem—makes a clear connection between the two sets without explicitly mentioning it. (Another easy line of approach to the theorem, via pushdown automata (e.g., Harrison 1978), only makes an indirect connection between the two sets, through PDA computations.)
</p>
<p>
Here's how I came to view the connection. Recall that a set <i>L</i> of trees over some finite alphabet Γ of labels is <em>local</em> if there are a set <i>J</i> ⊆ Γ and a finite set <i>I</i> ⊆ Γ<sup>+</sup> such that any tree <i>T</i> is in <i>L</i> if and only if it satisfies the following conditions:
<ul>
<li>the label of the root is in <i>J</i>, and
<li>for every node, the string obtained by concatenating its label with the labels of its children is in <i>I</i>.
</ul>
As is well-known, the set <i>PT</i>(<i>G</i>) is local for any CFG <i>G</i>.
</p>
<p>
Let τ be the obvious translation that maps a tree to its labeled bracketing representation—that is to say, if <i>T</i> is a tree with root label σ and immediate subtrees <i>T</i><sub>1</sub>, …, <i>T</i><sub><i>n</i></sub>, then τ(<i>T</i>) = [<sub>σ</sub> τ(<i>T</i><sub>1</sub>) … τ(<i>T</i><sub><i>n</i></sub>) ]<sub>σ</sub>. If <i>T</i> is a tree over Γ with |Γ| = <i>n</i>, then τ(<i>T</i>) is an element of <i>D</i>′<sub><i>n</i></sub>, where each pair of parentheses in Δ<sub><i>n</i></sub> corresponds to a symbol in Γ. Let us call a set <i>L</i> of trees over Γ <em>super-local</em> if τ(<i>L</i>) = <i>D</i>′<sub><i>n</i></sub> ∩ <i>R</i> for some local set <i>R</i> ⊆ Δ<sub><i>n</i></sub><sup>*</sup>. It is clear that a set <i>L</i> of trees is super-local if there are sets <i>J</i><sub>1</sub>, <i>J</i><sub>2</sub> ⊆ Γ, and <i>I</i><sub>1</sub>, <i>I</i><sub>2</sub>, <i>I</i><sub>3</sub> ⊆ Γ<sup>2</sup> such that for every tree <i>T</i>, <i>T</i> is in <i>L</i> if and only if it satisfies the following conditions:
<ul>
<li>the label of the root is in <i>J</i><sub>1</sub>,
<li>the label of every leaf is in <i>J</i><sub>2</sub>,
<li>if a node labeled by <i>Y</i> is the first child of a node labeled by <i>X</i>, then <i>X</i><i>Y</i> ∈ <i>I</i><sub>1</sub>,
<li>if a node labeled by <i>Y</i> is the last child of a node labeled by <i>X</i>, then <i>Y</i><i>X</i> ∈ <i>I</i><sub>2</sub>, and
<li>if a node labeled by <i>X</i> and a node labeled by <i>Y</i> are adjacent siblings (in this order), then <i>X</i><i>Y</i> ∈ <i>I</i><sub>3</sub>.
</ul>
</p>
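The five conditions translate directly into code. A sketch (illustrative, with the same (label, children) encoding of trees and single-character labels as above):

```python
def satisfies_superlocal(tree, J1, J2, I1, I2, I3):
    """Check the five conditions characterizing a super-local set."""
    if tree[0] not in J1:              # root label in J1
        return False
    def ok(node):
        X, kids = node
        if not kids:                   # leaf label in J2
            return X in J2
        ls = [k[0] for k in kids]
        return (X + ls[0] in I1 and                  # first child:  XY in I1
                ls[-1] + X in I2 and                 # last child:   YX in I2
                all(ls[i] + ls[i + 1] in I3          # adjacent siblings
                    for i in range(len(ls) - 1)) and
                all(ok(k) for k in kids))
    return ok(tree)

# Monadic S-over-a trees: root S, leaf a, and no sibling pairs allowed.
J1, J2, I1, I2, I3 = {'S'}, {'a'}, {'Sa'}, {'aS'}, set()
print(satisfies_superlocal(('S', [('a', [])]), J1, J2, I1, I2, I3))              # True
print(satisfies_superlocal(('S', [('a', []), ('a', [])]), J1, J2, I1, I2, I3))   # False
```

The second tree fails only the sibling condition, since <i>I</i><sub>3</sub> is empty.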
<p>
Clearly, any super-local set of trees of bounded out-degree must be local, but it is easy to see that there are local sets, including sets of the form <i>PT</i>(<i>G</i>), that are not super-local. The set of "rule trees" of a CFG, where a rule, rather than a nonterminal, labels each node, need not be super-local either.
</p>
<p>
The strengthened form of the Chomsky-Schützenberger Theorem is an immediate consequence of the following easy lemma:
</p>
<p>
<b>Lemma.</b> If <i>L</i> is a local set of trees over Γ, then there exist a super-local set <i>L</i>′ over Γ′ and a relabeling <i>g</i>: Γ′ → Γ such that <i>g</i> maps <i>L</i>′ onto <i>L</i>.
</p>
<p>
The idea is to add to the label of each node information about the labels of all its siblings. This construction can be seen to underlie Kozen's (1997) proof of the Chomsky-Schützenberger Theorem. While a "rule tree" of a CFG labels each node by the rule expanding it, the tree used by Kozen puts information about the rule on the children of the node expanded by the rule. It is in fact sufficient for each child to be labeled by the right-hand side of the rule expanding its parent (plus its position among its siblings).
</p>
<p>
<i>Proof.</i> Let <i>J</i> ⊆ Γ, <i>I</i> ⊆ Γ<sup>+</sup> be the finite sets witnessing the locality of <i>L</i>. Let
</p>
<p>
Γ′ = { [<i>X</i>]<sub>1</sub> | <i>X</i> ∈ <i>J</i> } ∪ { [α]<sub><i>i</i></sub> | <i>X</i> ∈ Γ, <i>X</i>α ∈ <i>I</i>, 1 ≤ <i>i</i> ≤ |α| },
</p>
<p>
and set
</p>
<p>
<i>J</i><sub>1</sub> = { [<i>X</i>]<sub>1</sub> | <i>X</i> ∈ <i>J</i> },<br />
<i>J</i><sub>2</sub> = { [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub> | <i>X</i><sub><i>i</i></sub> ∈ <i>I</i> },<br />
<i>I</i><sub>1</sub> = { [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub> [<i>Y</i><sub>1</sub> … <i>Y</i><sub><i>m</i></sub>]<sub>1</sub> | <i>X</i><sub><i>i</i></sub> <i>Y</i><sub>1</sub> … <i>Y</i><sub><i>m</i></sub> ∈ <i>I</i>, <i>m</i> ≥ 1 },<br />
<i>I</i><sub>2</sub> = { [<i>Y</i><sub>1</sub> … <i>Y</i><sub><i>m</i></sub>]<sub><i>m</i></sub> [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub> | <i>X</i><sub><i>i</i></sub> <i>Y</i><sub>1</sub> … <i>Y</i><sub><i>m</i></sub> ∈ <i>I</i>, <i>m</i> ≥ 1 },<br />
<i>I</i><sub>3</sub> = { [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub> [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i>+1</sub> | 1 ≤ <i>i</i> < <i>l</i>}.
</p>
<p>
Let <i>L</i>′ be the super-local set determined by <i>J</i><sub>1</sub>, <i>J</i><sub>2</sub>, <i>I</i><sub>1</sub>, <i>I</i><sub>2</sub>, <i>I</i><sub>3</sub>, and let
</p>
<p>
<i>g</i>([<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub>) = <i>X</i><sub><i>i</i></sub>.
</p>
<p>
Now it is clear that <i>g</i> gives a one-one correspondence between the trees in <i>L</i>′ and the trees in <i>L</i>.
</p>
<p>
Note that since the conditions given by <i>I</i><sub>1</sub>, <i>I</i><sub>2</sub>, <i>I</i><sub>3</sub> in the above proof are not independent, the definition of either <i>I</i><sub>1</sub> or <i>I</i><sub>2</sub> (but not both) may be changed to drop the part that says <i>X</i><sub><i>i</i></sub><i>Y</i><sub>1</sub> … <i>Y</i><sub><i>m</i></sub> ∈ <i>I</i>, without thereby affecting the resulting super-local set <i>L</i>′. If we wish to have τ(<i>L</i>′) = <i>D</i><sub><i>n</i></sub> ∩ <i>R</i>, rather than τ(<i>L</i>′) = <i>D</i>′<sub><i>n</i></sub> ∩ <i>R</i>, that's possible too by changing the label of the root of every tree to a special symbol that only appears at the root.
</p>
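The construction in the proof can be carried out mechanically. A sketch (illustrative encoding: labels are single characters, and the primed label [<i>X</i><sub>1</sub> … <i>X</i><sub><i>l</i></sub>]<sub><i>i</i></sub> is encoded as the pair ('X1…Xl', i)):

```python
from itertools import product

def superlocalize(J, I):
    """Build Gamma', g, and the sets J1, J2, I1, I2, I3 of the lemma."""
    Gp = {(X, 1) for X in J} | {(w[1:], i) for w in I if len(w) > 1
                                for i in range(1, len(w))}
    g = {a: a[0][a[1] - 1] for a in Gp}              # g([X1…Xl]_i) = Xi
    J1 = {(X, 1) for X in J}
    J2 = {a for a in Gp if g[a] in I}
    I1 = {(a, b) for a, b in product(Gp, Gp)         # b is a first child
          if b[1] == 1 and g[a] + b[0] in I}
    I2 = {(b, a) for a, b in product(Gp, Gp)         # b is a last child
          if b[1] == len(b[0]) and g[a] + b[0] in I}
    I3 = {(a, (a[0], a[1] + 1)) for a in Gp if a[1] < len(a[0])}
    return Gp, g, J1, J2, I1, I2, I3

# For the local set of parse trees of S -> S S | a (J = {'S'},
# I = {'SSS', 'Sa', 'a'}), the only allowed leaf label is mapped to 'a':
Gp, g, J1, J2, I1, I2, I3 = superlocalize({'S'}, {'SSS', 'Sa', 'a'})
print(J2)  # {('a', 1)}
```

Here Γ′ has four labels, and <i>g</i> sends ('S', 1), ('SS', 1), ('SS', 2) to S and ('a', 1) to a, recovering the original tree from its relabeled version.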
<p>
In the above proof, each node label in a tree from <i>L</i> is changed to the sequence of original labels of all its siblings, including itself, plus its position among its siblings. Clearly, there are more economical ways of adding information to each node label to create a super-local set. For instance, we can use labels of the form <i>X</i><sub><i>i</i></sub> … <i>X</i><sub><i>m</i></sub> consisting of the node's original label followed by the original labels of all its <em>right</em> siblings.
</p>
<p>
Let us call a tree language <i>super-regular</i> if it can be expressed in the form τ<sup>-1</sup>(<i>D</i>′<sub><i>m</i></sub> ∩ <i>R</i>) with <i>R</i> <i>regular</i>. Every super-regular tree language is regular, since it can be recognized by a "left-to-right" one-visit tree-walking automaton (cf. Okhotin, Salomaa, and Domaratzki 2002). The converse is of course not true, and the super-regular tree languages don't even contain all the local tree languages. Here is an example of a Chomsky normal form CFG <i>G</i> such that <i>PT</i>(<i>G</i>) is not super-regular. This shows that the set <i>D</i><sub><i>n</i></sub> ∩ <i>R</i> in the Chomsky-Schützenberger Theorem, even in its weak form, cannot in general be taken to be τ(<i>PT</i>(<i>G</i>)), unless the grammar <i>G</i> satisfies some special conditions.
</p>
<p>
<i>S</i> → <i>A</i> <i>S</i>
<br />
<i>S</i> → <i>S</i> <i>B</i>
<br />
<i>S</i> → <i>a</i>
<br />
<i>A</i> → <i>a</i>
<br />
<i>B</i> → <i>a</i>
</p>
<p>
That the set of parse trees of this grammar is not super-regular can be seen as follows. Consider strings of the form
</p>
<p>
(*) [<sub><i>S</i></sub> <i>x</i><sub>1</sub> … [<sub><i>S</i></sub> <i>x</i><sub><i>k</i></sub> [<sub><i>S</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>S</i></sub> <i>y</i><sub><i>k</i></sub> ]<sub><i>S</i></sub> … <i>y</i><sub>1</sub> ]<sub><i>S</i></sub>, with <i>x</i><sub><i>i</i></sub> ∈ { [<sub><i>A</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>A</i></sub>, ε } and <i>y</i><sub><i>i</i></sub> ∈ { [<sub><i>B</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>B</i></sub>, ε } (<i>i</i> = 1, …, <i>k</i>).
</p>
<p>
Any string of the form (*) is in τ(<i>PT</i>(<i>G</i>)) if and only if for all <i>i</i> = 1, …, <i>k</i>, either
</p>
<p>
<i>x</i><sub><i>i</i></sub> = [<sub><i>A</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>A</i></sub> and <i>y</i><sub><i>i</i></sub> = ε
</p>
<p>
or
</p>
<p>
<i>x</i><sub><i>i</i></sub> = ε and <i>y</i><sub><i>i</i></sub> = [<sub><i>B</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>B</i></sub>.
</p>
<p>
This means that the 2<sup><i>k</i></sup> strings of the form
</p>
<p>
(**) [<sub><i>S</i></sub> <i>x</i><sub>1</sub> … [<sub><i>S</i></sub> <i>x</i><sub><i>k</i></sub>, with <i>x</i><sub><i>i</i></sub> ∈ { [<sub><i>A</i></sub> [<sub><i>a</i></sub> ]<sub><i>a</i></sub> ]<sub><i>A</i></sub>, ε } (<i>i</i> = 1, …, <i>k</i>)
</p>
<p>
are such that no two of them stand in the Myhill-Nerode right-congruence relation ≡<sub>τ(<i>PT</i>(<i>G</i>))</sub> associated with τ(<i>PT</i>(<i>G</i>)).
</p>
<p>
Now let <i>D</i>′<sub>4</sub> be the set of Dyck primes over { [<sub><i>S</i></sub>,]<sub><i>S</i></sub>,[<sub><i>A</i></sub>,]<sub><i>A</i></sub>,[<sub><i>B</i></sub>,]<sub><i>B</i></sub>,[<sub><i>a</i></sub>,]<sub><i>a</i></sub> }, and <i>R</i> be any regular set of strings. Clearly, all strings of the form (**) stand in the relation ≡<sub><i>D</i>′<sub>4</sub></sub> to each other. Since <i>R</i> is regular, ≡<sub><i>R</i></sub> has only finitely many equivalence classes. So if we choose large enough <i>k</i>, there are distinct strings <i>w</i><sub>1</sub>, <i>w</i><sub>2</sub> of the form (**) that stand in the relation ≡<sub><i>R</i></sub>. Since for any string languages <i>L</i><sub>1</sub> and <i>L</i><sub>2</sub>, ≡<sub><i>L</i><sub>1</sub> ∩ <i>L</i><sub>2</sub></sub> is a coarser equivalence relation than ≡<sub><i>L</i><sub>1</sub></sub> ∩ ≡<sub><i>L</i><sub>2</sub></sub>, the strings <i>w</i><sub>1</sub>, <i>w</i><sub>2</sub> must stand in the relation ≡<sub><i>D</i>′<sub>4</sub> ∩ <i>R</i></sub>. Therefore, we cannot have <i>D</i>′<sub>4</sub> ∩ <i>R</i> = τ(<i>PT</i>(<i>G</i>)) for any regular set <i>R</i>. Since τ(<i>PT</i>(<i>G</i>)) = <i>D</i>′<sub>4</sub> ∩ <i>R</i> is implied by τ(<i>PT</i>(<i>G</i>)) = <i>D</i><sub>4</sub> ∩ <i>R</i>, we cannot have the latter, either, for any regular <i>R</i>.
</p>
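The argument can be checked empirically on small cases. In this sketch (illustrative, not from the post), spine trees put the <i>A</i>-daughter to the left and the <i>B</i>-daughter to the right of the embedded <i>S</i>, matching the form (*), and two prefixes of form (**) are separated by a completing suffix:

```python
from itertools import product

def tau(tree):
    """Labeled bracketing of a tree given as a (label, children) pair."""
    label, children = tree
    return '[' + label + ''.join(tau(c) for c in children) + ']' + label

def spine_tree(bits):
    """The parse tree whose i-th spine level puts an A-daughter on the
    left when bits[i] is True and a B-daughter on the right otherwise."""
    t = ('S', [('a', [])])                     # innermost S -> a
    for b in reversed(bits):
        leaf = lambda X: (X, [('a', [])])
        t = ('S', [leaf('A'), t]) if b else ('S', [t, leaf('B')])
    return t

# tau(PT(G)) restricted to spines of depth < 4:
L = {tau(spine_tree(list(bits))) for n in range(4)
     for bits in product([True, False], repeat=n)}

p1 = '[S[A[a]a]A'      # prefix of form (**) with x1 = [A [a ]a ]A
p2 = '[S'              # prefix of form (**) with x1 = eps
s  = '[S[a]a]S]S'      # a suffix that completes p1 but not p2
print(p1 + s in L, p2 + s in L)  # True False
```

The suffix <code>s</code> separates the two prefixes, illustrating why no two strings of form (**) are right-congruent modulo τ(<i>PT</i>(<i>G</i>)).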
<p>
<strong>Update January 18, 2012</strong>: A similar proof appears in <a href="http://arxiv.org/abs/cs/0606062">Libkin 2006</a>.
</p>
<p>
When is <i>PT</i>(<i>G</i>) super-regular? Chomsky's (1963) proposed sufficient condition was that <i>G</i> be a <em>modified normal grammar</em> in the sense that <i>G</i> is in Chomsky normal form and contains no pair of rules
</p>
<p>
<i>A</i> → <i>B</i> <i>C</i><br />
<i>D</i> → <i>C</i> <i>E</i><br />
</p>
<p>
for any nonterminals <i>A</i>, <i>B</i>, <i>C</i>, <i>D</i>, <i>E</i>. However, Chomsky's proof that (in our words) <i>PT</i>(<i>G</i>) is super-regular for any modified normal grammar <i>G</i> does not work. Adding a further condition that says that if
</p>
<p>
<i>A</i> → <i>B</i> <i>C</i><br />
<i>A</i> → <i>D</i> <i>E</i><br />
<i>F</i> → <i>B</i> <i>E</i>
</p>
<p>
are rules of the grammar, then so is
</p>
<p>
<i>A</i> → <i>B</i> <i>E</i>
</p>
<p>
guarantees that <i>PT</i>(<i>G</i>) is super-local. I have so far been unable to tell whether there is any modified normal grammar <i>G</i> for which <i>PT</i>(<i>G</i>) is not super-regular.
</p>
<p>
<b>Mathematical Logic and Computability</b> (March 26, 2010)
</p>
I'll be teaching logic to graduate students in philosophy this coming semester. The textbook I'll use is H. Jerome Keisler et al.'s (1996) <a href="http://www.amazon.com/Mathematical-Computability-International-Applied-Mathematics/dp/0079129315/ref=sr_1_6?ie=UTF8&s=books&qid=1269577091&sr=8-6"><i>Mathematical Logic and Computability</i></a>. Despite its many typos, I like this book. It uses tableaus for proofs and register machines for computation. It contains a detailed treatment of Gödel's Incompleteness Theorems. The book's level is definitely better suited to my purposes than that of Boolos and Jeffrey's <i>Computability and Logic</i>. I'll see how much of the book I can cover in two semesters.<br />
<br />
The book is out of print, but I found a PDF version at the following URL:<br />
<br />
<a href="http://www.cs.mum.edu/courses/corazza/LogicAndComputability.pdf">http://www.cs.mum.edu/courses/corazza/LogicAndComputability.pdf</a><br />
<br />
This version is different from the older version that shows up in <a href="http://scholar.google.com/scholar?hl=en&q=author%3AKeisler+intitle%3A%22mathematical+logic+and+computability%22">Google Scholar</a>. It's not exactly the same as the printed book, but it's very close. (The title page of the PDF version lists Paul Corazza, H. Jerome Keisler, Kenneth Kunen, Terrence Millar, Arnold Miller, and Joel Robbin as the authors.)<br />
<br />
The <a href="http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.rml/1081878072">review by Leon Harkleroad</a> states that the definition of a recursively enumerable set in an exercise to Chapter 5 is incorrect. I see nothing wrong with the definition, which reads:<br />
<br />
A subset <i>A</i> ⊂ <b>N</b> is <b>recursively enumerable</b> (or r.e.) if <i>A</i> = ∅ or <i>A</i> is the range of a total computable function.
<br />
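The quoted definition can be illustrated with a toy dovetailing construction (a sketch, not from the book: <code>semi_decide</code> is a stand-in for an arbitrary semi-decision procedure, the pairing of inputs with step bounds is deliberately crude, and the "<i>A</i> = ∅ or" clause is what licenses picking a default element):

```python
def semi_decide(x, steps):
    """Toy semi-decision procedure: accepts even x after x 'steps'."""
    return x % 2 == 0 and steps >= x

def enumerator(default):
    """A total function whose range is the semi-decided set, obtained by
    dovetailing over (input, step-bound) pairs; when the procedure has
    not yet accepted, output a known element instead."""
    def f(n):
        x, steps = n % 1000, n // 1000   # crude pairing of (x, steps)
        return x if semi_decide(x, steps) else default
    return f

f = enumerator(0)  # 0 is a known element of the set
print(sorted({f(n) for n in range(5000)}))  # [0, 2, 4] (more appear as n grows)
```

Since the default element exists only when the set is nonempty, the empty set has to be admitted separately, exactly as in the definition.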
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdPRoOWmIMnpoe0f2bFkCJD5vX-6VcV7tf6xI8V_6qT_E7YYsNJuz8aNKsceLPAsyliqA-BvpTuoB8jQRekc0AerdF5KfFFUNFiAceodadUUEsC7QNfvMkZ4Hffu1yW0bS2Ais96RALguW/s1600/GNUMWIN.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="267" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdPRoOWmIMnpoe0f2bFkCJD5vX-6VcV7tf6xI8V_6qT_E7YYsNJuz8aNKsceLPAsyliqA-BvpTuoB8jQRekc0AerdF5KfFFUNFiAceodadUUEsC7QNfvMkZ4Hffu1yW0bS2Ais96RALguW/s320/GNUMWIN.png" width="320" /></a>The book comes with a software package designed for DOS and Windows. In the words of the authors, "The programs are copyrighted by the authors of this book and are part of the public domain." I can't find the software anywhere on the web, but fortunately I still keep the diskette that came with my copy of the book. I was able to run it on my Mac using <a href="http://www.codeweavers.com/products/cxmac/">CrossOver Mac</a>. With <a href="http://winebottler.kronenberg.org/">WineBottler</a> I had less success.
<p>
<b>Kit Fine's situation semantics</b> (March 2, 2010)
</p>
Kit Fine gave an invited talk at the <a href="http://abelard.flet.keio.ac.jp/ontology/index.php?InterOntology10">Third Interdisciplinary Ontology Conference</a> on Sunday, titled "A State-Based Approach to the Frame Problem". 
He gives a kind of situation semantics to the classical propositional language, and then extends it to a hybrid modal language for reasoning about change.<br /><br />A model for the propositional language is an ordered triple (S,≤,[ ]), where (S,≤) is a partially ordered set of <span style="font-style:italic;">states</span> (his term for partial situations), subject to certain conditions, and [ ] is a valuation function assigning each propositional variable a pair ([p]<sup>+</sup>,[p]<sup>-</sup>) of sets of verifying and falsifying states, subject to certain natural conditions. The recursive clauses for ∧ and ∨ are interesting:<br /><br />s verifies B ∧ C if for some t and u, t verifies B, u verifies C, and s = t+u (the l.u.b of {t,u}).<br />s falsifies B ∧ C if s falsifies B or s falsifies C or for some t and u, t falsifies B, u falsifies C, and s = t+u.<br /><br />s verifies B ∨ C if s verifies B or s verifies C or for some t and u, t verifies B, u verifies C, and s = t+u.<br />s falsifies B ∨ C if for some t and u, t falsifies B, u falsifies C, and s = t+u.<br /><br />Let (S,≤) be the upper semi-lattice generated by three states s<sub>1</sub>, s<sub>2</sub>, s<sub>3</sub>. Then (S,≤) is a state space according to Fine's definition. Let p<sub>1</sub>, p<sub>2</sub>, p<sub>3</sub> be propositional variables, and for each i = 1,2,3, let<br /><br />[p<sub>i</sub>]<sup>+</sup> = { s<sub>i</sub> }, [p<sub>i</sub>]<sup>-</sup> = ∅.<br /><br />Then (S,≤,[ ]) is a model. The formula (p<sub>1</sub> ∧ p<sub>2</sub>) ∨ p<sub>3</sub> is verified by the states s<sub>1</sub>+s<sub>2</sub>, s<sub>3</sub>, and s<sub>1</sub>+s<sub>2</sub>+s<sub>3</sub>, but not by any other states. 
This means that the recursive clauses do not preserve the following condition on the sets of verifying and falsifying states of a formula:<br /><br />s verifies A, u verifies A, and s ≤ t ≤ u imply t verifies A.<br />s falsifies A, u falsifies A, and s ≤ t ≤ u imply t falsifies A.
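Fine's verification clauses are easy to run on this model. A sketch (illustrative: states are encoded as nonempty subsets of {1, 2, 3} with join = union, and only the positive clauses are implemented, since every [p<sub>i</sub>]<sup>-</sup> is empty here):

```python
def verifiers(A, val):
    """States exactly verifying A under Fine's clauses. Formulas are
    ('p', i), ('and', B, C), ('or', B, C); states are frozensets, and
    t + u is the union t | u."""
    if A[0] == 'p':
        return val[A[1]]
    op, B, C = A
    vB, vC = verifiers(B, val), verifiers(C, val)
    sums = {b | c for b in vB for c in vC}      # { t + u : t in vB, u in vC }
    return sums if op == 'and' else vB | vC | sums

# The state space generated by s1, s2, s3, with [p_i]+ = {s_i}:
s1, s2, s3 = frozenset({1}), frozenset({2}), frozenset({3})
val = {1: {s1}, 2: {s2}, 3: {s3}}
f = ('or', ('and', ('p', 1), ('p', 2)), ('p', 3))

print(verifiers(f, val) == {s1 | s2, s3, s1 | s2 | s3})  # True
print(s1 | s3 in verifiers(f, val))  # False: s3 <= s1+s3 <= s1+s2+s3,
                                     # yet s1+s3 is not a verifier
```

The second check exhibits the failure of the convexity condition directly: s<sub>3</sub> and s<sub>1</sub>+s<sub>2</sub>+s<sub>3</sub> verify the formula, but the intermediate state s<sub>1</sub>+s<sub>3</sub> does not.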