%% -*- coding:utf-8 -*-
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% $RCSfile: grammatiktheorie-include.tex,v $
%% $Revision: 1.13 $
%% $Date: 2010/11/16 08:40:32 $
%% Author: Stefan Mueller (CL Uni-Bremen)
%% Purpose:
%% Language: LaTeX
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{German clause structure}
\label{chap-german-sentence-structure}
This chapter deals with the basic sentence structure of German. Section~\ref{sec-german-order}
introduces the phenomena that have to be covered. As Brigitta Haftka formulated it in the title of
her paper, German is a verb second language with verb last order and free constituent order
\citep{Haftka96a}. This sounds contradictory at first, but as will be shown in the following section,
these three properties are indeed independent. I first motivate the categorization of German as
an SOV language in Section~\ref{sec-german-sov}, then I discuss the free constituent order (Section~\ref{sec-free-order-phen}) and the V2
property (Section~\ref{sec-v2-phen}). Verbal complexes interact with free constituent order and are discussed in
Section~\ref{sec-vc-phen}. Frontings of parts of the verbal complex and non-verbal arguments are discussed in
Section~\ref{sec-pvp-phen}.
Section~\ref{sec-analysis-v1-v2} provides the analysis of these phenomena.
\section{The phenomenon}
\label{sec-german-order}
(\mex{1}) provides examples of the main clause types in German: (\mex{1}a) is a verb last (VL) sentence,
(\mex{1}b) is a verb first (V1) sentence, and (\mex{1}c) a verb second (V2) sentence:
\eal
\ex
\gll dass Peter Maria ein Buch gibt\\
that Peter Maria a book gives\\
\glt `that Peter gives a book to Maria'
\ex
\gll Gibt Peter Maria ein Buch?\\
gives Peter Maria a book\\
\glt `Does Peter give a book to Maria?'
\ex
\gll Peter gibt Maria ein Buch.\\
Peter gives Maria a book\\
\glt `Peter gives a book to Maria.'
\zl
The following subsections deal with all of these sentence types and address the question of whether one
of them is basic.
\subsection{German as an SOV language}
\label{sec-german-sov}
It is assumed by many researchers that German is an SOV language, although this order is only
visible in embedded clauses like (\mex{0}a) and not in yes/no questions like (\mex{0}b) and
declarative main clauses like (\mex{0}c). The reason for this assumption is that German patterns
with many SOV languages and differs from SVO languages (for example Scandinavian languages). The
analysis of German as an SOV language is almost as old as Transformational Grammar: it was first
suggested by \citet*[\page34]{Bierwisch63a}.
Bierwisch attributes the assumption of an underlying verb"=final order to \citet{Fourquet57a}. A German translation of the
French manuscript cited by Bierwisch can be found in \citew[\page117--135]{Fourquet70a}. For other proposals, see \citew{Bach62a},
\citew{Reis74a}, \citew{Koster75a}, and \citew[Chapter~1]{Thiersch78a}.
%, \citet{denBesten83a} wohl angeblich schon 77 am MIT
Analyses which assume that
German has an underlying SOV pattern were also suggested in \gpsg \citep[\page110]{Jacobs86a},
LFG \citep[Section~2.1.4]{Berman96a-u} and HPSG (\citealp*[Section~4.7]{KW91a}; \citealp{Oliva92a}; \citealp*{Netter92};
\citealp*{Kiss93}; \citealp*{Frank94}; \citealp*{Kiss95a}; \citealp{Feldhaus97};
\citealp{Meurers2000b}; \citealp{Mueller2005c}).
The assumption of verb"=final order\label{page-verbletzt} as the base order is motivated by the following observations:\footnote{%
For points 1 and 2, see \citew[\page34--36]{Bierwisch63a}. For point~\ref{SOV-Skopus} see \citew[Section~2.3]{Netter92}.%
}
\begin{enumerate}
\item Verb particles form a close unit with the verb.
\eal
\ex
\gll weil er morgen an-fängt\\
because he tomorrow \textsc{prt}-starts\\
\glt `because he is starting tomorrow'
\ex
\gll Er fängt morgen an.\\
he starts tomorrow \textsc{prt}\\
\glt `He is starting tomorrow.'
\zl
This unit can only be seen in verb"=final structures, which suggests that this structure reflects the base order.
\item Verbs formed by back-formation often cannot be separated.
Verbs which are derived from a noun by back-formation\is{back-formation} (\eg \emph{uraufführen}
`to perform something for the first time') often cannot be divided into their component parts, and
V2 clauses are therefore ruled out (this was first mentioned by \citet{Hoehle91b} in unpublished
work; the first published source is \citew[\page 62]{Haider93a}):
\eal
\ex[]{
\gll weil sie das Stück heute urauf-führen\\
because they the play today \textsc{prt}-lead\\
\glt `because they are performing the play for the first time today'
}
\ex[*]{
\gll Sie urauf"|führen heute das Stück.\\
they \textsc{prt}-lead today the play\\
}
\ex[*]{
\gll Sie führen heute das Stück urauf.\\
they lead today the play \textsc{prt}\\
}
\zl
The examples show that there is only one possible position for the verb. This order is the one that
is assumed to be the base order.
\item Some constructions allow SOV order only.
Similarly, it is sometimes impossible to realize the verb in initial position when elements like
\emph{mehr als} `more than' are present in the clause \citep{Haider97c,Meinunger2001a}:
\eal
\ex[]{
\gll dass Hans seinen Profit letztes Jahr mehr als verdreifachte\\
that Hans his profit last year more than tripled\\
\glt `that Hans increased his profit last year by a factor greater than three'
}
\ex[]{
\gll Hans hat seinen Profit letztes Jahr mehr als verdreifacht.\\
Hans has his profit last year more than tripled\\
\glt `Hans increased his profit last year by a factor greater than three.'
}
\ex[*]{
\gll Hans verdreifachte seinen Profit letztes Jahr mehr als.\\
Hans tripled his profit last year more than\\
}
\zl
So, it is possible to realize the adjunct together with the verb in final position, but there are
constraints regarding the placement of the finite verb in initial position.
\item Verbs in non"=finite clauses and in finite subordinate clauses with a conjunction are
always in final position (I am ignoring the possibility of extraposing constituents):
\eal
\ex
\gll Der Clown versucht, Kurt-Martin die Ware zu geben.\\
the clown tries Kurt-Martin the goods to give\\
\glt `The clown is trying to give Kurt-Martin the goods.'
\ex
\gll dass der Clown Kurt-Martin die Ware gibt\\
that the clown Kurt-Martin the goods gives\\
\glt `that the clown gives Kurt-Martin the goods'
\zl
The English translation shows that English has VO order where German has an OV order.
\item If one compares the position of the verb in German to Danish\is{Danish} (Danish is an SVO language
like English), then one can clearly see that the verbs in German form a cluster at the end of the sentence,
whereas they occur before any objects in Danish \citep{Oersnes2009b}:
\eal
\label{ex-VO-OV}
\ex
\gll dass er ihn gesehen$_3$ haben$_2$ muss$_1$\\
that he him seen have must\\
\ex
\gll at han må$_1$ have$_2$ set$_3$ ham\\
that he must have seen him\\
\glt `that he must have seen him'
\zl
\item\label{SOV-Skopus}\is{scope|(} The scope relations of the adverbs in (\ref{bsp-absichtlich-nicht-anal}) depend on their order:
the left"=most adverb has scope over the two following elements.\footnote{%
At this point, it should be mentioned that there seem to be exceptions to the rule that modifiers to the left take scope over those to
their right. \citet*[\page47]{Kasper94a} discusses examples such as (i), which go back to \citet*[\page137]{BV72}.
\eal
\label{bsp-peter-liest-gut-wegen}
\ex
\gll Peter liest gut wegen der Nachhilfestunden.\\
Peter reads well because.of the tutoring\\
\glt `Peter can read well thanks to the tutoring.'
\ex
\gll Peter liest wegen der Nachhilfestunden gut.\\
Peter reads because.of the tutoring well\\
\zl
% Kiss95b:212
As \citet[Section~6]{Koster75a} and \citet*[\page67]{Reis80a} have shown, these are not particularly convincing counter"=examples,
since the right sentence bracket is not filled in them: they need not involve normal reordering inside
the middle field, but could instead be cases of extraposition\is{extraposition}.
As noted by Koster and Reis, these examples become ungrammatical if one fills the right bracket and does not extrapose the causal adjunct:
\eal
\ex[*]{
\gll Hans hat gut wegen der Nachhilfestunden gelesen.\\
Hans has well because.of the tutoring read\\
}
\ex[]{
\gll Hans hat gut gelesen wegen der Nachhilfestunden.\\
Hans has well read because.of the tutoring\\
\glt `Hans has been reading well because of the tutoring.'
}
\zl
However, the following example from \citet[\page 383]{Crysmann2004a} shows that, even with the right bracket occupied, one can still have an
order where an adjunct to the right has scope over one to the left:
\ea
\gll Da muß es schon erhebliche Probleme mit der Ausrüstung gegeben haben, da wegen schlechten
Wetters ein Reinhold Messmer niemals aufgäbe.\\
there must it already serious problems with the equipment given have since because.of bad weather a Reinhold Messmer never
would.give.up\\
\glt `There really must have been some serious problems with the equipment because someone like Reinhold Messmer would never give
up just because of some bad weather.'
%\ex Stefan ist wohl deshalb krank geworden, weil er äußerst hart wegen der Konferenz in Bremen gearbeitet hat.
\z
Nevertheless, this does not change anything regarding the fact that the corresponding cases in (\ref{bsp-absichtlich-nicht-anal})
and (\ref{bsp-absichtlich-nicht-anal-v1}) have the same meaning regardless of the position of the verb. The general means of semantic
composition may well have to be implemented in the way suggested by Crysmann.
Another word of caution is in order here: there are SVO languages like French that also have
left-to-right scoping of adjuncts \citep[\page 156--161]{BGK2004a-u}. So the argument above should not be taken as the only
piece of evidence for the SOV status of German. In any case, the analyses of German that have been
worked out in various frameworks explain the facts nicely.
}
This has been explained with the following structures:
\eal
\label{bsp-absichtlich-nicht-anal}
\ex
\gll weil er [absichtlich [nicht lacht]]\\
because he \spacebr{}intentionally \spacebr{}not laughs\\
\glt `because he is intentionally not laughing'
\ex
\gll weil er [nicht [absichtlich lacht]]\\
because he \spacebr{}not \spacebr{}intentionally laughs\\
\glt `because he is not laughing intentionally'
\zl
If one compares (\mex{0}) and (\mex{1}) one can see that scope relations are not affected by verb
position. If one assumes that sentences with verb"=second order have the underlying structure in
(\mex{0}), then this fact requires no further explanation. (\mex{1}) shows the structure for (\mex{0}):
\eal
\label{bsp-absichtlich-nicht-anal-v1}
\ex
\gll Er lacht$_i$ [absichtlich [nicht \_$_i$]].\\
he laughs \spacebr{}intentionally \spacebr{}not\\
\glt `He is intentionally not laughing.'
\ex
\gll Er lacht$_i$ [nicht [absichtlich \_$_i$]].\\
he laughs \spacebr{}not \spacebr{}intentionally\\
\glt `He is not laughing intentionally.'
\zl\is{scope}
%\item Verum-Fokus
\nocite{Hoehle88a,Hoehle97a}
\end{enumerate}\is{verb final language}\is{scope|)}
These properties have been taken as evidence for an underlying SOV order of German. That is, V1 and
V2 sentences are assumed to be derived from or to be somehow related to SOV sentences. It is
possible, though, to license the clause types in their own right without relating them. Such
proposals will be discussed in Chapter~\ref{chap-alternatives}. I assumed such an analysis for ten
years and I think the basic sentence structures can be explained quite well. However, the apparent
multiple frontings, which will be discussed in the next chapter, do not integrate nicely into the
alternative analyses. This caused me to drop my analysis and to revise my grammar in a way that is
inspired by early transformational analyses.
\subsection{German as a language with free constituent order}
\label{sec-free-order-phen}
As was already mentioned in the introduction, German is a language with rather free constituent
order. For example, a verb with three arguments allows for six different orders of the
arguments. This is exemplified with the ditransitive verb \emph{geben} in (\mex{1}):
\eal
\label{ex-free-order}
\ex
\gll {}[weil] der Mann der Frau das Buch gibt\\
{}\spacebr{}because the.\nom{} man the.\dat{} woman the.\acc{} book gives\\
\glt `because the man gives the book to the woman'
\ex
\gll {}[weil] der Mann das Buch der Frau gibt\\
{}\spacebr{}because the.\nom{} man the.\acc{} book the.\dat{} woman gives\\
\ex
\gll {}[weil] das Buch der Mann der Frau gibt\\
{}\spacebr{}because the.\acc{} book the.\nom{} man the.\dat{} woman gives\\
\ex
\gll {}[weil] das Buch der Frau der Mann gibt\\
{}\spacebr{}because the.\acc{} book the.\dat{} woman the.\nom{} man gives\\
\ex
\gll {}[weil] der Frau der Mann das Buch gibt\\
{}\spacebr{}because the.\dat{} woman the.\nom{} man the.\acc{} book gives\\
\ex
\gll {}[weil] der Frau das Buch der Mann gibt\\
{}\spacebr{}because the.\dat{} woman the.\acc{} book the.\nom{} man gives\\
\zl
Adjuncts can be placed anywhere between the arguments, as the examples in (\mex{1}) show.
\eal
\ex
\gll {}[weil] jetzt der Mann der Frau das Buch gibt\\
{}\spacebr{}because now the.\nom{} man the.\dat{} woman the.\acc{} book gives\\
\glt `because the man gives the book to the woman now'
\ex
\gll {}[weil] der Mann jetzt der Frau das Buch gibt\\
{}\spacebr{}because the.\nom{} man now the.\dat{} woman the.\acc{} book gives\\
\glt `because the man gives the book to the woman now'
\ex
\gll {}[weil] der Mann der Frau jetzt das Buch gibt\\
{}\spacebr{}because the.\nom{} man the.\dat{} woman now the.\acc{} book gives\\
\glt `because the man gives the book to the woman now'
\ex
\gll {}[weil] der Mann der Frau das Buch jetzt gibt\\
{}\spacebr{}because the.\nom{} man the.\dat{} woman the.\acc{} book now gives\\
\glt `because the man gives the book to the woman now'
\zl
(\mex{0}) is the result of inserting the adverb \emph{jetzt} `now' into every possible position in
(\mex{-1}a). Of course adverbs can be inserted into each of the other sentences in (\mex{-1}) in the
same way and it is also possible to have several adjuncts per clause in all the positions. (\mex{1})
is an example by \citet[\page 145]{Uszkoreit87a} that illustrates this point:
\ea
\gll \emph{Gestern} hatte \emph{in} \emph{der} \emph{Mittagspause} der Vorarbeiter \emph{in}
\emph{der} \emph{Werkzeugkammer} dem Lehrling \emph{aus Boshaftigkeit} \emph{langsam} zehn
schmierige Gußeisenscheiben \emph{unbemerkt} in die Hosentasche gesteckt. \\
yesterday had during the lunch.break the foreman in the tool.shop the apprentice maliciously slowly ten
greasy cast.iron.disks unnoticed in the pocket put\\
\glt `Yesterday during lunch break, the foreman maliciously and
unnoticed, put ten greasy cast iron disks slowly into the
apprentice's pocket.'
\z
In transformational theories it is sometimes assumed that there is a base configuration from which
all other orders are derived. For instance, there could be a VP including the verb and the two
objects and this VP is combined with the subject to form a complete sentence. For all orders in
which one of the objects precedes the subject, it is assumed that there is a movement process that
takes the object out of the VP and attaches it to the left of the sentence.
An argument that has often been used to support this analysis is the fact that sentences with
reorderings show scope ambiguities which are not present in the base order. The explanation of such
ambiguities comes from the assumption that the scope of quantifiers
can be derived from their position before movement as well as from their position after movement. If
no movement has taken place, only one reading is possible. If movement has taken
place, however, then there are two possible readings \citep[\page ]{Frey93a}:
\eal
\ex
\gll Es ist nicht der Fall, daß er mindestens einem Verleger fast jedes Gedicht anbot.\\
it is not the case that he at.least one publisher almost every poem offered\\
\glt `It is not the case that he offered at least one publisher almost every poem.'
\ex
\gll Es ist nicht der Fall, daß er fast jedes Gedicht$_i$ mindestens einem Verleger \_$_i$ anbot.\\
it is not the case that he almost every poem at.least one publisher {} offered\\
\glt `It is not the case that he offered almost every poem to at least one publisher.'
\zl
The position from which the NP \emph{jedes Gedicht} `every poem' is supposed to be moved is marked
by a trace (\_$_i$) in the example above.
It turns out that approaches assuming traces run into problems, as they predict readings for sentences with multiple traces which
do not in fact exist (see \citealp[\page 146]{Kiss2001a} and \citealp[Section~2.6]{Fanselow2001a}).
For instance in an example such as (\mex{1}), it should be possible to interpret \emph{mindestens einem Verleger} `at least one publisher' at
the position of \_$_i$, which would lead to a reading where \emph{fast jedes Gedicht} `almost every poem' has scope over \emph{mindestens einem Verleger}
`at least one publisher'.
\ea
\gll Ich glaube, dass mindestens einem Verleger$_i$ fast jedes Gedicht$_j$ nur dieser Dichter \_$_i$ \_$_j$ angeboten hat.\\
I believe that at.least one publisher almost every poem only this poet {} {} offered has\\
\glt `I think that only this poet offered almost every poem to at least one publisher.'
\z
This reading does not exist, however.
\is{scope|)}
The alternative to a movement analysis is called \emph{base generation}\is{base generation} in
transformational frameworks. The possible orders are not derived by movement but are licensed by
grammar rules directly. Such a base-generation analysis, that is, the direct licensing of orders without
any additional mechanisms, is the most common analysis in non-transformational frameworks like HPSG \citep{Pollard90a},
LFG \citep{Berman2003a}, Construction Grammar \citep{Micelli2012a}, and Dependency Grammar \citep{Eroms2000a,GO2009a}; I provide such an analysis in Section~\ref{sec-scrambling-analysis}.
\is{constituent order|)}
\subsection{German as a verb second language}
\label{sec-v2-phen}
German is a verb second (V2) language (\citealp[Chapter~2.4]{Erdmann1886a};
\citealp[\page 69, \page 77]{Paul1919a}), that is, (almost) any constituent (an adjunct, subject or
complement) can be placed in front of the finite verb. (\mex{1}) shows some prototypical examples, again involving the ditransitive verb
\emph{geben} `to give':
\eal
\ex[]{
\gll Der Mann gibt der Frau das Buch.\\
the man gives the woman the book\\
\glt `The man gives the woman the book.'
}
\ex[]{
\gll Der Frau gibt der Mann das Buch.\\
the woman gives the man the book\\
\glt `The man gives the woman the book.'
}
\ex[]{
\gll Das Buch gibt der Mann der Frau.\\
the book gives the man the woman\\
\glt `The man gives the woman the book.'
}
\ex[]{
\gll Jetzt gibt der Mann der Frau das Buch.\\
now gives the man the woman the book\\
\glt `The man gives the woman the book now.'
}
\zl
If this is compared with English, one sees that English has XP SVO order, that is, the basic SVO
order stays intact and one constituent is placed in front of the sentence to which it belongs:
\eal
\ex The woman, the man gives the book.
\ex The book, the man gives the woman.
\ex Now, the man gives the woman the book.
\zl
Languages like Danish on the other hand are V2 languages like German but nevertheless SVO languages
(see the discussion of (\ref{ex-VO-OV}) on page~\pageref{ex-VO-OV}). Although the verb in embedded
sentences like (\ref{ex-VO-OV}) precedes the object and follows the subject, in main clauses the finite verb appears
initially and one of the constituents is fronted. The resulting orders are identical to the ones we
see in German.
Examples such as (\mex{1}) show that occupation of the prefield cannot simply be explained as an ordering variant of an element dependent on
the finite verb (in analogy to reorderings in the middle field):
\ea
\gll{}[Um zwei Millionen Mark]$_i$ soll er versucht haben, [eine Versicherung \_$_i$ zu betrügen].\footnotemark\\
\spacebr{}around two million Deutschmarks should he tried have \spacebr{}an insurance.company
{} to defraud\\
\footnotetext{%
taz, 04.05.2001, p.\,20.
}
\glt `He supposedly tried to defraud an insurance company of two million Deutschmarks.'
\z
%
The head that governs the PP (\emph{betrügen} `to defraud') is located inside the infinitive clause. The PP as such is not directly dependent on the finite
verb and therefore cannot have reached the prefield by means of a simple local reordering operation. This
shows that the dependency between \emph{betrügen} and \emph{um zwei Millionen} `around two million
Deutschmarks' is a long distance dependency: an element belonging to a deeply embedded head has been fronted across several phrase boundaries.
Such long distance dependencies are often modeled by devices that assume that there is a position in
the local domain where one would expect the fronted constituent. This is indicated by the \_$_i$,
which is called a gap or a trace. The gap is related to the filler. The alternative to assuming such a gap is
to establish some dependency between the filler and the head on which the filler is dependent. This
is done in Dependency Grammar \citep{Hudson2000a} and in traceless approaches in HPSG
\citep*{BMS2001a} and LFG \citep{KZ89a}. The question is whether
it is reasonable to assume that even simple V2 sentences, that is sentences in which the filler does
not belong to a deeply embedded head, also involve a filler-gap dependency. Approaches that assume
that sentences like (\mex{1}a) are just a possible linearization variant of the verb and its
dependents will have problems in explaining the ambiguity of this sentence. (\mex{1}a) has two
readings, which correspond to the readings of (\mex{1}b) and (\mex{1}c):\todostefan{If reordering in
the MF can have this effect, this argument is void. Vorfeldbesetzung then would be just a formal
reordering. The only argument would then be uniformity.}
\eal
\ex\label{ex-oft-liest-er-das-buch-nicht}
\gll Oft liest er das Buch nicht.\\
often reads he the book not\\
\glt `It is often that he does not read the book.' or `It is not the case that he reads the book
often.'
\ex
\gll dass er das Buch nicht oft liest\\
that he the book not often reads\\
\glt `It is not the case that he reads the book often.'
\ex
\gll dass er das Buch oft nicht liest\\
that he the book often not reads\\
\glt `It is often that he does not read the book.'
\zl
If one assumes that there is a filler-gap dependency in (\mex{0}a), one can assume that the
dependency can be introduced before the negation is combined with the verb or after the
combination. This would immediately explain the two readings that exist for (\mex{0}a). Approaches
that assume that the order is a simple ordering variant of the involved constituents would predict
that (\mex{0}a) has the reading of (\mex{0}c) since (\mex{0}a) and (\mex{0}c) have the same order of
\emph{oft} `often' and \emph{nicht} `not' and the order is important for scope determination in German.
\subsection{Distribution of complementizer and finite verb}
\subsection{Verbal complexes}
\label{sec-vc-phen}
It is common to assume that verb and objects form a phrase in VO languages like English. However,
for languages like German, it seems more appropriate to assume that verbs in the right sentence
bracket form a verbal complex and that this verbal complex acts like one complex predicate when it
is combined with the nonverbal arguments. The following examples support this view. If one
assumed a structure like the one in (\mex{1}a), it would be difficult to explain the ordering of
(\ref{ex-of}), because the auxiliary \emph{wird} `will' is located between two elements
of the verb phrase.
\eal
\ex
\label{ex-uf}
\gll dass Karl [[das Buch lesen] können] wird\\
that Karl \hspaceThis{[[}the book read can will\\
\glt `that Karl will be able to read the book'
\ex
\label{ex-of}
\gll dass Karl das Buch wird lesen können\\
that Karl the book will read can\\
\glt `that Karl will be able to read the book.'
\zl
Furthermore, the sentences in (\mex{1}) are not ruled out by such an analysis, since \emph{das Buch
lesen} `the book read' forms a phrase, which would be predicted to be able to scramble to the left in the
middle field as in (\mex{1}a) or to appear in a so-called pied-piping construction with a relative
clause as in (\mex{1}b).
\eal
\ex[*]{
\gll dass [das Buch lesen] Karl wird\\
that \spacebr{}the book read Karl will\\
}
\ex[*]{
\gll das Buch, [das lesen] Karl wird\\
the book \spacebr{}that read Karl will\\
}
\zl
%
\citet*{HN94a}\ia{Hinrichs}\ia{Nakazawa} therefore suggest that (certain) verbal complements are
saturated before non-verbal ones. This means that, in the analysis of (\ref{ex-uf}) and
(\ref{ex-of}), \emph{lesen} `to read' is first combined with \emph{können} `can' and the resulting
verbal complex is then combined with \emph{wird} `will':
\ea
\gll dass Karl das Buch [[lesen können] wird]\\
that Karl the book \hspaceThis{[[}read can will\\
\z
\emph{wird} `will' can be placed to the right of the embedded verbal complex (as in (\mex{0})), or indeed to the left as
in \pref{ex-of}. After the construction of the verbal complex \emph{lesen können wird}, it is then combined with the
arguments of the involved verbs, that is with \emph{Karl} and \emph{das Buch} `the book'.\footnote{%
This kind of structure has already been suggested by \citet*{Johnson86a} in connection with an
analysis of partial verb phrase fronting.
}
There are also coordination data, such as the example in (\mex{1}), which support this kind of approach.
\ea
\gll Ich liebte ihn, und ich fühlte, daß er mich auch geliebt hat oder doch, daß er mich hätte lieben wollen oder lieben müssen.\footnotemark\\
I loved him and I felt that he me also loved has or \textsc{prt} that he me would.have love want or love must\\
\footnotetext{%
\citep*[\page 36]{Hoberg81a}
}
\iw{müssen}
\glt `I loved him and felt that he loved me too, or at least he would have wanted to love me or would have had to.'
\z
If one assumes that modal verbs form a verbal complex, \emph{lieben wollen} `love want' and
\emph{lieben müssen} `love must' are constituents and as such they can be coordinated in a symmetric
coordination. The result of the coordination can then function as the argument of \emph{hätte}
`had'.
Arguments of the verbs that are part of a verbal complex may be scrambled, as the following example
from \citet[\page 128]{Haider90b} shows:
\ea\label{ex-weil-es-ihr-jemand-zu-lesen-versprochen-hat}
\gll weil es ihr jemand zu lesen versprochen hat\\
because it.\acc{} her.\dat{} somebody.\nom{} to read promised has\\
\glt `because somebody promised her to read it'
\z
\emph{jemand} `somebody' depends on \emph{hat} `has', \emph{ihr} `her' depends on \emph{versprochen}
`promised', and \emph{es} `it' depends on \emph{zu lesen} `to read'. In principle all six
permutations of these arguments are possible again and hence the verbal complex acts like a simplex
ditransitive verb.
\subsection{Partial verb phrase fronting}
\label{sec-pvp-phen}
The left"=peripheral elements of the verbal complex can (in some cases together with the adjacent material
from the middle field) be moved into the prefield:
\eal
\ex
\gll Gegeben hat er der Frau das Buch.\\
given has he the woman the book\\
\glt `He gave the woman the book.'
\ex
\gll Das Buch gegeben hat er der Frau.\\
the book given has he the woman\\
\ex
\gll Der Frau gegeben hat er das Buch.\\
the woman given has he the book\\
\ex
\gll Der Frau das Buch gegeben hat er.\\
the woman the book given has he\\
\zl
Since the verbal projections in (\mex{0}a--c) are partial, such frontings are called \emph{partial verb
phrase frontings}.
\section{The analysis}
\label{sec-analysis-v1-v2}
The following analysis uses Head-driven Phrase Structure Grammar (HPSG) as its main framework \citep{ps2}. It is, of course,
not possible to provide a comprehensive introduction to HPSG here, so a certain acquaintance with the general assumptions and mechanisms is
assumed for the following argumentation. The interested reader may refer to
\citew{MuellerLehrbuch3,MuellerHPSGHandbook} for introductions that are
compatible with what is presented here. In Section~\ref{sec-annahmen}, I will go over some basic assumptions to aid the understanding of the analysis, and
will also show how the relatively free ordering of constituents in the German \emph{Mittelfeld} can be analyzed. In Section~\ref{sec-v1}, I will recapitulate a
verb-movement analysis for verb-first word orderings and in Section~\ref{sec-v2} I discuss the analysis of verb-second sentences. Section~\ref{sec-pred-compl}
will deal with the analysis of predicate complexes and the fronting of partial projections.
\subsection{Background assumptions}
\label{sec-annahmen}
\label{sec-scrambling-analysis}
Every modern linguistic theory makes use of features in order to describe linguistic objects. In HPSG grammars, features are
systematically organized into `bundles'. These bundles correspond to certain characteristics of a linguistic object: for example, syntactic features
form one feature bundle, and semantic features form another. HPSG is a theory about linguistic signs in the sense of Saussure \citeyearpar{Saussure16a}\ia{Saussure}.
The modelled linguistic signs are pairs of form and meaning.
(\mex{1}) shows the feature geometry of signs that I will assume in the
following:
\ea
\ms[sign]
{ phonology & \type{list~of~phoneme~strings}\\
synsem & \onems[synsem]{ local \ms[local]{ category & \ms{ head & \type{head} \\% maj & maj\/ \\} \\
spr & \type{list of synsem-objects} \\
comps & \type{list of synsem-objects} \\
arg-st & \type{list of synsem-objects} \\
} \\
content & \type{cont} \\
} \\
nonlocal \type{nonloc} \\
lex \type{boolean}\\
} \\
}
\z
The value of \textsc{phonology}\is{phonology} is a list of phonological forms.
Usually, the orthographic form is used to improve readability.
\textsc{synsem}\is{feature!synsem@\textsc{synsem}} contains syntactic and semantic information.
The feature \textsc{local}\is{feature!loc@\textsc{loc}} (\textsc{loc}) is so called because the syntactic and semantic
information under this path is the information that is relevant in local contexts. In contrast,
there is, of course, also non-local information.
Such information is contained in the path \textsc{synsem$|$\-nonloc}\is{feature!nonloc@\textsc{nonloc}}. I will expand
on this in Section~\ref{sec-v2}.
%
Information about the syntactic category of a sign (\textsc{category})\is{feature!cat@\textsc{cat}} and information about its
semantic content (\textsc{content})\is{feature!cont@\textsc{cont}} are `local information'.
\textsc{head}\is{feature!head@\textsc{head}}, \spr, \comps,\isfeat{comps} and \argst\isfeat{arg-st} belong to the features which are
included in the path \textsc{synsem$|$\-loc$|$\-cat} in the feature description.
%
The value of \textsc{head} is a feature structure which specifies the syntactic characteristics that a certain lexical sign
shares with its projections, that is, with phrasal signs whose head is the corresponding lexical sign.
%
The \textsc{arg-st} feature provides information about the argument structure\is{valence} of a particular sign. Its value is a list which
includes the elements (possibly only partially specified) with which the sign has to be combined to produce a grammatically
complete phrase. The elements are mapped to valence features like \spr and \comps. I follow
\citet{Pollard90a} in assuming that finite verbs have all their arguments on the \compsl, that is,
there is no difference between subjects and complements as far as finite verbs are concerned. In SVO
languages like English and Danish, the subject is represented under \spr and all other arguments under \comps.
The \lexv has the value $+$ for lexical signs and predicate complexes and $-$ for phrasal projections.\footnote{%
\citet{Muysken82a} suggests a \textsc{min} feature in \xbart which corresponds to the
\textsc{lex} feature. A \textsc{max} feature, in the way that Muysken uses it, is not needed since the
maximality of a projection can be ascertained by the number of elements in its valence list: maximal
projections are completely saturated and therefore have empty valence lists.%
}
The lexical item in (\mex{1}) is the one for \emph{kennt} `knows', a finite form of the verb \emph{kennen} `to know'.
%\begin{figure}[htb]
\eas
Lexical item for \emph{kennt} `knows':\\
\label{le-kennt}
\ms[word]{
phon & \phonliste{ kennt }\\
synsem & \onems{ loc \ms{ cat & \ms{ head & \ms[verb]{ vform & fin \\} \\
spr & \eliste\\
comps & \sliste{ NP[\type{nom}]\ind{1}, NP[\type{acc}]\ind{2} } \\
} \\
cont & \onems{ hook|index \ibox{3}\\
rels \liste{ \ms[kennen]{
arg0 & \ibox{3}\\
arg1 & \ibox{1}\\
arg2 & \ibox{2}\\
} }\\
}\\
}\\
nonlocal \ms{ inherited$|$slash & \eliste \\
to-bind$|$slash & \eliste \\
} \\
lex $+$\\
} \\
}
\zs
%\vspace{-\baselineskip}\end{figure}
%
\emph{kennen} `to know' requires a subject (NP[\type{nom}]) and an accusative object (NP[\type{acc}]).
NP[\type{nom}] and NP[\type{acc}] are abbreviations for feature descriptions which are similar to
(\mex{0}). This requirement is represented on the \argstl, but since this list is identical to the
\compsl for finite verbs, it is not given here. It is in the lexical entry that the syntactic
information is linked to the semantic information. The subscript box on the NPs indicates the
referential index of that particular NP. This is identified with an argument role of the
\emph{kennen} relation. The semantic contribution of signs consists of an index and a list of
relations that are contributed by the sign. The index corresponds to a referential variable for
nouns and to an event variable for verbs. The referential index of a sign is usually linked to its
\textsc{arg0}. I assume Minimal Recursion Semantics (MRS; \citealp*{CFPS2005a}) as the format of the
representation of semantic information. This choice is not important for the analysis of the syntax
of the German clause that is discussed in this chapter and for the analysis of apparent multiple
frontings that is discussed in the following chapter. So the semantic representations are
abbreviated in the following. However, the semantic representation is important when it comes to the
representation of information structure and hence there will be a brief introduction to MRS in
Section~\ref{sec-intro-MRS}.
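The co-indexations in the lexical item above can be illustrated with a toy data structure. The following Python sketch is purely illustrative (it is not an implementation of MRS or of HPSG feature structures); it merely shows how the indices of the two NPs are shared with the argument roles of the \emph{kennen} relation:

```python
# Toy sketch of the linking in the lexical item for kennt 'knows': the
# referential indices of the two NPs are shared with ARG1 and ARG2 of the
# kennen relation, and the verb's own index is the event variable ARG0.
# All names here are illustrative, not an implementation of MRS.
x1, x2, e3 = "x1", "x2", "e3"   # indices (boxes 1, 2 and 3)

kennt = {
    "comps": [{"case": "nom", "index": x1},   # NP[nom], box 1
              {"case": "acc", "index": x2}],  # NP[acc], box 2
    "cont": {
        "index": e3,                          # event variable, box 3
        "rels": [{"pred": "kennen", "arg0": e3, "arg1": x1, "arg2": x2}],
    },
}

rel = kennt["cont"]["rels"][0]
print(rel["arg1"] == kennt["comps"][0]["index"])  # True: subject linked to ARG1
```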
Heads are combined with their required elements by means of a very general rule, which (when applied to the
conventions for writing phrase structure rules) can be represented as follows:
\ea
\label{h-c-regel}
H[\comps \ibox{1} $\oplus$ \ibox{3}] $\to$ H[\comps \ibox{1} $\oplus$ \sliste{ \ibox{2} } $\oplus$ \ibox{3} ]~~~ \ibox{2}
\z
The rule in (\mex{0}) combines an element \iboxb{2} from the \compsl of a head with the head itself.
The \compsl of the head is split into three lists using the relation append ($\oplus$), which splits
a list into two parts (or combines two lists into a new one). The first list is \ibox{1}, the second
list is the list containing \ibox{2} and the third list is \ibox{3}. If the \compsl of the head
contains just one element, \ibox{1} and \ibox{3} will be empty lists, and since the \compsl of the mother is the concatenation of \ibox{1} and \ibox{3}, the \compsl of the mother node will be the empty list. The H in the rule stands for `head'. Depending on the syntactic category that instantiates the rule, the
H can stand for noun, adjective, verb, preposition or another syntactic category.
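The effect of splitting the \compsl with append can also be sketched procedurally. The following Python fragment is an illustrative sketch only (strings stand in for \synsem objects): it enumerates every way of combining a head with one element of its \compsl, where the mother's \compsl is the concatenation of the two remainder lists \ibox{1} and \ibox{3}:

```python
# Illustrative sketch (not part of the HPSG formalism itself) of the rule
#   H[comps 1 (+) 3] -> H[comps 1 (+) <2> (+) 3]  2
# One element 2 is removed from the COMPS list of the head; the mother's
# COMPS list is the concatenation of the remainders 1 and 3.
def head_complement_combinations(comps):
    results = []
    for i, element in enumerate(comps):
        list1, list3 = comps[:i], comps[i + 1:]   # the lists 1 and 3
        results.append((element, list1 + list3))  # mother's COMPS = 1 ++ 3
    return results

# A finite ditransitive verb with all of its arguments on COMPS:
for element, mother in head_complement_combinations(["NP[nom]", "NP[dat]", "NP[acc]"]):
    print(element, "->", mother)
```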
Figure~\vref{abb-weil-er-das-buch-kennt} shows an example analysis for the sentence in (\mex{1}).\footnote{%
In the following figures, H stands for `head', C for `complement', A for `adjunct', F for `filler'
and CL for `cluster'.
}
\ea
\gll weil er das Buch kennt\\
because he the book knows\\
\glt `because he knows the book'
\z
\begin{figure}
\centering
\begin{forest}
sm edges
[V{[\textit{fin}, \comps \eliste]}
[\ibox{1} NP{[\textit{nom}]}
[er;he]]
[V{[\textit{fin}, \comps \sliste{ \ibox{1} }]}
[\ibox{2} NP{[\textit{acc}]}
[das Buch;the book ,roof]]
[V{[\textit{fin}, \comps \sliste{ \ibox{1}, \ibox{2} }]}
[kennt;knows]]]]
\end{forest}
\caption{Analysis of \emph{weil er das Buch kennt} `because he knows the book'}\label{abb-weil-er-das-buch-kennt}
\end{figure}
Grammatical rules in HPSG are also described using feature descriptions. The rule in (\ref{h-c-regel})
corresponds to Schema~\ref{schema-bin}:
\begin{samepage}
\begin{schema}[Head-Complement Schema]
%~\\*
\label{schema-bin}
\type{head-complement-phrase} \impl\\
\ms{
synsem & \onems{ loc$|$cat$|$comps \ibox{1} $\oplus$ \ibox{3}\\
lex $-$\\
}\\
head-dtr & \onems{ synsem$|$loc$|$cat$|$comps \ibox{1} $\oplus$ \sliste{ \ibox{2} } $\oplus$ \ibox{3} \\
}\\
non-head-dtrs & \liste{\onems{ synsem \ibox{2} \\ }}\\[2mm]
}
\end{schema}
\end{samepage}
%
In this schema, the head daughter as well as the non-head daughters are represented as values of features
(as the value of \textsc{head-dtr} and as an element in the list under \textsc{non-head-dtrs}). Since there are
also rules with more than one non-head daughter in HPSG grammars, the value of \textsc{non-head-dtrs} is a
list. The surface ordering of the daughters in signs licensed by these kinds of
schemata is not in any sense determined by the schemata themselves. Special linearization rules, which
are factored out of the dominance schemata, ensure the correct serialization of
constituents. Therefore, Schema~\ref{schema-bin} allows both head-complement and complement-head orderings.
The sequence in which the arguments are combined with their head is not specified by the schema either. The
splitting of the lists with append allows the combination of any element of the \compsl with the head. The only condition on combining a head and a complement is the adjacency of the
respective constituents. It is therefore possible to analyze (\mex{1}) using Schema~\ref{schema-bin}.
\ea
\gll weil das Buch jeder kennt\\
because the book everyone knows\\
\glt `because everyone knows the book'
\z
This is shown in Figure~\vref{abb-weil-das-buch-jeder-kennt}.
\begin{figure}
\centering
\begin{forest}
sm edges
[V{[\textit{fin}, \comps \eliste]}
[\ibox{2} NP{[\textit{acc}]}
[das Buch;the book, roof]]
[V{[\textit{fin}, \comps \sliste{ \ibox{2} }]}
[\ibox{1} NP{[\textit{nom}]}
[jeder;everyone]]
[V{[\textit{fin}, \comps \sliste{ \ibox{1}, \ibox{2} }]}
[kennt;knows]]]]
\end{forest}
\caption{Analysis of \emph{weil das Buch jeder kennt} `because everybody knows the book'}\label{abb-weil-das-buch-jeder-kennt}
\end{figure}
\iboxt{1} and \ibox{3} can be lists containing elements or they can be the empty list. For languages
that do not allow for scrambling, either \ibox{1} or \ibox{3} will always be the empty list. For
instance, English and Danish combine the head with the complements in the order in which the elements are
given in the \compsl. Since \ibox{1} is assumed to be the empty list for such languages, Schema~\ref{schema-bin} delivers the right result. The nice effect of this analysis is that
languages that do not allow for scrambling have more constraints in their grammar (namely the additional
constraint that \ibox{1} = \eliste), while languages with less constrained constituent order have
fewer constraints in their grammar. This should be compared with movement"=based analyses, where less
restrictive constituent order results in more complex analyses.
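The contrast just described can be sketched as follows. This Python fragment is illustrative only; the non-scrambling case is implemented as the additional constraint that the list \ibox{1} is empty, which forces the head to combine with the first element of its \compsl:

```python
# Illustrative sketch of the cross-linguistic difference: non-scrambling
# languages add the constraint that list 1 in the head-complement rule is
# empty, so only the first element of COMPS can be saturated; scrambling
# languages leave list 1 unconstrained.
def saturate_one(comps, scrambling=True):
    pairs = []
    for i, element in enumerate(comps):
        list1, list3 = comps[:i], comps[i + 1:]
        if not scrambling and list1:
            continue  # extra constraint for fixed-order languages: 1 = <>
        pairs.append((element, list1 + list3))
    return pairs

german = saturate_one(["NP[nom]", "NP[acc]"], scrambling=True)
english = saturate_one(["NP[acc]", "PP[to]"], scrambling=False)
print(len(german), len(english))  # 2 options vs. 1 option
```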
This analysis resembles Gunji's analysis for Japanese \citeyearpar{Gunji86a}. Gunji suggests the use of a set-valued
valence feature, which also results in a variable order of argument saturation. For a similar analysis in the terms
of the Minimalist Program, see \citew{Fanselow2001a}. \citet[Section~3.1]{Hoffmann95a-u} and
\citet{SB2006a-u} propose corresponding Categorial Grammar analyses.
In the lexical item for \emph{kennt} `knows' in (\ref{le-kennt}), the meaning of \emph{kennt} is represented as the
value of \textsc{cont}. The Semantics Principle \citep[\page 56]{ps2} ensures that, in Head"=Complement structures, the semantic contribution
of the head is identified with the semantic contribution of the mother. In this way, it is ensured that the meaning of \emph{er das Buch kennt}
is present on the highest node in Figure~\vref{abb-weil-er-das-buch-kennt-semp}. The association with the various arguments is already ensured by
the corresponding co-indexation in the lexical entry of the verb.\footnote{%
The formula $kennen(er, buch)$ is a radical simplification. It is not possible to go into
the semantic contribution of definite NPs or the analysis of quantifiers here. See
\citew*{CFPS2005a} for an analysis of scope phenomena in Minimal Recursion Semantics.%
}
\begin{figure}
\centering
\begin{forest}
sm edges
[V{[\textsc{cont} \ibox{1}]}
[NP{[\textit{nom}]}
[er;he]]
[V{[\textsc{cont} \ibox{1}]}
[NP{[\textit{acc}]}
[das Buch;the book, roof]]
[V{[\textsc{cont}\,\ibox{1}\,kennen{(er, buch)}]}
[kennt;knows]]]]
\end{forest}
\caption{Analysis of \emph{weil er das Buch kennt} `because he knows the book'}\label{abb-weil-er-das-buch-kennt-semp}
\end{figure}
After considering the syntactic and semantic analysis of Head"=Complement structures, I now turn to adjunct
structures.
%
Modifiers are treated as functors in HPSG. They select the head that they modify via the feature \textsc{mod}. The
adjunct can therefore determine the syntactic characteristics of the head that it modifies. Furthermore, it can access
the semantic content of the head and embed it under its own contribution. The analysis of adjuncts will be made clearer by examining
the example in (\mex{1}):
\ea
\gll weil er das Buch nicht kennt\\
because he the book not knows\\
\glt `because he doesn't know the book'
\z
%
\emph{nicht} `not' modifies \emph{kennt} `knows' and embeds the relation $kennen(er, buch)$ under the
negation. The semantic contribution of \emph{nicht kennt} `not knows' is therefore $\neg kennen(er, buch)$.
The lexical entry for \emph{nicht} is shown in (\mex{1}).
\ea
Lexical entry for \emph{nicht} `not':\\
\ms{
cat & \ms{ head & \ms[adv]{ mod & \onems{ loc \onems{ cat$|$head \type{verb}\\
cont \ibox{1}\\
}\\
}\\
}\\
spr & \eliste\\
comps & \eliste\\
}\\
cont & $\neg$ \ibox{1}\\
}
\z
This entry can modify a verb in head-adjunct structures which are licensed by Schema~\ref{ha-schema}.
\begin{samepage}
\begin{schema}[Head-Adjunct Schema]
\label{ha-schema}
\type{head-adjunct-phrase} \impl\\
\ms{
head-dtr & \ms{ synsem & \ibox{2}\\
}\\[2mm]
non-head-dtrs & \liste{ \ms{ synsem$|$loc & \ms{ cat & \ms{ head$|$mod & \ibox{2}\\
spr & \eliste\\
comps & \eliste\\
}\\
}\\
}}\\
}
\end{schema}
\end{samepage}
Pollard and Sag's Semantics Principle ensures that the semantic content in head-adjunct structures
is contributed by the adjunct daughter. Figure~\vref{abb-weil-er-das-buch-nicht-kennt-semp} shows
this analysis in detail.\todostefan{Add tree labels to all figures}
\begin{figure}
\centering
\begin{forest}
sm edges
[V{[\textsc{cont} \ibox{1}]}
[NP{[\textit{nom}]}
[er;he]]
[V{[\textsc{cont} \ibox{1}]}
[NP{[\textit{acc}]}, fit=band
[das Buch;the book, roof]]
[V{[\textsc{cont} \ibox{1}]}
[Adv\feattab{\textsc{mod} \ibox{2} \textsc{cont} \ibox{3},\\
\textsc{cont} \ibox{1} $\neg$ \ibox{3}}
[nicht;not]]
[\ibox{2} V{[\textsc{cont} \ibox{3} kennen{(er, buch)}]}
[kennt;knows]]]]]
\end{forest}
\caption{Analysis of \emph{weil er das Buch nicht kennt} `because he does not know the book'}\label{abb-weil-er-das-buch-nicht-kennt-semp}
\end{figure}
The \textsc{mod} value of the adjunct and the \synsemv of the verb are co-indexed (\iboxb{2}) by the Head"=Adjunct Schema.
Inside the lexical entry for \emph{nicht}, the \contv of the modified verb (\iboxt{3} in Figure~\ref{abb-weil-er-das-buch-nicht-kennt-semp})
is co-indexed with the argument of $\neg$. The semantic content of \emph{nicht} (\ibox{1} $\neg kennen(er, buch)$) becomes the semantic
content of the entire Head"=Adjunct structure and is passed along the head path until it reaches the highest node.
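The interaction of \textsc{mod}, \textsc{cont} and the Semantics Principle can be sketched with a toy representation. The following Python fragment is purely illustrative (the dictionaries are stand-ins for AVMs, not HPSG machinery): the adjunct selects the head via its \textsc{mod} value, embeds the head's \textsc{cont} under its own operator, and the mother's \textsc{cont} comes from the adjunct daughter:

```python
# Toy sketch of head-adjunct semantic composition (names are illustrative).
# The adjunct's MOD value must match the head's category, the adjunct
# embeds the head's CONT under its own operator, and the mother's CONT is
# the adjunct's contribution (Pollard and Sag's Semantics Principle).
def head_adjunct_phrase(adjunct, head):
    assert adjunct["mod"] == head["cat"]      # MOD selects the head
    mother_cont = adjunct["cont"](head["cont"])  # embed head's CONT (box 3)
    return {"cat": head["cat"], "cont": mother_cont}

nicht = {"mod": "verb", "cont": lambda p: ("neg", p)}       # nicht 'not'
kennt = {"cat": "verb", "cont": ("kennen", "er", "buch")}   # kennt 'knows'

result = head_adjunct_phrase(nicht, kennt)
print(result["cont"])  # ('neg', ('kennen', 'er', 'buch'))
```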
After this recapitulation of some basic assumptions, the following section will present a verb-movement analysis for
verb-initial word order in German.
\subsection{V1}
\label{sec-v1}
As is common practice in Transformational Grammar and its successor models (\citealp*[\page 34]{Bierwisch63a}; \citealp{Bach62a}; \citealp{Reis74a};
\citealp[Chapter~1]{Thiersch78a}), I will assume that verb-first sentences have a structure that is
parallel to the one of verb-final sentences and that an empty element fills the position occupied by
the verb in verb-final sentences.\footnote{%
The alternative is to assume flat structures, which allow the verb to be positioned in both initial
and final position \citep{Uszkoreit87a,Pollard90a}, or linearization analyses
\citep{Reape92a,Reape94a,Mueller99a,Mueller2002b,Kathol95a,Kathol2000a}. In linearization analyses, the domain
in which constituents can be permuted is expanded so that, despite the structure being binary branching,
both verb-first and verb-final orderings can be derived.
The differing possibilities will be discussed further in Chapter~\ref{chap-alternatives}.%
}
A radically simplified variant of the transformational analysis of (\mex{1}b) is presented in Figure~\vref{fig-verb-movement-gb}.
\eal
\ex
\gll dass er das Buch kennt\\
that he the book knows\\
\glt `that he knows the book'\label{bsp-dass-er-das-buch-kennt}
\ex
\gll Kennt$_i$ er das Buch \_$_i$?\\
knows{} he the book\\
\glt `Does he know the book?'\label{bsp-kennt-er-das-buch}
\zl
\begin{figure}
\centering
\begin{forest}
sm edges
[CP
[\cnull
[kennt;knows]]
[VP
[NP
[er;he]]
[VP
[NP
[das Buch;the book, roof]]
[\vnull
[\trace]]]]]
\end{forest}
\caption{\label{fig-verb-movement-gb}Analysis of \emph{Kennt er das Buch?} `Does he know the book?' with Move-$\alpha$}
\end{figure}
The verb is moved from verb-final position to C$^0$.\footnote{%
In more recent analyses, the verb is adjoined to C$^0$. While V-to-C-movement analyses work well for German and
Dutch\il{Dutch}, they fail for other V2 languages that allow for the combination of complementizers
with V2 clauses \citep{Fanselow2009b}. This will be discussed in more detail in
Subsection~\ref{sec-v2-germanic}.
} This movement can be viewed as creating a new tree structure out
of another, that is, as a derivation. In the analysis of (\mex{0}b), two trees enter into a relation with each other: the
tree with verb-final ordering and the tree in which the verb has been moved into first position. One can alternatively assume a
representational model where the original positions of elements are marked by traces (see %\citew{McCawley68a}; Pullum2007a:3 sagt, dass das nicht MTS ist
Koster \citeyear[\page ]{Koster78b-u}; \citeyear[\page 235]{Koster87a-u};
%\citealp[\page 66, Fußnote~4]{Bierwisch83a};
\citealp{KT91a}; \citealp[Section~1.4]{Haider93a};
\citealp[\page 14]{Frey93a}; \citealp[\page 87--88, 177--178]{Lohnstein93a-u}; \citealp[\page 38]{FC94a}; \citealp[\page 58]{Veenstra98a}, for example). This kind of representational view
is also assumed in HPSG. In HPSG analyses, verb-movement is modeled by a verb-trace in final position coupled with the percolation of
the properties of the verb trace in the syntactic tree.
In what follows, I discuss another option for modeling verb-movement.
The C-head in Figure~\ref{fig-verb-movement-gb} has different syntactic characteristics from
V$^0$ in verb-final orders: the valence of the verb in final position does not correspond to
the valence of the element in C. The functional head in C is combined with a VP (an IP in several
works), whereas the verb in final structures requires a subject and an object.
In HPSG, the connection between the element in V1-position and the actual verb can be captured by an analysis which
assumes that there is a verb trace in verb-initial structures that has the same valence properties and the same semantic
contribution as an overt finite verb in final position and is also present in the same
position.\footnote{%
In the grammar developed in this book, it is impossible to say that a head follows or precedes its
dependents if the head is empty. The reason is that the head daughter and the non-head daughters
are the values of different features: the head daughter is the value of \textsc{head-dtr} and the
non-head daughters are members of the \textsc{non-head-dtrs} list. It is only the \phonvs of the
daughters that are serialized \citep{Hoehle94a}. So in a structure like [NP$_1$ [NP$_2$ t]] one cannot tell whether
NP$_2$ precedes t or follows it since in the AVM these two objects are just presented on top of each
other and the phonology does not show any reflex of t that would help us to infer the order. Note
however that t has the \initialv `$-$' and hence the phonology of t is appended to the end of the
phonology of NP$_2$. It does not matter whether we append the empty string at the end or at the
beginning of a list, but the \initialv of the head matters when NP$_1$ is combined with [NP$_2$ t]:
the complex phrase [NP$_2$ t] has to be serialized to the right of NP$_1$. If both NP$_1$ and
NP$_2$ contain phonological material, the material contributed by NP$_1$ will precede the material
from NP$_2$. So we will always know that the trace is in a unit that contains other material, and
this unit is serialized as if there were a visible head in it. This means that, despite Höhle's
claim to the contrary, traces can (roughly) be localized in structures.
Note that \citet{GSag2000a-u} represent both head and non-head daughters in the same list. If one
assumes that this list is ordered according to the surface order of the constituents, traces are
linearized.
Traces will be shown in final position in the tree visualizations throughout this book.
}
The element in initial position
is licensed by a lexical rule: this rule licenses a verb that occupies the initial position and selects
a projection of the verb trace. To make this clearer, we will take a closer look at the sentence in (\ref{bsp-kennt-er-das-buch}):
the syntactic aspects of the analysis of (\ref{bsp-kennt-er-das-buch}) are shown in Figure~\vref{fig-verb-movement-syn}.
\begin{figure}
\centering
\begin{forest}
sm edges